Article

Progressing the Development of a Collaborative Metareasoning Framework: Prospects and Challenges

School of Psychology & Humanities, University of Central Lancashire, Preston PR1 2HE, UK
* Author to whom correspondence should be addressed.
J. Intell. 2024, 12(3), 28; https://doi.org/10.3390/jintelligence12030028
Submission received: 12 July 2023 / Revised: 19 January 2024 / Accepted: 13 February 2024 / Published: 1 March 2024
(This article belongs to the Special Issue Metareasoning: Theoretical and Methodological Developments)

Abstract
Metareasoning refers to processes that monitor and control ongoing thinking and reasoning. The “metareasoning framework” that was established in the literature in 2017 has been useful in explaining how monitoring processes during reasoning are sensitive to an individual’s fluctuating feelings of certainty and uncertainty. The framework was developed to capture metareasoning at an individual level. It does not capture metareasoning during collaborative activities. We argue this is significant, given the many domains in which team-based reasoning is critical, including design, innovation, process control, defence and security. Currently, there is no conceptual framework that addresses the nature of collaborative metareasoning in these kinds of domains. We advance a framework of collaborative metareasoning that develops an understanding of how teams respond to the demands and opportunities of the task at hand, as well as to the demands and opportunities afforded by interlocutors who have different perspectives, knowledge, skills and experiences. We point to the importance of a tripartite distinction between “self-monitoring”, “other monitoring” and “joint monitoring”. We also highlight a parallel distinction between “self-focused control”, “other-focused control” and “joint control”. In elaborating upon these distinctions, we discuss the prospects for developing a comprehensive collaborative metareasoning framework with a unique focus on language as a measure of both uncertainty and misalignment.

1. Introduction

The concept of “metareasoning” refers to the metacognitive processes that monitor and control the core cognitive operations associated with reasoning, which are often referred to as “object-level” processes (cf. Nelson and Narens 1990). These processes include those focused on interpreting and understanding available information as well as drawing inferences from it and making decisions about courses of action to take (see Ackerman and Thompson 2017, 2018). Object-level processes can be distinguished from monitoring and control processes that function at a metalevel. At this metalevel, continual monitoring is essential to evaluate the effectiveness of object-level processes in progressing toward good outcomes, including sound inferences and convincing decisions. Metacognitive monitoring is usually experienced subjectively in terms of one’s awareness of a shifting state of certainty regarding how effectively a process is unfolding, as well as one’s perception of how good a resulting outcome seems to be (Ackerman and Thompson 2017, 2018).
Like monitoring processes, control processes similarly function at a metalevel, but serve different fundamental purposes, including: (i) the allocation of essential cognitive resources, such as attention and working memory, to object-level processes; (ii) the termination of object-level processing that is failing to deliver a suitable outcome; (iii) the initiation of new object-level processing when it seems necessary to do so. It is also important to note that, during any reasoning task, there will be a continual interaction between monitoring and control processes to ensure that object-level cognition is maintained and progresses until an outcome is reached. If people feel confident about the efficacy of their object-level processing, then they will be likely to continue pursuing a particular course of action, whereas, if they reach a point where they have low confidence in what they are doing, then they might change their current strategy, ask for help or decide to give up on the task entirely (e.g., see Law et al. 2022 for an in-depth discussion of people’s giving-up behaviour on reasoning tasks).
The “metareasoning framework” that has been developed by Ackerman and Thompson (2017, 2018) represents a key milestone in recent efforts to explain many findings relating to the processes that monitor and control object-level reasoning. This metareasoning framework captures the way in which monitoring processes are sensitive to fluctuating feelings of certainty and uncertainty that are experienced by a reasoner during task performance, as well as the way in which different control processes are evoked in response to heightened levels of certainty or uncertainty. It is important to emphasise, however, that this framework was developed to explain metareasoning as it arises in individual reasoners rather than in groups of reasoners who are collaborating to undertake a joint reasoning task, as might arise, for example, in team problem solving or decision making. The focus of the metareasoning framework on individuals clearly raises the interesting question of whether the framework—or at least some key aspects of it—can be extended to capture the nature of collaborative metareasoning. Addressing this question provides a key motivating rationale for the present paper, which aims to explore the prospects and challenges for progressing the development of a collaborative metareasoning framework.
We begin by highlighting the importance of understanding collaborative metareasoning for theoretical advancement, especially in relation to the monitoring and control processes that arise during team reasoning, problem solving and decision making in real-world contexts. We next overview Ackerman and Thompson’s (2017, 2018) influential metareasoning framework, which is focused on individual reasoners, and we consider potential ways in which this framework can inform an understanding of collaborative metareasoning. In overviewing Ackerman and Thompson’s (2017, 2018) conceptual ideas, we emphasise how their framework focuses primarily on self-report measures of fluctuating levels of certainty and uncertainty. When it comes to an analysis of collaborative metareasoning, however, we argue that there is an opportunity to gain more direct access to monitoring and control processes through a detailed analysis of the individual and joint language that arises in team reasoning contexts. As such, our paper progresses toward a detailed consideration of current research on the metacognitive monitoring and control processes that are critical to the occurrence of effective dialogue during joint action (Pickering and Garrod 2021; see also Gandolfi et al. 2023). We argue that an analysis of team dialogue has the potential to pave the way toward a rich understanding of collaborative metareasoning that has so far been starkly absent in the literature.
Throughout our paper, we continually highlight an important tripartite distinction that was articulated in the context of metareasoning by Richardson et al. (2024; see also Pickering and Garrod 2021) between “self-monitoring” (i.e., an individual’s perception of their own performance), “other monitoring” (i.e., an individual’s perception of the performance of others) and “joint monitoring” (i.e., the unified perception of collective performance). Furthermore, in this paper, we also introduce a parallel tripartite distinction that relates to control processes, which is captured by the notions of “self-focused control” (i.e., an individual’s decisions about how to progress or terminate their own reasoning), “other-focused control” (i.e., an individual’s decisions about how to control the performance of others) and “joint control” (i.e., the unified control of decisions regarding how to advance or terminate collective performance). Our detailed discussion of these distinctions culminates in a consideration of the opportunities for developing a comprehensive collaborative metareasoning framework and the associated challenges.

2. The Importance of Understanding Collaborative Metareasoning

Most existing research on metacognitive monitoring and control has been undertaken with the aim of enhancing educational outcomes (e.g., Cromley and Kunze 2020; Hacker et al. 2009; Perry et al. 2019) and has, therefore, focused on object-level cognition that is related to comprehending and remembering in learning contexts. There has, however, been a limited but nevertheless long-standing interest in metacognitive monitoring and control processes in other areas of research, particularly in situations involving reasoning (e.g., Ackerman and Beller 2017; Quayle and Ball 2000; Thompson et al. 2011, 2013) and problem solving (e.g., Ackerman 2014; Ackerman and Zalmanov 2012; Metcalfe and Wiebe 1987; Topolinski and Reber 2010). Although such research has tended to be almost exclusively laboratory-based, more recent studies have broadened their focus to investigate metareasoning outside of the laboratory. One example is a study by Pervin et al. (2015) that analysed an extensive set of tweets to reveal how the use of hashtags in Twitter appears to be driven by an individual’s metacognitive experiences relating to cognitive load and confusion. Another example is a study by Roberts (2017), which used a community sample to examine the role of metacognitive monitoring and control in the acquisition of skills relating to effective web search and evaluative reasoning. Yet another example is a study by Zion et al. (2015) examining the effects of metacognitive support on metareasoning within an online discussion forum in the context of computer-supported inquiry.
Despite such exceptions, however, there remains a paucity of studies of metareasoning in relation to individuals working in real-world situations, and this dearth of research is likewise seen in relation to metareasoning in collaborating teams engaged in professional work-based practices. The relative absence of studies examining the nature of metareasoning processes during collaboration is unfortunate given that effective team-based performance is so critical in many domains. A few examples include contexts relating to design, innovation, medical diagnosis, security, defence and emergency response.
These latter domains all depend upon collaborative problem solving, reasoning and decision making operating effectively to achieve shared goals. For example, in high-risk defence and security situations, a failure to coordinate team decision making can have highly negative downstream consequences, including loss of life and damage to infrastructure. Recently, the media reported on inherent inadequacies in the response by emergency teams during the 2017 Manchester Arena attack. Three key failures were identified: (i) communication between teams was ineffective; (ii) coordination both within and between teams was weak, as reflected in a failure to identify priorities and resources; (iii) shared situational awareness was poor, in that plans of action were not clearly understood. All three failures relate to inadequate metareasoning. It has been argued (see Saunders 2022) that, with better team collaboration, the negative outcomes of the Manchester Arena attack could have been mitigated.
In emergency response situations such as those arising in the Manchester Arena attack, metacognitive monitoring processes need to be attuned to the subtleties and complexities of uncertainty at both a personal and interpersonal level. In the latter case, uncertainty will typically manifest in the language that arises in team dialogue and cross-team communication, such as the use of tentative language and hedge words (e.g., “maybe”, “perhaps” and “possibly”). Metacognitive control processes, on the other hand, need to effect dynamic strategy change in a coordinated manner (i.e., mutually agreed between members, either tacitly or explicitly) to ensure that successful decisions are made. Within this complex, dynamic communication context there will also be a variety of situational factors that are likely to have an impact on object-level and metalevel processing at both an interpersonal and intrapersonal level, such as whether team members get on well with one another and whether collaborators are perceived to be competent. Recent studies of team-based emergency responding have corroborated the critical role of metacognitive monitoring and control in ensuring effective situation awareness between team members and enhanced team decision making (see Hamilton et al. 2017).
The example of decision making in the context of teams engaged in emergency responding serves to illustrate the potentially critical role played by metareasoning during real-world collaboration. The same kinds of monitoring and control processes that arise in team-based emergency responding will presumably also be involved in any jointly executed activity that involves the attainment of some common goal, whether this involves making a shared decision regarding a course of action to take or generating an agreed approach concerning how to solve a complex problem. Nevertheless, a clear conceptualisation of the metacognitive monitoring and control processes that underpin collaborative problem solving, reasoning and decision making is absent from the literature and, we contend, is in pressing need of development. To advance the formulation of such a collaborative metareasoning framework, we next review the existing literature on metareasoning at the level of the individual, before then taking this useful body of knowledge forward and extending it to contexts involving collaboration.

3. The Metareasoning Framework

The metareasoning framework, proposed by Ackerman and Thompson (2017, 2018; see Figure 1), has sparked considerable research interest in relation to metacognitive monitoring and control processes in domains that involve problem solving, reasoning and decision making. Figure 1 displays the approximate time course of object-level reasoning as well as corresponding metareasoning processes (Ackerman and Thompson 2017, 2018). The left side of the figure represents the object-level processes involved in reasoning, whilst the middle column depicts the associated monitoring processes. Monitoring processes reflect a reasoner’s subjective evaluation of the probability of success or failure in relation to a given task or problem (e.g., their confidence in the unfolding process), which can occur before, during or after object-level processing.
Crucially, monitoring processes continually track fluctuating levels of certainty and uncertainty related to ongoing task performance or solution success. Within this metareasoning framework, monitoring processes are viewed as happening in the background and as having the capacity to “trigger” control processes. The right-hand column of Figure 1 depicts such control processes, which serve to allocate and redistribute resources in response to the outcome of ongoing monitoring. For example, if intermediate confidence is high, individuals will be likely to continue with their current course of action. In contrast, if they experience low–intermediate confidence or a feeling of uncertainty, they may activate control processes to switch their current strategy or, alternatively, they may give up.
The object-level processes that are shown in Figure 1 include ones that are associated with problem understanding and goal identification, as well as ones that generate an initial, autonomous response to the task at hand or that involve reasoning about the task analytically. Moreover, the object-level reasoning that is portrayed in Figure 1, whereby intuitive processing is followed by analytic processing, aligns with a “dual-process” architecture that is based on “default-interventionist” principles (Evans and Stanovich 2013a, 2013b). According to such an architecture, reasoning is considered to involve two qualitatively distinct types of processes, referred to as Type 1 and Type 2. Type 1 processes are viewed as being intuitive, heuristic and associative in nature, and are defined in terms of being relatively undemanding of working-memory resources as well as autonomous (running to completion whenever they are cued). Correlated features of Type 1 processes are that they tend to be high-capacity, rapid, nonconscious and capable of running in parallel. In contrast, Type 2 processes are reflective, deliberate, analytic and controlled, and are defined in terms of requiring working memory resources and having a focus on hypothetical thinking. Correlated features of Type 2 processes include their tendency to be slow, capacity-limited, serial and conscious. Furthermore, Type 2 processes are less prone to biases in comparison to Type 1 processes, although they are not invulnerable to them (e.g., see Evans 2018; Evans and Stanovich 2013a, 2013b), which may arise, for example, from the application of inadequate or inappropriate analytic operations, referred to as “defective mindware” (e.g., Stanovich 2018).
Assuming a default-interventionist version of the metareasoning framework, one key function of metacognitive monitoring is to trigger a strategic shift from default Type 1 to analytic Type 2 processing so that analytic processes can intervene to determine the accuracy of the default response. To illustrate this shift from Type 1 to Type 2 processing, consider the bat-and-ball problem that is one of the items in the Cognitive Reflection Test (Frederick 2005; Kahneman and Frederick 2002). This problem reads as follows: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?” This problem might seem easy at first sight, yet many highly intelligent individuals generate an incorrect solution (e.g., Campitelli and Gerrans 2014; Pennycook et al. 2016; Stupple et al. 2017). Incorrect answers are attributed to a dominant, intuitive Type 1 response, whereby the answer that comes to mind rapidly and easily (i.e., 10 cents) is, in fact, wrong. When producing intuitive but incorrect responses, individuals may experience a low feeling of rightness (Thompson et al. 2011, 2013; see also Bago and De Neys 2017), which for some people will trigger a shift to Type 2 analytic reasoning, such that they might then be able to determine the correct answer.
Although Ackerman and Thompson’s (2017, 2018) metareasoning framework is couched in dual-process terms that capture default-interventionist principles, they stress that the logic of monitoring processes, such as those that register a “feeling of rightness” (see Figure 1), also extends to single-process reasoning theories that do not propose two types of processing (e.g., Kruglanski and Gigerenzer’s (2011) “Unified Theory of Judgment”). Moreover, the logic also pertains to theories of reasoning that propose the existence of multiple, parallel processes rather than sequential ones (e.g., see Bago and De Neys 2017). Whatever model of object-level reasoning one favours, it remains important to understand when, why and how people engage in more deliberate, reflective thinking. This, in turn, speaks to the importance of gaining insight into the way that people monitor rapid, initial answers, regardless of the type of reasoning mechanisms that are proposed to underlie their generation. The flexibility of the metareasoning framework to accommodate different theoretical perspectives on object-level reasoning is certainly a key strength.

4. Research Questions in Metareasoning Research

Ackerman and Thompson’s (2017, 2018) metareasoning framework provides a rich and productive theoretical foundation for research on reasoning and metareasoning. Nevertheless, the framework is primarily just a starting point for further studies and conceptual developments in this area. The need for an extensive amount of further research was directly acknowledged by Ackerman and Thompson (2017), who presented a series of research questions that are now increasingly being addressed by several international teams. Among these research questions are ones that are concerned with whether reasoners have a degree of insight into the sources of certainty that affect their judgments, whether metareasoning processes are shaped by culture and whether reasoning performance can be enhanced through interventions that are targeted at improving metareasoning skills.
As we have also noted above, a critical area of development for the metareasoning framework relates to its extension to capture collaborative metareasoning rather than just individual metareasoning. Little research has been conducted on this topic to date, leaving many questions wide open for further research and conceptual advancement. However, before we move on to explore some of the relevant challenges and opportunities relating to the development of an understanding of collaborative metareasoning, we first describe some important ways in which Ackerman and Thompson’s current framework has recently been extended. Doing this will allow us to introduce important ideas that will help inform our subsequent discussion of the complex issues that a collaborative metareasoning framework will need to tackle if it is to provide even the semblance of an account of how metareasoning operates in team contexts.

4.1. The Underpinning Basis of Metacognitive Certainty and Uncertainty

One critically important research question that needs to be addressed for an advancement of metareasoning—whether in situations that involve individual or team reasoning—concerns the underpinning basis of the fluctuating feelings of certainty and uncertainty that are experienced during task-based processing. One of the most pervasive cues that has been suggested to elicit such feelings is the perceived ease of task-oriented processing, which is a cue that is typically referred to as “processing fluency” (e.g., Alter and Oppenheimer 2009; Unkelbach and Greifeneder 2013). In essence, an answer or solution that comes to mind quickly and easily (i.e., with high processing fluency) gives rise to a strong feeling of rightness as well as to a heightened assessment of final confidence (Ackerman and Zalmanov 2012; Thompson and Morsanyi 2012; Thompson et al. 2013).
Processing fluency is usually indexed as the time that has elapsed from when a problem is displayed until a participant provides their response, with reasoning and problem-solving tasks typically showing a negative correlation between response time and metacognitive confidence judgments, reflecting the fact that easy items are processed more quickly than difficult items (e.g., Ackerman 2014; Ackerman and Zalmanov 2012; Baars et al. 2013). The influence of processing fluency on confidence has also been observed in studies of metacognition and memory. For example, Undorf and Erdfelder (2015) found that processing fluency—measured in terms of self-paced study time for to-be-remembered items—partially mediated the effect of item ease/difficulty on the judged probability of recalling the target (indexing “judgment of learning”). It is also important to note that the frequently observed association between processing fluency and confidence seems not merely to be correlational, but rather reflects a causal mechanism, whereby the ease of information processing directly induces a high degree of confidence in the correctness of a response (e.g., Topolinski and Reber 2010).
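To make this standard fluency index concrete, the following minimal Python sketch computes the trial-level correlation between response time and confidence for a single participant. The data are entirely hypothetical and serve only to illustrate the typically observed negative relationship.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical trial-level data for a single participant:
# response time in seconds (the fluency index) and a 0-100
# confidence rating for each of eight problems.
response_times = np.array([4.2, 6.8, 3.1, 9.5, 5.0, 12.3, 2.7, 7.9])
confidence = np.array([85, 60, 90, 40, 70, 35, 95, 55])

# The typical finding: faster (more fluent) responses attract
# higher confidence, yielding a negative correlation.
r, p = pearsonr(response_times, confidence)
print(f"RT-confidence correlation: r = {r:.2f}, p = {p:.3f}")
```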
Other recent research on people’s metacognitive judgments in relation to memory tasks has been influential in shedding further light on the sources of information that people use in addition to processing fluency when making metacognitive judgments. This research has been especially valuable in revealing how people appear to integrate multiple cues that stem from the same stimulus item when making judgments of learning (e.g., Undorf and Bröder 2021; Undorf et al. 2018). In such studies, Brunswik’s lens model (e.g., see Kaufmann 2022) has been used to assess the overall validity of confidence judgments based on a combined set of cues. As Ackerman (2023) notes, however, this research leaves open important questions regarding the relative weights of such cues and how these weights change across task designs (e.g., different instructions) and population characteristics (e.g., different levels of prior knowledge). Ackerman (2023) presents a novel statistical approach, the “Bird’s-Eye View of Cue Integration” (BEVoCI) methodology, which she argues can reveal not only the cues that determine metacognitive judgments and task success, but also the relative weights of these cues and their malleability across task designs and populations.
Ackerman’s (2023) experiments using the BEVoCI method provide a wealth of findings regarding the multiplicity of weighted cues that underpin confidence judgments and solution accuracy in problem solving as well as the way in which these cues can dissociate in determining these outcome measures. Ackerman (2023) reports that people’s judgments of confidence are consistently oversensitive to processing fluency (as indexed by response time), which is a phenomenon that is well-established in the metareasoning literature (e.g., Ackerman and Zalmanov 2012). We see considerable value in cue-integration methods for advancing an understanding of the information that people draw upon when providing confidence judgments regarding final solutions to reasoning problems. Nevertheless, whether such methods can also be successfully extended to predict the cues that underpin judgments of intermediate confidence, such as people’s feeling of rightness, is yet to be determined.
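While we cannot reproduce the BEVoCI methodology itself here, the general logic of lens-model-style cue integration can be illustrated with a simple standardised regression. The sketch below is a hypothetical simulation, not Ackerman’s (2023) actual procedure: confidence is generated from three made-up cues, and the recovered standardised weights indicate each cue’s relative contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical pooled trials

# Three made-up cues that might feed confidence judgments.
response_time = rng.gamma(2.0, 3.0, n)   # fluency proxy (seconds)
familiarity = rng.uniform(0, 1, n)       # self-rated familiarity
length = rng.integers(5, 60, n)          # problem length (words)

# Simulated confidence that is dominated by fluency, echoing the
# oversensitivity to response time reported by Ackerman (2023).
confidence = (80 - 3.0 * response_time + 10 * familiarity
              - 0.1 * length + rng.normal(0, 5, n))

def z(x):
    """Standardise a cue so that weights are comparable."""
    return (x - x.mean()) / x.std()

X = np.column_stack([z(response_time), z(familiarity), z(length),
                     np.ones(n)])
betas, *_ = np.linalg.lstsq(X, z(confidence), rcond=None)
for name, b in zip(["response_time", "familiarity", "length"], betas):
    print(f"standardised weight for {name}: {b:+.2f}")
```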

4.2. Methods for Eliciting Judgments of Metacognitive Certainty and Uncertainty

A further, important research question concerning metareasoning in both individual and team contexts relates less to the issue of the cues that drive confidence judgments, and more to the question of how best to elicit confidence judgments from participants in the first place. As can be seen in Figure 1, confidence judgments can be elicited from participants at various time points, including, for example, initial judgments of solvability, intermediate feelings of rightness or confidence and final judgments of confidence or solvability. Our primary focus in this paper is on the best way to elicit intermediate confidence judgments at various points during the reasoning process, as this represents a critical measurement issue for understanding metacognitive monitoring in both individual and team reasoning contexts.
In problem-solving research, the traditional method that is used to obtain ephemeral, subjective judgments of certainty from a reasoner necessitates intermittently probing them for a rating of their current degree of confidence regarding the likelihood of finding a solution to a given problem. In research with individual problem solvers, this kind of metacognitive probe has often been described as indexing a reasoner’s current “feeling of warmth”; that is, their sense that a solution is imminent (Metcalfe 1986; Metcalfe and Wiebe 1987; see also Hedne et al. 2016; Kizilirmak et al. 2018; Laukkonen and Tangen 2018). However, translating a fleeting feeling of warmth into a response on a numerical or categorical scale is a challenging requirement for participants who are mid-task, and who must therefore simultaneously maintain primary task performance while also reporting their current metacognitions about likely task success.
It is also important to consider the possibility that the temporary cessation of primary task processing while generating a metacognitive judgment might have a reactive effect on ongoing object-level reasoning, changing its natural dynamics, trajectory, and outcome. There is only a relatively limited body of experimental evidence that has directly addressed the reactive effect of probing confidence on individual object-level reasoning, with much of the research tending to examine the reactivity of eliciting retrospective confidence judgments on decision time and performance accuracy in studies involving repeated trials. Intriguingly, the relevant studies that have been conducted to date provide evidence both for the idea that metacognitive confidence judgments are reactive (e.g., Baranski and Petrusic 2001; Bonder and Gopher 2019; Double and Birney 2017, 2018, 2019a; Lei et al. 2020; Petrusic and Baranski 2003; Schoenherr et al. 2010), as well as against this idea (e.g., Ackerman 2014; Ackerman and Goldsmith 2008; Ackerman et al. 2020).
We do not wish to be drawn into a detailed consideration of the potential reasons for the discrepant findings in the extant literature regarding the reactivity of confidence judgments, as the matter seems to be a long way from being resolved. Much more empirical work and conceptual development will be required to understand the conditions under which reactivity arises and the severity of its impact on performance, whether assessed in terms of processing time or outcome quality (for relevant discussion see Double et al. 2018; Double and Birney 2019b). In the absence of a solid empirical and theoretical foundation from which to make definitive predictions as to when and why some metareasoning studies show reactivity whilst others do not, we would argue that it seems judicious to conduct a parallel line of research that measures confidence using alternative methods. We also contend that this argument pertains as much, if not more, to team-based, rather than individual, reasoning situations. This is because reactivity arising from continually interrupting team members to elicit confidence judgments during their ongoing collaboration could impact not only the object-level processing of individual team members but also the whole team dynamic.
What, though, might alternative methods for eliciting confidence judgments entail such that they can provide a nonreactive measure of fluctuating certainty and uncertainty as an individual or team progresses from initial problem understanding toward final solution generation? In individual reasoning contexts, the use of think-aloud methods (e.g., Ericsson and Simon 1980), where solitary participants verbalise whatever is currently passing through their minds, may seem like a way forward to obtaining a nonreactive, dynamic index of fluctuating levels of confidence. This is because spoken language can provide an excellent source of rich information about a person’s changing states of uncertainty, which can be manifest through their use of hedge words (e.g., “maybe”, “perhaps” and “possibly”). Of course, the think-aloud method cannot be deployed to assess fluctuating uncertainty during team-based reasoning, as it is a technique that is only appropriate for use with individual reasoners, which immediately limits its utility. However, it may even be the case that the method is ill-suited to providing a valid, dynamic index of uncertainty in individual reasoning, given long-standing concerns about the potential for thinking aloud itself to have a reactive effect on object-level reasoning, potentially changing natural processing in profound ways (e.g., see Godfroid and Spino 2015).
The reactive effect of thinking aloud is exemplified by the phenomenon of “verbal overshadowing”. This arises when participants are asked to verbalise their thoughts while attempting problems whose solution discovery benefits from restructuring processes that give rise to feelings of “insight”, with these restructuring processes operating primarily at an unconscious level. Pioneering research by Schooler et al. (1993; see also Schooler and Melcher 1995) revealed that thinking aloud hindered the attainment of insightful solutions via unconscious restructuring processes. This effect appears to arise because the request to think aloud diverts the problem solver’s attention toward strongly activated and obvious aspects of the problem that can easily be verbalised but are irrelevant to its solution (e.g., Bowden et al. 2005). In addition, more weakly activated information that is critical for solution success, but which resides at a level below awareness, is unable to enter consciousness because it is blocked or overshadowed by the stronger and reportable—albeit misdirected—information (cf. Ball et al. 2015; Kershaw and Ohlsson 2004; Siegler 2000). Although this verbal overshadowing effect is not always found in studies of insight problem solving, Ball et al. (2015) have suggested that occasional failures to replicate may be attributable to methodological differences between studies.
Considering the potential reactivity that can be engendered through the deployment of the think-aloud method in metareasoning research, the question arises as to whether alternative and less reactive techniques are available to detect fluctuating states of uncertainty during ongoing reasoning. Some possible options that come to mind include the use of eye-gaze tracking (e.g., Ball 2013), pupillometry (e.g., Mathôt and Vilotijević 2022) and eye-blink rate (Paprocki and Lenskiy 2017), as well as physiological measures, such as skin conductance (Figner et al. 2019) and heart-rate variability (Forte et al. 2019). The challenge with these methods, however, is that they have not traditionally been used to pinpoint states of uncertainty during task performance, instead being more closely associated with measures of fluctuating cognitive workload, arousal, stress and fatigue. It may well be that these methods can be deployed in a way that can index states of uncertainty during reasoning, but we are not aware of research that has attempted to do this to date, and it might be that the challenges to do so are simply insurmountable.
Recently, however, one interesting method that has been successfully deployed to measure continuous changes in feeling-of-warmth states during problem solving involves the use of a so-called “dynamometer” (Laukkonen et al. 2021). This involves acquiring a continuous measure of fluctuating hand-grip strength from participants who are instructed to use grip intensity to indicate changing feelings of perceived progress toward a solution to a problem that can typically be solved through an unconscious restructuring process. In Laukkonen et al.’s (2021) study, participants were instructed to squeeze the dynamometer more strongly to convey greater perceived progress, and they were also asked to give the dynamometer a full-strength squeeze if they had found the solution via insight and experienced an Aha! moment, or to release their grip quickly if they had reached the solution without an Aha! moment.
The dynamometer allowed Laukkonen et al. (2021) to acquire multiple data points per second in real time in relation to the onset of insight experiences, additionally enabling them to map such metareasoning data to other measures (i.e., solution accuracy and solution confidence) to investigate convergent validity, which was found to be high. For example, “spikes” in the dynamometer converged with participants verbally reporting after the problem-solving trial that they had experienced an Aha! moment. We note, however, that Laukkonen et al. (2021) did not directly test for an absence of reactivity on problem-solving performance arising from the deployment of the dynamometer, which is a key limitation of their study. Nevertheless, they confirm that problem-solving success rates in their study were comparable to, or better than, those observed for similar tasks in other studies of insight problem solving (i.e., Salvi et al. 2016; Webb et al. 2016, 2018), supporting an apparent lack of reactivity arising from participants having to use the dynamometer to register feelings of progress.
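As an illustration of how such continuous grip-strength data might be processed, the short Python sketch below detects abrupt, high-amplitude “spikes” in a simulated dynamometer trace. It is a toy reconstruction under our own assumptions, not Laukkonen et al.’s (2021) actual analysis pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

# Simulated two-minute grip-strength trace sampled at 10 Hz
# (arbitrary units), standing in for a dynamometer recording.
rng = np.random.default_rng(1)
t = np.linspace(0, 120, 1200)
grip = 0.2 + 0.1 * rng.normal(size=t.size).cumsum() / 50

# Inject one abrupt full-strength squeeze, the hypothetical
# signature of an Aha! moment.
grip[900:920] += np.linspace(0, 1.5, 20)

# Flag large, abrupt excursions as candidate insight "spikes".
peaks, _ = find_peaks(grip, height=1.0, prominence=0.8)
for p in peaks:
    print(f"candidate insight spike at t = {t[p]:.1f} s (grip = {grip[p]:.2f})")
```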
Overall, Laukkonen et al. (2021) suggest that the dynamometer may be a useful tool in any context where researchers are interested in continually measuring metacognition, such as in situations that involve problem solving and insight. The nonverbal nature of the dynamometer presumably makes it far less likely that it will interfere with the demands of primary task processing, such that it should have minimal reactive impact on natural object-level reasoning. In addition, the technique could presumably be utilised with multiple, collaborating problem solvers (each using a dynamometer), although the use of a hand-held device certainly limits the feasibility of it being deployed anywhere other than in relatively simple individual or collaborative reasoning and problem-solving situations. More complex problems of the type that arise in real-world contexts would require free hand movement for activities such as gesturing, writing and collaborative interaction.
Notwithstanding the potential value of deploying dynamometers to tap into people’s intermediate uncertainty in collaborative situations, we would argue that the most straightforward yet informative way to obtain a valid, nonreactive measure of fluctuating states of uncertainty during collaborative task performance involves analysing the dynamic use of language by team members. For example, tracking and analysing the dialogue arising between team members working on a joint task could provide an exciting window into moments of both individual uncertainty as well as emergent uncertainty at the level of the whole team. We will consider the value of language markers of metacognitive states in the remaining sections of this paper, as we reflect more deeply on the potential to develop a collaborative metareasoning framework that captures the nature of team-based metacognitive monitoring and control.

5. Metareasoning in Teams

As discussed above, we contend that one of the most striking and important limitations of the existing metareasoning framework relates to its sole focus on individuals. In this respect, the framework, as currently formulated, gives no consideration to the fact that much real-world reasoning takes place in situations that involve others, such as reasoning that is in the service of cooperative or competitive goals, which themselves are underpinned by many processes, including coordination, conflict resolution, deception, argumentation and persuasion. Indeed, the persuasive function of everyday reasoning is a central tenet of the theoretical approach advanced by Mercier (2016), which is focused on the critical role that is played by argumentation in social communication. Mercier (2016; see also Mercier and Sperber 2011) argues that human reasoning has evolved to enable people to devise arguments and justifications such that they can reap the benefits that come from persuading others. However, the issue of how metareasoning might function in situations that relate to persuasion does not appear to have been considered in any detail to date.
Along with situations that involve persuading others, another context in which reasoning does not operate in isolated individuals is that which is concerned with the attainment of shared or mutual goals, such as the achievement of coordinated joint action or the generation of solutions to complex problems that require collaboration between members of heterogeneous teams. Examples of such team-based reasoning and problem-solving situations are numerous in real-world professional practice. Many of these situations are focused on creative endeavours, such as design, innovation, entrepreneurship, advertising and scientific discovery, yet there are other contexts that are also important, as we have mentioned earlier, such as ones relating to defence, security and emergency response. Overall, there remains a dearth of research that has directly investigated metareasoning in these various kinds of collaborative contexts, such that our current understanding of the nature and operation of metareasoning processes in real-world collaborative activities is highly inadequate.
Notwithstanding the generally limited empirically based understanding that exists of collaborative metareasoning, we note that one of the few team-based domains that has been subjected to at least some degree of investigation from a metareasoning perspective is that of real-world design, where several studies have examined the monitoring and control processes that arise when designers are developing creative concepts for new products (for reviews of this literature, see Ball and Christensen 2019, and Richardson et al. 2024). A striking observation from this latter body of research relates to the way in which changes in processing are found to co-occur with the appearance in dialogue of hedge words that reflect uncertainty among team members. For example, when faced with uncertainty, designers often appear to be triggered to engage in “analogical reasoning”, during which they draw upon conceptual ideas from a domain that is different to that of the problem focus and map these ideas across to the current domain (Ball and Christensen 2009; Ball et al. 2010). Likewise, uncertainty that arises in design contexts also seems to evoke “mental simulation”, whereby designers enact or “run” a sequence of interdependent events in a dynamic mental model to determine cause–effect relationships (e.g., between solution components) to predict likely outcomes (Ball and Christensen 2009; Ball et al. 2010; Christensen and Schunn 2009).
Ball and Christensen (2009, 2019) have proposed that cases of analogical reasoning and mental simulation in team design reflect strategies that are under metacognitive control, being triggered by emerging uncertainty in the team regarding how to progress toward a design solution. When deployed, these strategies support ongoing design progress while also reducing uncertainty within the team to baseline levels. Outside of the design domain, Chan et al. (2012) have corroborated the existence of a close temporal coupling between uncertainty and the use of analogical reasoning in the context of team-based scientific problem solving. In their research, Chan et al. revealed how expressed uncertainty in team dialogue (similarly indexed through the presence of hedge words) tended to increase prior to episodes of analogical reasoning, subsequently staying at a high level during the analogising process, then returning to a baseline level just after the analogising had terminated. Heightened uncertainty in design teams has also been closely associated with strategic episodes of “problem–solution coevolution” (Wiltschnig et al. 2013). This occurs when designers simultaneously refine both their understanding of the design problem and their ideas relating to potential solutions, such that problem understanding informs solution development, as well as vice versa (Dorst and Cross 2001). Wiltschnig et al. (2013) demonstrated that this kind of problem–solution coevolution was more likely to arise during heightened uncertainty in the team-based dialogue and was also linked to increased analogical reasoning.
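The temporal coupling reported by Chan et al. (2012) lends itself to a simple windowed analysis. The sketch below, using entirely hypothetical segment-level hedge counts, compares mean uncertainty before, during and after a single coded analogy episode; a real analysis would, of course, aggregate over many episodes and test the pattern statistically.

```python
# Hypothetical coded transcript: hedge-word counts per dialogue
# segment, with segments 4-7 coded as an analogical-reasoning
# episode. The pattern mimics Chan et al. (2012): uncertainty
# rises approaching the episode, stays high during it, and then
# returns to baseline.
hedge_counts = [1, 0, 1, 2, 4, 5, 4, 3, 1, 0, 1]
episode = range(4, 8)

before = hedge_counts[episode.start - 3:episode.start]
during = hedge_counts[episode.start:episode.stop]
after = hedge_counts[episode.stop:episode.stop + 3]

for label, window in [("before", before), ("during", during),
                      ("after", after)]:
    print(f"mean hedges {label} episode: {sum(window) / len(window):.2f}")
```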
This research on collaborative metareasoning in team design highlights the critical role that dialogue plays in team interaction to enable teams to make effective control decisions and maintain progress with their design activity. Importantly, such overt dialogue (such as comments that express uncertainty) is not only discernible to interlocutors within collaborating teams but is also visible to researchers interested in investigating the monitoring and control processes that underpin both successful and unsuccessful team activity. In such situations, not only do metacognitive monitoring processes need to be alert to the shifting uncertainty that is salient in the communication of team members, but metacognitive control processes also need to effect dynamic strategy selection and strategy change in a highly coordinated manner if there is to be any hope of problem-solving success. We reiterate, however, that, to date, there has been little direct investigation of such “collaborative metareasoning” in team-based activity. We contend that this lack of research is, in large part, a consequence of the methodological challenges that arise when investigating the dynamic interplay between multiple interlocutors and their associated metareasoning processes.

6. Toward a Framework of Collaborative Metareasoning

Understanding the nature of cooperative joint activities has long presented a challenge for cognitive science. Almost always, the default research approach is to analyse thought and behaviour at the level of the individual, and this is no different in the field of metareasoning. Yet, in team contexts, successful collaboration depends upon joint action and shared understanding between interlocutors. As we noted in our discussion of the metareasoning of individuals, one of the primary cues to judgments of certainty and uncertainty in relation to task-based progress is that of processing fluency; that is, the ease of generating a response. In collaborative contexts, such processing fluency has both an individual dimension, arising as part of each team member’s cognitive processing, and an interpersonal dimension, arising at the level of joint processing. Interpersonal fluency is likely to rely on a variety of cues, some of which will be unique to the interpersonal context. Examples of such cues include background knowledge, familiarity with other team members, perceptions of trust and competence as well as perceived confidence in other team members. Successful team reasoning therefore requires team members to monitor both the individual and joint dimensions of fluency at various points in time, with such dynamic monitoring having the potential to impact procedural decisions to continue with a current processing approach, to switch to a different strategy or potentially to terminate processing if a good solution is not forthcoming.
The importance of team members being able successfully to monitor interpersonal fluency brings to the fore a key issue for research on collaborative metareasoning, which is to investigate and understand the role of alignment and misalignment at this interpersonal level. For example, people tend to find interactions easier and more enjoyable when they like and know other team members, regardless of their individual or joint success with the problem at hand (Richardson et al. 2019). Alignment in interpersonal fluency may play a crucial role in team perceptions of progress on a given task, and hence is likely to be influential in procedural decisions to carry on without making strategy changes. What is generally missing from existing theorising, however, is a model that refers to the interrelationships between individuals in navigating interpersonal fluency from the perspective of the whole team rather than just from the perspective of the individual members who make up the team. As noted, such a model would need to capture the way in which interpersonal fluency is monitored by team members as well as the impact of ongoing fluency dynamics on team members’ control processes in determining strategic decision making.
We suggest that moving from metareasoning considerations at the individual level to considerations at the social and interpersonal level, whether reasoning is serving competitive goals (as in persuasion or deception) or cooperative goals (as in collaborative reasoning and problem solving), will require very careful and highly systematic augmentation of current models of metacognition, such as Ackerman and Thompson’s (2017, 2018) metareasoning framework. Even a cursory assessment of how metareasoning might take place in team-based reasoning indicates the complexity of the theoretical issues that immediately become foregrounded. As we have noted, what is particularly fascinating, yet poorly understood, is the way in which individual members of teams invoke and coordinate both individual-level and team-level metacognitive monitoring and control processes to drive forward their joint action and strategic decision making. The metacognitive monitoring and control processes that team members need to engage will presumably have to be attuned to the subtleties and complexities of both personal and interpersonal uncertainty, with the latter most likely being made manifest in the verbal and nonverbal communication arising during team interaction.
As mentioned in our introduction, an important tripartite distinction can be drawn in relation to collaborative metareasoning between “self-monitoring” (i.e., an individual’s perception of their own performance), “other monitoring” (i.e., an individual’s perception of the performance of others) and “joint monitoring” (i.e., the unified perception of collective performance), as articulated by Richardson et al. (2024; see also Pickering and Garrod 2021). When considering this tripartite distinction specifically in relation to fluctuating states of certainty in reasoning contexts, it can be seen how the distinction gives rise to concerns not only with an individual’s own feeling of confidence in a solution to a reasoning task but also that individual’s awareness of a collaborator’s feeling of confidence in this solution, as well as the collective or agreed-upon confidence in the solution. All these elements of ongoing monitoring have the potential to impact decisions about task progress in important ways and are critical in informing a shared understanding of the reasoning task at hand. As we also noted earlier, a parallel tripartite distinction can be drawn that relates to control processes, which can be captured by the notions of “self-focused control” (i.e., an individual’s decisions about how to progress or terminate their own reasoning), “other-focused control” (i.e., an individual’s decisions about how to control the performance of others) and “joint control” (i.e., the unified control of decisions regarding how to advance or terminate collective performance).
Table 1 presents a more detailed characterisation of key monitoring and control processes that arise across the three proposed levels of metacognition, as already discussed above and further elaborated upon in the next section. The table also summarises examples of the kinds of cues that are very likely to be detected by monitoring processes, as well as examples of the kinds of outcomes that will result from the deployment of control processes. We contend that it is possible to track many of the key monitoring and control processes associated with our tripartite distinction through an analysis of ongoing dialogue (Gonzales et al. 2010; Richardson and Nash 2022). Through such analyses, it is possible, for example, to identify periods of conflict (Tausczik and Pennebaker 2013) or uncertainty within team reasoning, problem solving and decision making (e.g., as reflected in the use of tentative language or hedge words), as well as strategic changes that arise as a result of the detection of conflict or uncertainty. We therefore propose that research on collaborative metareasoning would benefit from the development of theoretical models that capture how uncertainty, as well as both alignment and misalignment, can trigger self- and group reflection and downstream strategic decision making. Importantly, we suggest that these periods of uncertainty can be tracked dynamically by measuring social signals, specifically the linguistic, paralinguistic and proxemic features that occur during dialogue between interlocutors.
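As a concrete illustration of how such linguistic markers might be tracked, the following Python sketch scores each utterance in a hypothetical team dialogue for hedge-word density, per speaker. The hedge list is our own illustrative choice; operational analyses would require validated lexica (e.g., LIWC-style tentativeness categories).

```python
import re
from collections import defaultdict

# Illustrative hedge lexicon; real analyses would use validated
# dictionaries of tentative language.
HEDGES = {"maybe", "perhaps", "possibly", "might", "probably",
          "i think", "not sure"}

def hedge_rate(utterance: str) -> float:
    """Hedge markers per word in a single utterance (crude substring
    matching; a production system would tokenise properly)."""
    text = utterance.lower()
    words = re.findall(r"[a-z']+", text)
    hits = sum(text.count(h) for h in HEDGES)
    return hits / max(len(words), 1)

# Hypothetical fragment of team dialogue, tagged by speaker.
dialogue = [
    ("A", "Maybe we should reroute the power through the back panel?"),
    ("B", "I think that might work, but I'm not sure about the load."),
    ("A", "Right, let's test it on the mock-up first."),
]

totals = defaultdict(list)
for speaker, utt in dialogue:
    rate = hedge_rate(utt)
    totals[speaker].append(rate)
    print(f"{speaker}: hedge rate = {rate:.2f} | {utt}")
```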

7. The Role of Language in Collaborative Metareasoning

In dialogue, speakers process a great deal of information: they take and give the floor to each other, and they plan and adjust their contributions on the fly. Despite the level of cognitive effort and control that it requires, dialogue is the easiest means speakers possess of arriving at similar conceptualisations of the world. For example, Pickering and Garrod (2021) suggest that it is the process of alignment, a largely automatic and unconscious process whereby speakers use language in the same way to reach mutual understanding, that underpins many successful joint activities.
Research on alignment indicates that, over time, in successful communication, people tend to think in the same way (representing the world in the same way) and use similar terms of expression. For example, in problem solving, speakers align with each other by mutually controlling the flow of dialogue and by constantly monitoring their own and others’ ways of representing information. Furthermore, alignment increases over time as the dialogue progresses. In a verbal description task, reported by Clark and Wilkes-Gibbs (1986), participants in the role of “director” started describing a figure to a “matcher” using long and detailed sentences (e.g., “the next one looks like a person who’s ice skating, except they’re sticking two arms out in front”). In subsequent turns, descriptions became simpler (e.g., “the fourth one is the person ice skating, with two arms”), until the interlocutors converged on a common description (“the ice skater”) and used it effectively until the end of the task. What this means is that speakers gradually converge on a similar conceptualisation of the task (termed “situation model alignment”; Garrod and Anderson 1987) by aligning at a linguistic level (termed “linguistic alignment”).
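This convergence can be quantified in a very simple way. Using the Clark and Wilkes-Gibbs (1986) “ice skater” descriptions quoted above, the Python sketch below measures how much of each successively shorter referring expression recycles words from the previous turn. The measure is a crude illustration of lexical alignment, not an established metric.

```python
def recycled_fraction(earlier: str, later: str) -> float:
    """Fraction of the later turn's words already used in the earlier turn."""
    prior = set(earlier.lower().split())
    words = later.lower().split()
    return sum(w in prior for w in words) / max(len(words), 1)

# Successive referring expressions from Clark and Wilkes-Gibbs (1986).
descriptions = [
    "the next one looks like a person who's ice skating except "
    "they're sticking two arms out in front",
    "the fourth one is the person ice skating with two arms",
    "the ice skater",
]

# Turns shorten over time, yet largely recycle earlier vocabulary --
# a crude signature of emerging linguistic alignment.
for earlier, later in zip(descriptions, descriptions[1:]):
    print(f"{len(later.split()):2d} words, recycled from previous turn: "
          f"{recycled_fraction(earlier, later):.2f}")
```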
Alignment at the level of joint monitoring and control can be understood in terms of the important theoretical concept of “intersubjectivity”, which refers to having a shared understanding of an object (e.g., Mori and Hayashi 2006) or an interlocutor (Gillespie and Richardson 2011). Like alignment, intersubjectivity is frequently thought of as being an implicit and automatic behavioural orientation toward others (Coelho and Figueiredo 2003; Merleau-Ponty [1945] 1962). For example, frequent, brief, paralinguistic communications such as “uh-huh” or head-nodding that punctuate a conversation serve a purely social function. They are not designed to contribute additional meaning but are instead used to provide ongoing feedback about comprehension (Schegloff 1982), thereby signalling implicit understanding to other team members. A similar role is played by the “third position repair” described by Schegloff (1992), whereby a listener will often respond to a speaker with an utterance which, instead of contributing anything new, simply displays understanding of what has been said.
Although research typically suggests that intersubjectivity and alignment are key to successful collaboration, Richardson et al. (2007) additionally propose that misalignment (or what they refer to as “disalignment”) is also likely to trigger a solution to a team-based problem. Such misalignment is similar to situations in which recognition of uncertainty can trigger a strategy change that moves the task towards successful resolution (Ball and Christensen 2009, 2019). For example, misalignment can lead to teams engaging in conflict-resolution attempts that can promote team agreement on strategy change. The benefits for team-based reasoning that can derive from misalignment have been demonstrated in a study reported by Paletz et al. (2017), in which they analysed the temporal relationships between brief interpersonal disagreements (or “microconflicts”) and the subsequent expression of uncertainty in conversations arising in successful and unsuccessful engineering product design teams. Paletz et al. discovered that microconflicts were followed by a relative decrease in uncertainty in successful design teams, whereas uncertainty increased after microconflicts in unsuccessful design teams. Paletz et al. (2017) interpret these findings as suggesting that the interaction between conflict and uncertainty may be critical in determining the success of design teams.
Building on the idea of the importance of misalignment and uncertainty for task progress, we further note here that Bjørndahl et al. (2015) suggest that agreement between interlocutors is not enough when working toward a solution to a problem. They identify three key interactional styles, one of which they refer to as an “integrative style”, which is characterised by self–other repair via, for example, clarification requests, disagreements, questions and explicit negotiation of ideas and proposals. During a collaborative LEGO modelling task, Bjørndahl et al. (2015) found that this integrative style, which brings miscommunication into the open for explicit repair, generated more innovative models as compared to an inclusive praise-based style or an instructional self-repair style. Crucially, speakers do not reach alignment in isolation, but through interaction, by manipulating each other’s contributions. Furthermore, speakers are able to track this alignment by metarepresenting it (Gandolfi et al. 2023). Speakers mutually control the flow of dialogue and constantly monitor their own and their interlocutors’ way of representing information.
What is missing from the existing literature on metareasoning is a framework that captures metareasoning in teams and that offers a means of dynamically tracking key processes, such as fluency and uncertainty, between team members. Our framework (see Table 1) proposes tripartite distinctions relating to both metacognitive monitoring and control that can be meaningfully applied to understand collaborative metareasoning at the level of self-, other and joint processing, while also focusing on how collaborative metareasoning unfolds via dialogue between team members. We argue that the perception of misalignment triggers monitoring, which leads to negotiation between perspectives. During collaborative monitoring, for example, alignment can be seen as a process that requires the combination of metacognition (i.e., with respect to oneself) and social cognition (i.e., with respect to interlocutors; Gandolfi et al. 2023).
Team members not only predict and monitor the utterances of interlocutors, but they also predict and monitor their own utterances, as well as acting on joint representations by comparing their expectations of what their interlocutors might say with what they actually say. When discrepancies arise between what is predicted and what actually occurs, they typically lead to subsequent reformulations, expansions or clarifications that help drive the interaction forward. Specifically, when speakers metarepresent a failure of alignment, they reformulate their plans and correct their contributions to keep the dialogue on track.
Control is especially necessary for the successful completion of joint reasoning activities and for team decision making. We also argue that control plays out via the interaction between interlocutors, who do not reach alignment in isolation, but do so by manipulating each other's contributions in a collaborative fashion. By continuously monitoring and comparing their own and others' contributions, and specifically by metarepresenting whether they believe they and their interlocutors are aligned or not, team members make decisions about whether to continue a particular course of action, terminate the task or switch strategies to find an alternative way forward.

8. Conclusions and Future Directions

A key focus in this paper has been on the importance of analysing the use of language when examining collaborative metareasoning. For example, diminishing confidence, increasing interpersonal misalignment and emerging conflict—as revealed through language change during team reasoning—appear to be linked to metacognitive control decisions to adopt new approaches or to terminate processing (cf. Paletz et al. 2017). These dynamic and objective language-based measures of unfolding metareasoning processes in teams can also be compared with more traditional self-report measures, discussed above, that relate to task performance, solution confidence and interpersonal dynamics. In addition, sophisticated approaches to language analysis in teams (Bjørndahl et al. 2015) can be invaluable for identifying important aspects of behavioural sequencing, as well as for uncovering clusters of behaviours that capture the team-based monitoring of ongoing processing fluency and disfluency.
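To illustrate how such a dynamic, language-based measure might operate in practice, the following minimal sketch (in Python) computes a per-turn hedge-word rate from a team transcript as a rough proxy for expressed uncertainty. The hedge lexicon and the transcript are invented for demonstration purposes; a real study would employ a validated coding scheme rather than this simple word list.

```python
# Minimal sketch: per-turn hedge-word rate as a rough, illustrative proxy for
# expressed uncertainty in team dialogue. The hedge lexicon and the transcript
# are hypothetical; a real study would use a validated coding scheme.
import re

HEDGES = {"maybe", "perhaps", "possibly", "probably", "might", "could",
          "seems", "guess", "sort", "kind", "roughly"}

def hedge_rate(utterance: str) -> float:
    """Proportion of word tokens in an utterance that are hedge words."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return sum(t in HEDGES for t in tokens) / len(tokens) if tokens else 0.0

transcript = [
    ("A", "Maybe we could sort of try the other layout?"),
    ("B", "Yes, switch to the grid layout now."),
    ("A", "I guess that might possibly work."),
]

for speaker, utterance in transcript:
    print(f"{speaker}: hedge rate = {hedge_rate(utterance):.2f}")
```

Tracked turn by turn, rising hedge rates of this kind could offer one objective window onto a team's fluctuating uncertainty, complementing the self-report measures noted above.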
We suggest that future research examining language use in teams as an index of metacognitive monitoring and control could explore the way in which the different interactional styles of team members modulate metareasoning processes. In this respect, we note that Gonzales et al. (2010) have shown that, in groups with a strong hierarchy, an authoritarian leadership style is often characterised by the frequent use of self-referential rather than collective pronouns, and that this pronoun pattern can be detrimental to team cohesion. We contend that the impact of different leadership styles is also likely to manifest in unique ways in the dialogue that relates to joint metacognitive monitoring and control during team reasoning. These modulating effects of leadership styles remain to be examined empirically, yet they are clearly important to explore in high-stakes, real-world, decision-making contexts where effective team reasoning is critical.
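As a purely illustrative sketch of how the self- versus collective-pronoun contrast noted by Gonzales et al. (2010) might be quantified, the following code computes a simple balance score over first-person pronouns. The word lists and the metric are demonstration assumptions, not Gonzales et al.'s published coding scheme.

```python
# Illustrative sketch: a balance score contrasting first-person singular
# ("self") with first-person plural ("collective") pronouns. The word lists
# and metric are assumptions for demonstration, not a published method.
import re

SELF_PRONOUNS = {"i", "me", "my", "mine", "myself"}
COLLECTIVE_PRONOUNS = {"we", "us", "our", "ours", "ourselves"}

def pronoun_balance(text: str) -> float:
    """Share of first-person pronouns that are singular.

    1.0 = wholly self-focused language; 0.0 = wholly collective language;
    0.5 is returned when no first-person pronouns occur.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    n_self = sum(t in SELF_PRONOUNS for t in tokens)
    n_collective = sum(t in COLLECTIVE_PRONOUNS for t in tokens)
    total = n_self + n_collective
    return n_self / total if total else 0.5

print(pronoun_balance("I want my plan followed."))        # 1.0 (self-focused)
print(pronoun_balance("We should revise our estimate."))  # 0.0 (collective)
```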
We also propose that our collaborative metareasoning framework could be extended to take into account multimodal measures of team interaction. For example, the fine-grained analysis of real-time social signals could provide further valuable insights into fluctuating levels of confidence and cohesion in teams in relation to ongoing monitoring and control processes (cf. Casakin et al. 2015). Indeed, although we have emphasised the way in which an analysis of language can provide valuable information about ongoing monitoring and control processes in team-based reasoning, this is not to dismiss the insights into collaborative metareasoning that might also be gained from other forms of behavioural analysis. In this respect, we acknowledge the existence of substantial bodies of behavioural research relating to the complex interplay that arises between individuals engaged in the achievement of common goals, including studies concerning the nature of joint action (e.g., Sebanz and Knoblich 2009), collective intelligence (e.g., Krause et al. 2010), synchrony (e.g., Lakens and Stel 2011; Miles et al. 2011), perspective taking (Gillespie and Richardson 2011) and distributed cognition (e.g., Hutchins 1995). A detailed review of this literature might well pinpoint important findings that could be informative about collaborative metareasoning. What seems more likely, however, is that research in these areas—apart from studies of synchrony and perspective taking—has typically been more concerned with the nature of people’s task-oriented, object-level processing (e.g., coordinated action or team decision making) than with people’s metareasoning processes. Nevertheless, until these other research areas are carefully examined through a metareasoning lens, it remains possible that important knowledge is being missed.
What also seems clear is the need for our proposed collaborative metareasoning framework to be evaluated and extended through extensive empirical research. Indeed, the tripartite distinction that we articulate relating to self-, other and joint metareasoning presents only a starting point for further conceptual development regarding the nature of these metacognitive processes and their interconnections. In this respect, we concede that the literature on collaborative metareasoning is not yet sufficiently advanced to permit detailed or precise claims about the highly complex interrelationships that are certain to exist between individual metareasoning and collaborative metareasoning processes. More positively, major conceptual advances seem increasingly likely as interest in metareasoning continues to grow (e.g., see De Neys 2023).
What is also encouraging for the ongoing development of the field of collaborative metareasoning is the potential for a multimethod investigative approach to elicit rich and informative data regarding the nature of, and interplay between, individual and collaborative metareasoning processes. Such an approach lends itself, for example, to the identification of key subjective metrics (e.g., self-report measures that index trust and confidence in oneself and in others) as well as objective metrics (e.g., language-based markers of uncertainty and eye-gaze measures that pinpoint challenges arising in joint cognition). Empirical research additionally needs to track processes of alignment and misalignment via dialogue (Richardson and Nash 2022) to identify periods of conflict or uncertainty within reasoning (e.g., as reflected in the use of tentative language, hedge words or pronouns), and to compare these indices with measurements of traditional metareasoning concepts (e.g., fluency, intermediate confidence and strategy change).
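This final comparison step can itself be illustrated with a minimal sketch that correlates a hypothetical per-turn hedge rate (of the kind computed earlier) with hypothetical per-turn confidence ratings. All values here are invented, and the use of a Pearson correlation is simply one possible analysis choice rather than a prescribed one.

```python
# Illustrative sketch: comparing a language-based uncertainty index (per-turn
# hedge rate) with self-reported intermediate confidence. All values are
# invented; Pearson correlation is one possible analysis, not a prescription.
from statistics import correlation  # available from Python 3.10

hedge_rates = [0.00, 0.08, 0.15, 0.12, 0.02]  # per-turn hedge-word rates
confidence = [90, 70, 45, 55, 85]             # per-turn confidence ratings (0-100)

# A negative correlation would be consistent with hedging tracking felt uncertainty.
print(f"r = {correlation(hedge_rates, confidence):.2f}")
```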
We contend that the conceptual progress that can derive from our proposals for a collaborative metareasoning framework can additionally provide valuable opportunities to develop interventions that might support enhanced reasoning, problem solving and decision making in real-world contexts, including those that we identified throughout our paper (e.g., ones relating to design, innovation, defence, security, surveillance and emergency response). Interventions targeted at supporting metareasoning might assist, for example, with the identification of periods of uncertainty, points of impasse or instances of misunderstanding, which can then prompt teams to reflect on more effective strategic choices. Such metareasoning support tools would represent an original and radical approach to facilitating successful collaboration in applied contexts, augmenting natural reasoning and metareasoning processes in useful ways that teams could capitalise upon.
We have attempted here to advance the metareasoning literature by commencing the development of an understanding of the metareasoning processes that arise in team-based contexts. We propose a collaborative metareasoning framework that emphasises the importance of a tripartite distinction between self-, other and joint metareasoning for capturing the different levels of monitoring and control that arise during task-based processing. We also suggest that the interplay that occurs between different levels of metacognitive monitoring and control is critical for a team's procedural decision making (such as collective decisions to accept an initial solution, to switch strategy or to give up), as well as for a team's potential achievement of a successful task outcome. We additionally argue for the utility of analysing language as a means of tracking the fluctuating states of uncertainty and misalignment that arise, with such states seemingly having the capacity to act as metacognitive triggers for strategy change. In conclusion, our framework begins to address a significant gap in the literature surrounding the monitoring and control processes that play out during collaborative reasoning, whereby complex, interpersonal interactions occur as people work together to achieve shared goals.

Author Contributions

Conceptualization, B.H.R. and L.J.B.; methodology, B.H.R. and L.J.B.; writing—original draft preparation, B.H.R. and L.J.B.; writing—review and editing, B.H.R. and L.J.B.; project administration, B.H.R. and L.J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ackerman, Rakefet. 2014. The diminishing criterion model for metacognitive regulation of time investment. Journal of Experimental Psychology: General 143: 1349–68. [Google Scholar] [CrossRef]
  2. Ackerman, Rakefet. 2023. Bird’s-eye view of cue integration: Exposing instructional and task design factors which bias problem solvers. Educational Psychology Review 35: 55. [Google Scholar] [CrossRef]
  3. Ackerman, Rakefet, and Hagar Zalmanov. 2012. The persistence of the fluency–confidence association in problem solving. Psychonomic Bulletin & Review 19: 1187–92. [Google Scholar]
  4. Ackerman, Rakefet, and Morris Goldsmith. 2008. Control over grain size in memory reporting: With and without satisficing knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition 34: 1224–45. [Google Scholar]
  5. Ackerman, Rakefet, and Valerie A. Thompson. 2017. Meta-reasoning: Monitoring and control of thinking and reasoning. Trends in Cognitive Sciences 21: 607–17. [Google Scholar] [CrossRef] [PubMed]
  6. Ackerman, Rakefet, and Valerie A. Thompson. 2018. Meta-reasoning: Shedding meta-cognitive light on reasoning research. In The Routledge International Handbook of Thinking and Reasoning. Edited by Linden J. Ball and Valerie A. Thompson. Abingdon: Routledge, pp. 164–82. [Google Scholar]
  7. Ackerman, Rakefet, and Yael Beller. 2017. Shared and distinct cue utilization for metacognitive judgements during reasoning and memorisation. Thinking & Reasoning 23: 376–408. [Google Scholar]
  8. Ackerman, Rakefet, Elad Yom-Tov, and Ilan Torgovitsky. 2020. Using confidence and consensuality to predict time invested in problem solving and in real-life web searching. Cognition 199: 104248. [Google Scholar] [CrossRef] [PubMed]
  9. Alter, Adam L., and Daniel M. Oppenheimer. 2009. Uniting the tribes of fluency to form a metacognitive nation. Personality & Social Psychology Review 13: 219–35. [Google Scholar]
  10. Baars, Martine, Sandra Visser, Tamara Van Gog, Anique de Bruin, and Fred Paas. 2013. Completion of partially worked-out examples as a generation strategy for improving monitoring accuracy. Contemporary Educational Psychology 38: 395–406. [Google Scholar] [CrossRef]
  11. Bago, Bence, and Wim De Neys. 2017. Fast logic? Examining the time course assumption of dual process theory. Cognition 158: 90–109. [Google Scholar] [CrossRef]
  12. Ball, Linden J. 2013. Eye-tracking and reasoning: What your eyes tell about your inferences. In New Approaches in Reasoning Research. Edited by Wim De Neys and Magda Osman. Hove: Psychology Press, pp. 51–69. [Google Scholar]
  13. Ball, Linden J., and Bo T. Christensen. 2009. Analogical reasoning and mental simulation in design: Two strategies linked to uncertainty resolution. Design Studies 30: 169–86. [Google Scholar] [CrossRef]
  14. Ball, Linden J., and Bo T. Christensen. 2019. Advancing an understanding of design cognition and design metacognition: Progress and prospects. Design Studies 65: 35–59. [Google Scholar] [CrossRef]
  15. Ball, Linden J., Balder Onarheim, and Bo T. Christensen. 2010. Design requirements, epistemic uncertainty and solution development strategies in software design. Design Studies 31: 567–89. [Google Scholar] [CrossRef]
  16. Ball, Linden J., John E. Marsh, Damien Litchfield, Rebecca L. Cook, and Natalie Booth. 2015. When distraction helps: Evidence that concurrent articulation and irrelevant speech can facilitate insight problem solving. Thinking & Reasoning 21: 76–96. [Google Scholar]
  17. Baranski, Joseph V., and William M. Petrusic. 2001. Testing architectures of the decision–confidence relation. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale 55: 195–206. [Google Scholar] [CrossRef]
  18. Bjørndahl, Johanne S., Riccardo Fusaroli, Svend Østergaard, and Kristian Tylén. 2015. Agreeing is not enough: The constructive role of miscommunication. Interaction Studies 16: 495–525. [Google Scholar] [CrossRef]
  19. Bonder, Taly, and Daniel Gopher. 2019. The effect of confidence rating on a primary visual task. Frontiers in Psychology 10: 2674. [Google Scholar] [CrossRef]
  20. Bowden, Edward M., Mark Jung-Beeman, Jessica Fleck, and John Kounios. 2005. New approaches to demystifying insight. Trends in Cognitive Sciences 9: 322–28. [Google Scholar] [CrossRef]
  21. Campitelli, Guillermo, and Paul Gerrans. 2014. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach. Memory & Cognition 42: 434–47. [Google Scholar]
  22. Casakin, Hernan, Linden J. Ball, Bo T. Christensen, and Petra Badke-Schaub. 2015. How do analogizing and mental simulation influence team dynamics in innovative product design? AI EDAM 29: 173–83. [Google Scholar] [CrossRef]
  23. Chan, Joel, Susannah B. Paletz, and Christian D. Schunn. 2012. Analogy as a strategy for supporting complex problem solving under uncertainty. Memory & Cognition 40: 1352–65. [Google Scholar]
  24. Christensen, Bo T., and Christian D. Schunn. 2009. The role and impact of mental simulation in design. Applied Cognitive Psychology 23: 327–44. [Google Scholar] [CrossRef]
  25. Clark, Herbert H., and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition 22: 1–39. [Google Scholar] [CrossRef]
  26. Coelho, Nelson E., Jr., and Luis C. Figueiredo. 2003. Patterns of intersubjectivity in the constitution of subjectivity: Dimensions of otherness. Culture & Psychology 9: 193–208. [Google Scholar]
  27. Cromley, Jennifer G., and Andrea J. Kunze. 2020. Metacognition in education: Translational research. Translational Issues in Psychological Science 6: 15–20. [Google Scholar] [CrossRef]
  28. De Neys, Wim. 2023. Advancing theorizing about fast-and-slow thinking. Behavioral and Brain Sciences 46: e111. [Google Scholar] [CrossRef] [PubMed]
  29. Dorst, Kees, and Nigel Cross. 2001. Creativity in the design process: Co-evolution of problem–solution. Design Studies 22: 425–37. [Google Scholar] [CrossRef]
  30. Double, Kit S., and Damian P. Birney. 2017. Are you sure about that? Eliciting confidence ratings may influence performance on Raven’s progressive matrices. Thinking & Reasoning 23: 190–206. [Google Scholar]
  31. Double, Kit S., and Damian P. Birney. 2018. Reactivity to confidence ratings in older individuals performing the Latin square task. Metacognition & Learning 13: 309–26. [Google Scholar]
  32. Double, Kit S., and Damian P. Birney. 2019a. Do confidence ratings prime confidence? Psychonomic Bulletin & Review 26: 1035–42. [Google Scholar]
  33. Double, Kit S., and Damian P. Birney. 2019b. Reactivity to measures of metacognition. Frontiers in Psychology 10: 2755. [Google Scholar] [CrossRef]
  34. Double, Kit S., Damian P. Birney, and Sarah A. Walker. 2018. A meta-analysis and systematic review of reactivity to judgements of learning. Memory 26: 741–50. [Google Scholar] [CrossRef] [PubMed]
  35. Ericsson, K. Anders, and Herbert A. Simon. 1980. Verbal reports as data. Psychological Review 87: 215–51. [Google Scholar] [CrossRef]
  36. Evans, Jonathan St. B. T. 2018. Dual-process theories. In The Routledge International Handbook of Thinking and Reasoning. Edited by Linden J. Ball and Valerie A. Thompson. Abingdon: Routledge, pp. 151–66. [Google Scholar]
  37. Evans, Jonathan St. B. T., and Keith E. Stanovich. 2013a. Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science 8: 223–41. [Google Scholar] [CrossRef] [PubMed]
  38. Evans, Jonathan St. B. T., and Keith E. Stanovich. 2013b. Theory and metatheory in the study of dual processing: Reply to comments. Perspectives on Psychological Science 8: 263–71. [Google Scholar] [CrossRef] [PubMed]
  39. Figner, Bernd, Ryan O. Murphy, and Paul Siegel. 2019. Measuring electrodermal activity and its applications in judgment and decision-making research. In A Handbook of Process Tracing Methods, 2nd ed. Edited by Michael Schulte-Mecklenbeck, Anton Kuehberger and Joseph G. Johnson. Abingdon: Routledge, pp. 163–84. [Google Scholar]
  40. Forte, Giuseppe, Francesca Favieri, and Maria Casagrande. 2019. Heart rate variability and cognitive function: A systematic review. Frontiers in Neuroscience 13: 710. [Google Scholar] [CrossRef] [PubMed]
  41. Frederick, Shane. 2005. Cognitive reflection and decision making. Journal of Economic Perspectives 19: 25–42. [Google Scholar] [CrossRef]
  42. Gandolfi, Greta, Martin J. Pickering, and Simon Garrod. 2023. Mechanisms of alignment: Shared control, social cognition and metacognition. Philosophical Transactions of the Royal Society B 378: 20210362. [Google Scholar] [CrossRef]
  43. Garrod, Simon, and Anthony Anderson. 1987. Saying what you mean in dialogue: A study in conceptual and semantic co-ordination. Cognition 27: 181–218. [Google Scholar] [CrossRef]
  44. Gillespie, Alex, and Beth Richardson. 2011. Exchanging social positions: Enhancing perspective taking within a cooperative problem-solving task. European Journal of Social Psychology 41: 608–16. [Google Scholar] [CrossRef]
  45. Godfroid, Aline, and Lee Anne Spino. 2015. Reconceptualizing reactivity of think-alouds and eye tracking: Absence of evidence is not evidence of absence. Language Learning 65: 896–928. [Google Scholar] [CrossRef]
  46. Gonzales, Amy L., Jeffrey T. Hancock, and James W. Pennebaker. 2010. Language style matching as a predictor of social dynamics in small groups. Communication Research 37: 3–19. [Google Scholar] [CrossRef]
  47. Hacker, Douglas J., John Dunlosky, and Arthur C. Graesser, eds. 2009. Handbook of Metacognition in Education. Abingdon: Routledge. [Google Scholar]
  48. Hamilton, Katherine, Vincent Mancuso, Susan Mohammed, Rachel Tesler, and Michael McNeese. 2017. Skilled and unaware: The interactive effects of team cognition, team metacognition, and task confidence on team performance. Journal of Cognitive Engineering and Decision Making 11: 382–95. [Google Scholar] [CrossRef]
  49. Hedne, Mikael R., Elisabeth Norman, and Janet Metcalfe. 2016. Intuitive feelings of warmth and confidence in insight and non-insight problem solving of magic tricks. Frontiers in Psychology 7: 1314. [Google Scholar] [CrossRef] [PubMed]
  50. Hutchins, Edwin. 1995. Cognition in the Wild. Cambridge, MA: MIT Press. [Google Scholar]
  51. Kahneman, Daniel, and Shane Frederick. 2002. Representativeness revisited: Attribute substitution in intuitive judgment. In Heuristics and Biases: The Psychology of Intuitive Judgment. Edited by Thomas Gilovich, Dale Griffin and Daniel Kahneman. Cambridge: Cambridge University Press, pp. 49–81. [Google Scholar]
  52. Kaufmann, Esther. 2022. Lens model studies: Revealing teachers’ judgements for teacher education. Journal of Education for Teaching 49: 236–51. [Google Scholar] [CrossRef]
  53. Kershaw, Trina C., and Stellan Ohlsson. 2004. Multiple causes of difficulty in insight: The case of the nine-dot problem. Journal of Experimental Psychology: Learning, Memory, and Cognition 30: 3–13. [Google Scholar] [CrossRef] [PubMed]
  54. Kizilirmak, Jasmin M., Violetta Serger, Judith Kehl, Michael Öllinger, Kristian Folta-Schoofs, and Alan Richardson-Klavehn. 2018. Feelings-of-warmth increase more abruptly for verbal riddles solved with in contrast to without Aha! experience. Frontiers in Psychology 9: 1404. [Google Scholar] [CrossRef] [PubMed]
  55. Krause, Jens, Graeme D. Ruxton, and Stefan Krause. 2010. Swarm intelligence in animals and humans. Trends in Ecology & Evolution 25: 28–34. [Google Scholar]
  56. Kruglanski, Arie W., and Gerd Gigerenzer. 2011. Intuitive and deliberate judgments are based on common principles. Psychological Review 118: 97–109. [Google Scholar] [CrossRef]
  57. Lakens, Daniël, and Mariëlle Stel. 2011. If they move in sync, they must feel in sync: Movement synchrony leads to attributions of rapport and entitativity. Social Cognition 29: 1–14. [Google Scholar] [CrossRef]
  58. Laukkonen, Ruben E., and Jason M. Tangen. 2018. How to detect insight moments in problem solving experiments. Frontiers in Psychology 9: 282. [Google Scholar] [CrossRef]
  59. Laukkonen, Ruben E., Daniel J. Ingledew, Hilary J. Grimmer, Jonathan W. Schooler, and Jason M. Tangen. 2021. Getting a grip on insight: Real-time and embodied Aha experiences predict correct solutions. Cognition and Emotion 35: 918–35. [Google Scholar] [CrossRef] [PubMed]
  60. Law, Marvin K., Lazar Stankov, and Sabina Kleitman. 2022. I choose to opt-out of answering: Individual differences in giving up behaviour on cognitive tests. Journal of Intelligence 10: 86. [Google Scholar] [CrossRef] [PubMed]
  61. Lei, Wei, Jing Chen, Chunliang Yang, Yiqun Guo, Pan Feng, Tingyong Feng, and Hong Li. 2020. Metacognition-related regions modulate the reactivity effect of confidence ratings on perceptual decision-making. Neuropsychologia 144: 107502. [Google Scholar] [CrossRef] [PubMed]
  62. Mathôt, Sebastiaan, and Ana Vilotijević. 2022. Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis. Behavior Research Methods 55: 3055–77. [Google Scholar] [CrossRef] [PubMed]
  63. Mercier, Hugo. 2016. The argumentative theory: Predictions and empirical evidence. Trends in Cognitive Sciences 20: 689–700. [Google Scholar] [CrossRef] [PubMed]
  64. Mercier, Hugo, and Dan Sperber. 2011. Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34: 57–74. [Google Scholar]
  65. Merleau-Ponty, Maurice. 1962. Phenomenology of Perception. Translated by Colin Smith. London: Routledge & Kegan Paul. First published 1945. [Google Scholar]
  66. Metcalfe, Janet. 1986. Premonitions of insight predict impending error. Journal of Experimental Psychology: Learning, Memory, and Cognition 12: 623–34. [Google Scholar] [CrossRef]
  67. Metcalfe, Janet, and David Wiebe. 1987. Intuition in insight and noninsight problem solving. Memory & Cognition 15: 238–46. [Google Scholar]
  68. Miles, Lynden K., Joanne Lumsden, Michael J. Richardson, and C. Neil Macrae. 2011. Do birds of a feather move together? Group membership and behavioral synchrony. Experimental Brain Research 211: 495–503. [Google Scholar] [CrossRef]
  69. Mori, Junko, and Makoto Hayashi. 2006. The achievement of intersubjectivity through embodied completions: A study of interactions between first and second language speakers. Applied Linguistics 27: 195–219. [Google Scholar] [CrossRef]
  70. Nelson, Thomas O., and Louis Narens. 1990. Metamemory: A theoretical framework and new findings. In The Psychology of Learning and Motivation: Advances in Research and Theory. Edited by Gordon Bower. Cambridge, MA: Academic Press, pp. 125–73. [Google Scholar]
  71. Paletz, Susannah B., Joel Chan, and Christian D. Schunn. 2017. The dynamics of micro-conflicts and uncertainty in successful and unsuccessful design teams. Design Studies 50: 39–69. [Google Scholar] [CrossRef]
  72. Paprocki, Rafal, and Artem Lenskiy. 2017. What does eye-blink rate variability dynamics tell us about cognitive performance? Frontiers in Human Neuroscience 11: 620. [Google Scholar] [CrossRef] [PubMed]
  73. Pennycook, Gordon, James A. Cheyne, Derek J. Koehler, and Jonathan A. Fugelsang. 2016. Is the cognitive reflection test a measure of both reflection and intuition? Behavior Research Methods 48: 341–48. [Google Scholar] [CrossRef] [PubMed]
  74. Perry, John, David Lundie, and Gill Golder. 2019. Metacognition in schools: What does the literature suggest about the effectiveness of teaching metacognition in schools? Educational Review 71: 483–500. [Google Scholar] [CrossRef]
  75. Pervin, Nargis, Tuan Q. Phan, Anindya Datta, Hideaki Takeda, and Fujio Toriumi. 2015. Hashtag popularity on twitter: Analyzing co-occurrence of multiple hashtags. In Proceedings of Social Computing and Social Media: 7th International Conference, SCSM 2015. Berlin/Heidelberg: Springer International Publishing, pp. 169–82. [Google Scholar]
  76. Petrusic, William M., and Joseph V. Baranski. 2003. Judging confidence influences decision processing in comparative judgments. Psychonomic Bulletin & Review 10: 177–83. [Google Scholar]
  77. Pickering, Martin J., and Simon Garrod. 2021. Understanding Dialogue: Language Use and Social Interaction. Cambridge: Cambridge University Press. [Google Scholar]
  78. Quayle, Jeremy D., and Linden J. Ball. 2000. Working memory, metacognitive uncertainty and belief bias in syllogistic reasoning. Quarterly Journal of Experimental Psychology 53: 1202–23. [Google Scholar] [CrossRef] [PubMed]
  79. Richardson, Beth H., and Robert A. Nash. 2022. ‘Rapport myopia’ in investigative interviews: Evidence from linguistic and subjective indicators of rapport. Legal & Criminological Psychology 27: 32–47. [Google Scholar]
  80. Richardson, Beth H., Kathleen C. McCulloch, Paul J. Taylor, and Helen J. Wall. 2019. The cooperation link: Power and context moderate verbal mimicry. Journal of Experimental Psychology: Applied 25: 62–76. [Google Scholar] [CrossRef]
  81. Richardson, Beth H., Linden J. Ball, Bo T. Christensen, and John E. Marsh. 2024. Collaborative meta-reasoning in creative contexts: Advancing an understanding of collaborative monitoring and control in creative teams. In The Routledge International Handbook of Creative Cognition. Edited by Linden J. Ball and Frédéric Vallée-Tourangeau. Abingdon: Routledge, pp. 709–27. [Google Scholar]
  82. Richardson, Daniel C., Rick Dale, and Natasha Z. Kirkham. 2007. The art of conversation is coordination. Psychological Science 18: 407–13. [Google Scholar] [CrossRef]
  83. Roberts, Linds W. 2017. Research in the real world: Improving adult learners' web search and evaluation skills through motivational design and problem-based learning. College & Research Libraries 78: 527–51. [Google Scholar]
  84. Salvi, Carola, Emanuela Bricolo, John Kounios, Edward Bowden, and Mark Beeman. 2016. Insight solutions are correct more often than analytic solutions. Thinking & Reasoning 22: 443–60. [Google Scholar]
  85. Saunders, John. 2022. Manchester Arena Inquiry, Volume 2: Emergency Response. London: His Majesty’s Stationery Office. [Google Scholar]
  86. Schegloff, Emanuel A. 1982. Discourse as an interactional achievement: Some uses of ‘uh huh’ and other things that come between sentences. In Analyzing Discourse: Text and Talk. Edited by Deborah Tannen. Washington, DC: Georgetown University Press, pp. 71–93. [Google Scholar]
  87. Schegloff, Emanuel A. 1992. Repair after next turn: The last structurally provided defense of intersubjectivity in conversation. American Journal of Sociology 97: 1295–345. [Google Scholar] [CrossRef]
  88. Schoenherr, Jordan R., Craig Leth-Steensen, and William M. Petrusic. 2010. Selective attention and subjective confidence calibration. Attention, Perception, and Psychophysics 72: 353–68. [Google Scholar] [CrossRef]
  89. Schooler, Jonathan W., and Joseph Melcher. 1995. The ineffability of insight. In The Creative Cognition Approach. Edited by Steven Smith, Thomas Ward and Ronald Finke. Cambridge, MA: MIT Press, pp. 97–133. [Google Scholar]
  90. Schooler, Jonathan W., Stellan Ohlsson, and Kevin Brooks. 1993. Thoughts beyond words: When language overshadows insight. Journal of Experimental Psychology: General 122: 166–83. [Google Scholar] [CrossRef]
  91. Sebanz, Natalie, and Guenther Knoblich. 2009. Prediction in joint action: What, when, and where. Topics in Cognitive Science 1: 353–67. [Google Scholar] [CrossRef] [PubMed]
  92. Siegler, Robert S. 2000. Unconscious insights. Current Directions in Psychological Science 9: 79–83. [Google Scholar] [CrossRef]
  93. Stanovich, Keith E. 2018. Miserliness in human cognition: The interaction of detection, override and mindware. Thinking & Reasoning 24: 423–44. [Google Scholar]
  94. Stupple, Edward J. N., Melanie Pitchford, Linden J. Ball, Thomas E. Hunt, and Richard Steel. 2017. Slower is not always better: Response-time evidence clarifies the limited role of miserly information processing in the Cognitive Reflection Test. PLoS ONE 12: e0186404. [Google Scholar] [CrossRef]
  95. Tausczik, Yla R., and James W. Pennebaker. 2013. Improving teamwork using real-time language feedback. Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, April 27–May 2; pp. 459–68. [Google Scholar]
  96. Thompson, Valerie A., Jamie A. P. Turner, and Gordon Pennycook. 2011. Intuition, reason, and metacognition. Cognitive Psychology 63: 107–40. [Google Scholar] [CrossRef]
  97. Thompson, Valerie A., Jamie A. P. Turner, Gordon Pennycook, Linden J. Ball, Hannah Brack, Yael Ophir, and Rakefet Ackerman. 2013. The role of answer fluency and perceptual fluency as metacognitive cues for initiating analytic thinking. Cognition 128: 237–51. [Google Scholar] [CrossRef]
  98. Thompson, Valerie, and Kinga Morsanyi. 2012. Analytic thinking: Do you feel like it? Mind & Society 11: 93–105. [Google Scholar]
  99. Topolinski, Sascha, and Rolf Reber. 2010. Immediate truth: Temporal contiguity between a cognitive problem and its solution determines experienced veracity of the solution. Cognition 114: 117–22. [Google Scholar] [CrossRef] [PubMed]
  100. Undorf, Monika, and Arndt Bröder. 2021. Metamemory for pictures of naturalistic scenes: Assessment of accuracy and cue utilization. Memory & Cognition 49: 1405–22. [Google Scholar]
  101. Undorf, Monika, and Edgar Erdfelder. 2015. The relatedness effect on judgments of learning: A closer look at the contribution of processing fluency. Memory & Cognition 43: 647–58. [Google Scholar]
  102. Undorf, Monika, Anke Söllner, and Arndt Bröder. 2018. Simultaneous utilization of multiple cues in judgments of learning. Memory & Cognition 46: 507–19. [Google Scholar]
  103. Unkelbach, Christian, and Rainer Greifeneder. 2013. A general model of fluency effects in judgment and decision making. In The Experience of Thinking. Edited by Christian Unkelbach and Rainer Greifeneder. Hove: Psychology Press, pp. 21–42. [Google Scholar]
  104. Webb, Margaret E., Daniel R. Little, and Simon J. Cropper. 2016. Insight is not in the problem: Investigating insight in problem solving across task types. Frontiers in Psychology 7: 1424. [Google Scholar] [CrossRef]
  105. Webb, Margaret E., Daniel R. Little, and Simon J. Cropper. 2018. Once more with feeling: Normative data for the aha experience in insight and noninsight problems. Behavior Research Methods 50: 2035–56. [Google Scholar] [CrossRef]
  106. Wiltschnig, Stefan, Bo T. Christensen, and Linden J. Ball. 2013. Collaborative problem–solution co-evolution in creative design. Design Studies 34: 515–42. [Google Scholar] [CrossRef]
  107. Zion, Michal, Idit Adler, and Zemira Mevarech. 2015. The effect of individual and social metacognitive support on students’ metacognitive performances in an online discussion. Journal of Educational Computing Research 52: 50–87. [Google Scholar] [CrossRef]
Figure 1. The approximate time course of reasoning and metareasoning processes as captured in Ackerman and Thompson’s (2018) metareasoning framework. Copyright (©2018) from “Meta-reasoning: Shedding meta-cognitive light on reasoning research” by Ackerman and Thompson (2018). [Permission to reproduce has been sought from Taylor and Francis Group, LLC, a division of Informa plc].
Table 1. A detailed characterisation of indicative monitoring and control processes that arise across three proposed levels of metacognition. The table also summarises examples of the cues that are likely to be detected by monitoring processes, as well as examples of the outcomes of control processes.

Levels of Monitoring

Self-Monitoring: an individual's perception of their own performance.
- Indicative monitoring processes: An individual's generation of an: initial judgment of solvability; feeling of rightness; feeling of error; feeling of warmth; intermediate confidence or uncertainty; final confidence; final judgment of solvability.
- Example cues detected by monitoring processes: An individual's sensitivity to: processing fluency (ease of processing); perceived features of the presented task; perceived task complexity; study time; response time.

Other Monitoring: an individual's perception of the performance of others.
- Indicative monitoring processes: An individual's perception of someone else's: initial judgment of solvability; feeling of rightness; feeling of error; feeling of warmth; intermediate confidence or uncertainty; final confidence; final judgment of solvability. An individual's perception of: alignment/misalignment.
- Example cues detected by monitoring processes: An individual's sensitivity to someone else's: processing fluency (ease of processing); perceived features of the presented task; perceived task complexity; study time; response time; degree of agreement; level of understanding (potentially made manifest by language markers such as hedge words and pronoun use).

Joint Monitoring: the unified perception of collective performance.
- Indicative monitoring processes: A group's unified perception of: initial judgment of solvability; feeling of rightness; feeling of error; feeling of warmth; intermediate confidence or uncertainty; final confidence; final judgment of solvability. A group's perception of: alignment/misalignment.
- Example cues detected by monitoring processes: A group's unified perception of: processing fluency (ease of processing); perceived features of the presented task; perceived task complexity; study time; response time; degree of agreement; level of understanding (potentially made manifest by language markers such as hedge words and pronoun use).

Levels of Control

Self-Focused Control: an individual's decisions about how to progress or terminate their own reasoning.
- Indicative control processes: An individual's procedural decision to engage in: memory search; reasoning, problem solving or decision making; response generation; response evaluation; strategy change; giving up; help-seeking.
- Example outcomes of control processes: An individual's generation of: recalled information; an intermediate or final response (e.g., a solution, option or decision, including the decision to give up); an evaluation of an intermediate or final response; a new process or strategy (e.g., analogising, mental simulation); a request for help.

Other-Focused Control: an individual's decisions about how to control the performance of others.
- Indicative control processes: An individual's procedural decision to engage in: affirmation; encouragement; persuasion; argumentation; negotiation; manipulation; deception.
- Example outcomes of control processes: An individual's generation of: alignment (e.g., situation model alignment and linguistic alignment); intersubjectivity; shared understanding; common ground; conflict resolution; misalignment; disalignment.

Joint Control: the unified control of decisions regarding how to advance or terminate collective performance.
- Indicative control processes: A group's unified procedural decision to engage in: memory search; reasoning, problem solving or decision making; response generation; response evaluation; strategy change; giving up; help-seeking.
- Example outcomes of control processes: A group's unified generation of: recalled information; an intermediate or final response (e.g., a solution, option or decision, including the decision to give up); an evaluation of an intermediate or final response; a new process or strategy (e.g., analogising, mental simulation); a request for help.