Original Research Article: The Role of General and Selective Task Instructions on Students' Processing of Multiple Conflicting Documents


  • 1 Research Unit on Reading, ERI Lectura, University of Valencia, Valencia, Spain
  • 2 Pureza de María-Grao School, Valencia, Spain

This study was designed to test the role of general and selective task instructions when processing documents that vary in trustworthiness and in position toward a conflicting topic. By selective task instructions, we refer to concrete guidelines on how to read the texts and how to select appropriate documents and contents, in contrast to general task instructions. Sixty-one secondary school students were presented with four conflicting documents in an electronic learning environment and were told to write an essay based on the information from the texts. Only half of the students were told to use information from just two of the four texts to write their essay (i.e., selective condition). As predicted, students told to focus on specific documents rather than use all of them for the assigned task (i.e., selective condition) better discriminated the quality of the documents and the type of information for the task.

Imagine that a group of adolescent students is given a set of texts on a current controversial topic (e.g., the advantages and disadvantages of participating in social media). With these documents available, they are asked to write an argumentative essay explaining the possible advantages and disadvantages of participating in social networks, as part of their class activities. Now consider a second scenario: another group of secondary school students is given the same set of texts, but they are instructed to use information only from the documents they deem relevant for the task, and not from the rest, and to perform the task only from that specific set of selected documents (general vs. selective task instructions).

The above two cases reflect situations that students could face while reading on the Internet on conflicting topics, that is, (1) to solve tasks that require the selection of relevant content from a set of texts, (2) to complete the tasks under different types of instructions (i.e., from more general to more selective), and (3) to read and extract the information from texts varying in their levels of trustworthiness and position toward a topic. In these scenarios, specific competences associated with the use of advanced literacy skills would be essential, as described in recent models of functional reading, such as RESOLV (i.e., Britt et al., 2018 ).

Adolescents use the Internet to complete their school assignments and for personal enjoyment ( Leu and Maykel, 2016 ). The successful handling of complex sets of information is an essential requirement for adolescent students dealing with complex documents and the Internet ( Rouet and Potocki, 2018 ). However, when adolescents attempt to learn or solve a task via the Internet, it may be difficult to decide which information sources to trust, and it becomes critical for young readers to employ effective strategies to deal with documents on the web ( Alexander and The Disciplined Reading and Learning Research Laboratory, 2012 ; Goldman and Scardamalia, 2013 ; Bråten et al., in press ).

Thus, research on how specific instructions and task orientations might support this process becomes especially relevant, as we will discuss in this paper. Specifically, we are concerned with how different task directions might influence students’ critical evaluation of documents (see section “Critical Evaluation of Multiple Documents”). This would add to a growing body of research on how task instructions influence students’ reading of multiple documents in functional reading scenarios (i.e., Rouet and Potocki, 2018 ) and would have clear practical implications for teachers designing learning situations based on multiple documents. We will discuss these issues next.

Critical Evaluation of Multiple Documents

Given the possibilities the Internet nowadays provides for accessing varied sets of documents, students need to develop the skills to access, integrate, and evaluate information critically ( Rouet and Potocki, 2018 ). A first component of the critical evaluation of multiple documents is the analysis of source features (i.e., Bråten et al., 2009 ; Strømsø et al., 2010 ). Source features can include any metadata embedded within or provided outside the body of semantic information that informs on a text's origin, context, and purpose ( Barzilai and Strømsø, 2018 ). Source information should be considered jointly with content information ( Rouet, 2006 ). Thus, young readers should be capable of identifying how documents vary along parameters such as trustworthiness and fit to the task. Readers would prioritize content regarded as reliable and fitting the expectations of the assigned task, whereas they would disregard content from unreliable or task-unrelated sources.

Research has extensively focused on how adolescent students consider source information (i.e., Strømsø et al., 2010 ; Strømsø et al., 2013 ; Bråten et al., 2018 ). How readers make document selections based on source information, content information, or both ( McCrudden et al., 2016 ) has also been analyzed in the literature. According to this research, the analysis of source features is critical as it would enable readers to evaluate and interpret semantic content provided across different texts ( Braasch and Graesser, in press ), and to organize mental representations of what was read ( Rouet et al., 2016 ; Braasch and Bråten, 2017 ; Saux et al., 2017 ).

Consequently, when reading multiple documents on the Internet for school purposes, it would be strategic to be selective and focus on task-related documents. However, the analysis of source information is scarce among secondary school students (i.e., Goldman and Scardamalia, 2013 ). In fact, adolescents rarely attend to, evaluate, and use information about source features when reading on the Internet ( Britt and Aglinskas, 2002 ; Walraven et al., 2009 ; Braasch et al., 2013 ). This is why empirical studies aiming to clarify the task conditions that promote critical evaluation of documents become especially relevant, with direct educational implications.

Accordingly, specific interventions, such as the type of instructions given for reading, may impact how students discriminate the trustworthiness of a document and its task-based relevance ( Macedo-Rouet et al., 2013 ). Several theoretical models describe how competent readers initiate reading by interpreting goals, plans, and values for assigned or self-generated reading experiences, and these interpretations would guide the location and selection of documents to read ( Brand-Gruwel et al., 2009 ; Goldman et al., 2010 ; Rouet and Britt, 2011 ; Leu et al., 2013 ). Specific task directions might support the selection and use of multiple documents for completing school assignments, as we will discuss in this paper.

Recently, Pérez et al. (2018) conducted an intervention study to enhance teenagers' capability to evaluate information quality, focusing on source reliability. Trained students made more references to reliable sources in a transfer task that presented contradictory information across texts. This finding indicates that, when specifically supported by task instructions or training programs, teenagers can critically evaluate information and make informed decisions regarding document relevance.

A second relevant component of the critical evaluation of multiple documents relates to how conflicting documents presenting opposing views are processed. Independent of the nature of the controversy or students' previous positions, textual documents might present different and opposing views on a specific topic, as expressed by their authors. When readers access these documents, they might need to identify conflicting arguments and integrate them to form a coherent view (i.e., Perfetti et al., 1999 ). However, there is evidence that readers have difficulties identifying the different arguments needed to construct an integrated view. Specifically, readers fail to integrate information from the different positions on a controversial issue into their mental models, leading to a one-sided representation of the controversy ( Britt et al., 1999 ). In such situations, readers might have been affected by a text-belief consistency effect ( Richter and Maier, 2017 ), by which information that aligns with readers' previous beliefs, knowledge, or perspective is processed in more detail and better remembered.

To explain the effects that beliefs have on the comprehension of controversial topics in multiple documents, Richter and Maier (2017) proposed the Two-Step Model of Validation. According to this model, in a first step of routine validation, readers encounter text-belief consistent information, which affects memory and comprehension in terms of a text-belief consistency effect. In a second step, however, if readers engage in strategic and deliberate processing of inconsistent information, they can process in detail the information that runs counter to their initial position or beliefs. In sum, engaging readers in active processing of inconsistent information in step two would facilitate the construction of an integrated mental model from a set of controversial texts. If students engage in effortful, elaborative processing to understand the key differences among a set of controversial texts, this appears to produce a more balanced representation in long-term memory, as has been shown with adults ( Wiley, 2005 ; Maier and Richter, 2013 ). We might therefore enhance the processing of inconsistent information by having readers process a set of texts under task instructions that promote active processing, as we will argue in the next section.

Impact of Task Instructions on Students’ Processing of Multiple Conflicting Documents

Reading might be regarded as a goal-directed activity ( McCrudden and Schraw, 2007 ), and different task directions could be presented in the context of school-based assignments to facilitate the students’ processing of a set of controversial texts. McCrudden and Schraw (2007) proposed a general descriptive model of goal-focusing processes in reading. Within this model, relevance instructions, which help to allocate the reading resources strategically, play an important role in learning from text and range from more specific to more general. They describe two types of relevance instructions. Specific relevance instructions highlight discrete text elements, whereas general relevance instructions would cover broad themes, purposes, or contexts for reading (prompts that ask readers to read for a general reason, such as writing a summary).

Similarly, the Task-based Relevance and Content Extraction (TRACE) model ( Rouet, 2006 ) signals the importance of the processing of task instructions and how these guide the strategic decisions students undertake when using a set of documents to solve a specific reading goal. Furthermore, the RESOLV model proposed by Britt et al. (2018 ; see also Rouet et al., 2017) elaborates on the role of task instructions. Based on the task directions received, readers would construct a task model incorporating the plans to solve the reading assignment, a model that would be updated throughout the reading task.

In multiple documents reading situations, different types of reading instructions and tasks have been shown to facilitate the identification of the different opposing views on a specific topic and the creation of an integrated mental model of the documents ( Wiley and Voss, 1999 ; Cerdán and Vidal-Abarca, 2008 ). For instance, in Maier and Richter (2016) , undergraduates read controversial texts on the topic of health risks caused by cell phones. Participants read the texts with the goal of writing either a summary or an argumentative essay, and reading times were collected. When reading to summarize , longer reading times were identified for belief-consistent information. When reading to write an argumentative essay , cognitive resources were allocated in a more balanced way, and equal reading times were identified for belief-consistent and belief-inconsistent texts. Similarly, McCrudden and Sparks (2014) had high-school students read a dual-position text that contained arguments for and against widening a local tunnel. Two types of reading instructions were provided: to focus on evidence and reasons vs. no instruction . Most participants were in favor of widening the tunnel. After reading the materials, beliefs were weaker under the evidence instruction; no changes in beliefs were observed for the other group.

In sum, there is emerging evidence in the literature on how readers' processing of multiple conflicting documents under different task instructions influences students' critical reading. However, more research is needed to explain how different types of reading instruction affect students' processing of, and attention to, relevant document features and their strategic reading decisions when reading from several documents. This is precisely the aim of this study and its main contribution to the continuously growing body of literature on multiple documents and advanced literacy skills in general. We present the study next.

The Current Study

In this paper, we focus on the role of two types of task instructions (i.e., general and selective) when selecting and processing documents, each varying in trustworthiness and content type (i.e., position toward a topic). By selective task instructions , we refer to concrete guidelines that prompt readers to select appropriate documents and contents (i.e., please use information only from two documents which might help you accomplish an essay task), in contrast to general task instructions (i.e., presenting a set of documents students can refer to for completing an essay task). Past research has analyzed how students interpret task instructions (i.e., Llorens and Cerdán, 2012 ) and the impact of different types of instructions on reading single documents (i.e., Cerdán et al., 2009 ). However, the analysis of how varying instructions affect the selection and processing of multiple conflicting documents is novel.

Students were presented with four documents that contained different views on the same topic (i.e., participation in social networks). Each of the documents varied as regards level of trustworthiness (i.e., documents from trustworthy and untrustworthy sources) and position on the topic (in favor, opposed), so every text offered a unique combination of level of trustworthiness and position. Students were instructed to read the texts to write an argument answering the question: “ Would you recommend participating in social networks? Elaborate your answer using the information from the texts. ” To facilitate the selection of documents according to level of trustworthiness and content type, we asked half of the students to use information only from two of the four documents that were available (i.e., selective task instruction ). This condition would provide students with more specific academic instructions for the task that would likely: (1) increase students’ consideration of the trustworthiness of sources and (2) facilitate the discrimination of the type of information presented in the texts.

We predicted that selective task instructions (i.e., to use information from only two documents from a wider set to write an argumentative essay) would increase students’ active engagement in the analysis of critical document features such as source dimensions and content type, in contrast to a more open general instruction (i.e., to read a set of documents to write an argumentative essay). This active processing of document features fostered by a selective task instruction might make students prioritize the processing of trustworthy documents and content showing an inconsistent view, that is, content showing a different position to that held initially by students (i.e., Cerdán and Vidal-Abarca, 2008 ; Richter and Maier, 2017 ), facilitating the construction of a balanced mental model from the documents in conflict.

Materials and Methods

Participants
Sixty-one high-school students ( M age = 16.67 years, SD = 0.68) participated in the study. They were of Caucasian origin, and the sample was gender-balanced. They were selected from two classes of the first year of secondary school, at an urban secondary school in a southern European country. The experiment was contextualized as part of their Spanish language class, where training in document use and advanced reading skills is essential. Within each class, students were randomly assigned to a general condition (use information from all the documents to write the essay, N = 32) or a selective condition (use information from only two documents to write the essay, N = 29). Participants in the two conditions did not differ in comprehension skill ( p > 0.05), as measured by a standardized comprehension test (TEC, Martinez et al., 2008 ).

The study was approved by the school's direction committee. In addition, parents provided written informed consent for participation in the study. This study was carried out following the recommendations of the Research Ethics Committee of the first author's university. To preserve participants' privacy, no personal data were collected; instead, code identifiers were used to gather the different learning products.

Texts and Tasks

Four texts of approximately 300 words each on the topic of social networks were selected, presenting the positive and negative aspects of participating in social networks. The selected pool of texts was regarded as controversial, given the type of information and the authors' positions on the selected topic, which varied across texts. Thus, independent of the researchers' or students' perspective or initial position toward the topic, the selected texts presented a controversy in themselves, as they supported or opposed participation in social networks with divergent arguments.

The texts uniquely varied along two dimensions: level of trustworthiness (i.e., trustworthy vs. untrustworthy) and position on the topic (i.e., showing advantages vs. disadvantages). All texts were selected from original web sources but were adapted for research purposes. The first text was selected and adapted by the experimenters from a web site of the WHO (World Health Organization) and mainly showed a positive view, consistent with the participants' perspective (Title: “ WHO clarifies to the public the benefits of social networks ”). It was classified as a trustworthy and advantages text. The second text, also from a trustworthy source, presented the opposite perspective (trustworthy and disadvantages text, Ministry of Industry alerts of the risks of social networks ). The third text came from an untrustworthy source (a personal blog with no credentials, Jonathan's Blog ) and mainly listed a set of advantages of participating in social networks ( Jonathan explains the benefits of participating in social networks ). Finally, the fourth text also came from an untrustworthy source (i.e., Lazy Corner ) and presented a negative view on the topic ( Lazy corner: main disadvantages and risks of social networks ).

The level of trustworthiness was independently rated by means of a ranking task performed by expert teachers . A pool of six secondary school teachers was given the title, source, and body of the four selected texts and asked to rank-order the texts according to their estimated level of trustworthiness. The level of agreement in their rank orders was greater than 95%; the two highest-ranked texts were coded as trustworthy and the other two as untrustworthy. Discrepancies were resolved through discussion with the main authors of this article.

The texts also varied in position on the topic: two showed primarily the advantages of participating in social networks, while the other two highlighted the main risks and disadvantages. To determine this dimension, a content analysis was performed by the two authors of this paper. Any document including key words or self-disclosing statements (in the title and body of the text) signaling the author's position toward the issue of social networks (i.e., risks, benefits, advantages, disadvantages) was classified either as an advantages document or as a disadvantages document . Each author independently classified each of the four texts according to these parameters. Agreement was higher than 95%, and minor discrepancies were resolved through discussion. Finally, in order to independently validate the classification of the textual materials regarding trustworthiness and position on the topic, the texts were further read and assessed by two independent researchers familiar with multiple documents research. They were given the texts and asked to classify each of them along the two dimensions. The level of agreement with the initial classification was above 98%, and the few minor discrepancies were resolved through discussion.

The task consisted in answering an open-ended question, with the texts available, by providing a recommendation for or against participation in social networks, based on the students' reading of the texts. Half of the students were told to use information from only two of the four documents to answer the question (i.e., selective task instruction ), whereas the rest of the students followed the general indication to read the four available texts and answer the task presented to them (i.e., general task instruction ). The literal wording of the general instruction was as follows: “Below you will find a list of four texts on the topic of social networks that you can access afterwards to complete your task. Please read the texts following the order you wish and answer the following question: Would you recommend participating in social networks? Elaborate your answer using the information from the texts.” Only students in the selective condition received, in addition to the previous information, this specific task instruction: “Please use information from only two of the four available documents to complete your task.”

Both the reading of the texts and the answering of the task were done using Read&Answer ( Vidal-Abarca et al., 2011 ). This software makes it possible to register on-line reading behavior in a controlled manner while simultaneously embedding the tasks to be performed.

Participants were presented with an instructions page where they read the task according to the assigned condition (i.e., general, selective). The four documents were listed below in a google-like display (list of documents available, with references to source and content), and students were told to select and read the sources they considered relevant for their question, following the order they wished. Only participants in the selective condition were asked to use information from only two of the four available documents, even though they were allowed to access and read the whole set of texts.


The four texts and the task were presented to the students using Read&Answer . This software presents the texts and the task on the computer screen and it registers the students’ on-line activity. Read&Answer has successfully been used in similar task-oriented reading scenarios ( Cerdán and Vidal-Abarca, 2008 ; Cerdán et al., 2009 ; Vidal-Abarca et al., 2011 ).

Read&Answer presents readers with a screen showing the full text. All text but the unit currently selected by the reader is masked. Readers unmask a unit by clicking on it; when they unmask another unit, the first one is re-masked. In the present experiment, information was divided into paragraphs; specifically, every text was divided into four paragraphs. From the text screen, readers could access the task screen, which included the wording of the task and a space to answer. It is possible to move from the question screen to the text screens, and vice versa. Read&Answer permits the inclusion of more than one text, as in this experiment.
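The one-unit-at-a-time masking behavior described above can be sketched as a small state machine: only the last-clicked unit is visible, and unmasking a new unit re-masks the previous one. This is an illustrative model only, not the actual Read&Answer implementation; the class and method names are hypothetical.

```python
class MaskedTextScreen:
    """Illustrative sketch (not Read&Answer itself) of moving-window masking:
    all paragraphs are masked except the one the reader last clicked."""

    def __init__(self, paragraphs):
        self.paragraphs = list(paragraphs)
        self.unmasked = None  # index of the currently visible paragraph

    def click(self, index):
        # Unmasking a new unit automatically re-masks the previous one,
        # since only one index is ever stored.
        self.unmasked = index

    def render(self):
        # Masked paragraphs are shown as blocks of equal length,
        # preserving layout while hiding content.
        return [p if i == self.unmasked else "█" * len(p)
                for i, p in enumerate(self.paragraphs)]
```

A logging hook on `click` would yield per-paragraph reading times of the kind analyzed below.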

In this experiment, students first viewed a table of contents including the instructions for reading and for performing the task. In addition, students could view the list of texts in the form of a google-like list. This list of documents included content and trustworthiness cues, which should guide students’ decision to access and read specific documents. Through links included in the main screen, students could either go to the task screen or navigate across the four texts.

The experiment included two sessions of approximately 1 h each, which took place within the students' regular class activity thanks to the collaboration of the school teachers. In the first session, conducted on paper, participants completed a comprehension test and were then assessed on their general position toward the topic of study. By means of a general inquiry question (i.e., what is your view about social networks? ), students' general position toward the conflicting set of documents was measured. Students responded by marking one of the following options: In favor/against/no opinion . Notably, 95% of the students marked the in favor option, whereas only 5% of the sample marked no opinion. The second session was performed on the computer using Read&Answer ( Vidal-Abarca et al., 2011 ).

Participants were presented with an instructions page where they read the task according to the assigned condition (i.e., general, selective). The four documents were listed in the form of a google-like display (a list of available documents, with references to source and content type), and students were told to select and read the documents that best helped them solve their task. No time limit was imposed; students were allowed to read the texts and perform the task at will, with the documents available.

Students could access the texts and the task screen following the order they wished. Only participants in the selective condition were told to use information from only two of the four available documents when elaborating the answer. That is to say, they might read documents to discard information, but the content included in the answers should be extracted and elaborated from only two documents.

Analysis and Results

Impact of General vs. Selective Task Instructions on Students' On-Line Reading Behavior in Multiple Documents Settings

We first analyzed a set of on-line measures that represent students' strategic reading when dealing with multiple documents, under the influence of different types of instructions and documents varying in level of trustworthiness and content type. To test these effects, several one-way ANOVAs were conducted with type of instruction ( Selective vs. General ) as the independent factor and, as dependent variables, the behavioral indicators listed below, which consistently take into account the type of text (trustworthy vs. untrustworthy) and the position on the topic (advantages vs. disadvantages). IBM SPSS Statistics v24 was used as the statistical package.
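For a two-group design like the one above, a one-way ANOVA reduces to comparing between-group and within-group variance, and partial η² equals SS_between / (SS_between + SS_within). A minimal pure-Python sketch of these formulas follows; the reading-time values in the usage example are hypothetical illustrations, not the study's data.

```python
def one_way_anova(*groups):
    """One-way ANOVA from raw scores.

    Returns (F, df_between, df_within, partial_eta_sq).
    In a one-way design, partial eta squared equals
    SS_between / (SS_between + SS_within).
    """
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-groups sum of squares: group sizes times squared
    # deviations of group means from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    partial_eta_sq = ss_between / (ss_between + ss_within)
    return f_stat, df_between, df_within, partial_eta_sq


# Hypothetical link-reading times (ms) for two instruction groups
selective = [36000, 40000, 33000]
general = [18000, 21000, 16000]
f_stat, dfb, dfw, eta_p = one_way_anova(selective, general)
```

With SciPy available, `scipy.stats.f_oneway` returns the same F statistic together with its p value.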

Time Reading Table of Contents and Links

We analyzed the time (in milliseconds) spent reading the different types of links included in the table of contents. These measures would reflect students' analysis of the different texts in terms of relevance for the task's goal and level of trustworthiness, and they should differ depending on the general vs. selective task instruction. The following specific measures were analyzed: time reading advantages text links, time reading disadvantages text links, time reading trustworthy links, time reading untrustworthy links, and total time reading all the links (see Table 1 ). Significant results were found for time reading disadvantages links, time reading trustworthy links, and total time reading all the links .


Table 1 . Means and standard deviations for table of contents reading times (in milliseconds).

Regarding time reading disadvantages links , students in the selective condition read them for longer ( M = 36520.89, SD = 21844.45) than those assigned to the general group ( M = 18251.69, SD = 17842.64), F (1, 57) = 13.35, p = 0.001, partial η 2 = 0.19. This reflects the greater awareness of students instructed to be selective, who decided to focus on information that was probably new to them and did not support their initial positive view of the benefits of social networks.

For the variable time reading trustworthy links , significant differences were found between the general ( M = 19098.19, SD = 18474.41) and selective conditions ( M = 34807.14, SD = 22546.30), F (1, 57) = 8.74, p = 0.005, partial η 2 = 0.13. Students told to use information from only two documents to perform the writing task were more aware of the documents' levels of trustworthiness. Hence, awareness of sources was increased under the selective task instruction, as predicted.

Finally, students under the selective task instruction dedicated more time to reading all the links ( M = 115050.64, SD = 57578.80) than those under the general instruction ( M = 78250.31, SD = 53541.46), F (1, 57) = 6.74, p = 0.012, partial η 2 = 0.107. This result reflects the greater processing effort invested in the table of contents by the selective group. Given that they were told to decide on the relevance of the documents for their task and to use information from only two documents to elaborate their essay, they dedicated greater resources to reading the titles of the documents, where information about the type of content and the trustworthiness of each text could be found. The extent to which this prior analysis had an impact on actual reading times and on the task should be observed by taking into account other on-line and off-line measures. We turn next to measures related to the reading process (i.e., reading of documents) and to the elaboration of the essay task.

Time Reading Texts

We analyzed the amount of time (in milliseconds) dedicated to reading the different texts, as an indicator of the processing effort invested in reading the documents. It should be taken into account that this measure included both the initial time reading the documents before the task and possible revisits during task completion. Students in the selective group were allowed to read all the documents, but could use information from only two of them when writing the answer.

The following reading time measures were analyzed: time reading advantages texts, time reading disadvantages texts, time reading trustworthy and untrustworthy texts, and total time reading all texts (see Table 2 ). Significant effects were found for time reading advantages texts and time reading disadvantages texts.


Table 2 . Means and standard deviations for texts reading times (in milliseconds).

Texts showing the main advantages of participating in social networks were read for longer by students in the general instruction condition ( M = 213383.00, SD = 116265.38) than by students in the selective instruction group ( M = 158787.55, SD = 91155.09), F (1, 57) = 3.98, p = 0.051, partial η 2 = 0.065. The precise opposite pattern was found for the texts showing disadvantages, F (1, 57) = 6.45, p = 0.014, partial η 2 = 0.102, for the selective ( M = 276509.34, SD = 98021.309) and general groups ( M = 207367.94, SD = 110969.77), respectively. These results show how the selective instruction seemed to induce a focus on information that ran counter to students' initial positive view, as measured in this experiment, toward the use of social networks, thus showing the facilitative effect of instructions that enhance critical analysis of texts by directing attention to inconsistent information, as predicted by Richter and Maier (2017) .

No significant differences were found across groups for the total time reading all the documents. To test whether the selective group followed the instruction to focus on two texts, we additionally considered the mean number of texts accessed in the experimental session. Significant differences were obtained between the general group ( M = 2.72, SD = 0.92) and the selective condition ( M = 1.83, SD = 0.38), F (1, 57) = 22.35, p < 0.05, partial η 2 = 0.28. Jointly considered, these two measures show that participants dedicated a similar amount of time to reading all the documents, regardless of condition. However, the groups seemed to differ in their pattern of reading (i.e., prioritizing documents, as shown by the number of texts accessed) and in which information they decided to use in their task, as we will see in the next set of measures.

Impact of General vs. Specific Task Instructions on Task Performance When Writing Based on Multiple Conflicting Documents

We analyzed the type of idea included in the task, using a coding system based on the analysis of independent ideas, similar to that successfully used in previous studies on multiple documents (i.e., Cerdán and Vidal-Abarca, 2008 ; Linderholm et al., 2016 ). Two experimenters coded all the protocols, with an interrater agreement higher than 90%. As with the on-line behavioral measures, several one-way ANOVAs were conducted, with type of instruction ( Selective vs. General ) as the independent factor and the learning indicators listed below as the dependent variables. IBM SPSS Statistics v24 was used as the statistical package.
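To make the analytic approach concrete, the statistics reported throughout this section (a one-way ANOVA per dependent variable, with partial η 2 as effect size) can be sketched as follows. This is a minimal illustration in Python using SciPy; the group scores, group sizes, and variable names are invented for demonstration and are not the study's actual data.

```python
# Hypothetical sketch of a one-way ANOVA comparing two instruction groups,
# plus partial eta squared. Data values are invented for demonstration only.
import numpy as np
from scipy import stats

selective = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0])  # e.g., ideas per essay
general = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.0])

# One-way ANOVA: with two groups this is equivalent to an independent t test.
f_stat, p_value = stats.f_oneway(selective, general)

# Partial eta squared for a one-way design: SS_effect / (SS_effect + SS_error).
grand_mean = np.mean(np.concatenate([selective, general]))
ss_effect = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (selective, general))
ss_error = sum(((g - g.mean()) ** 2).sum() for g in (selective, general))
partial_eta_sq = ss_effect / (ss_effect + ss_error)
```

In a one-way design, partial η 2 coincides with η 2 (the proportion of total variance explained by the group factor), which is why it can be computed directly from the effect and error sums of squares.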

We focused on the following variables, relevant for our design (see Table 3 ): ideas showing advantages; ideas showing disadvantages; ideas from trustworthy documents; and ideas from untrustworthy documents . Specifically, we computed the absolute frequency with which these different ideas appeared in students’ essays. In addition, we counted the number of explicit references to sources (title and author information), mainly trustworthy and untrustworthy. This latter variable should reflect how students encoded the different documents and which ones they were prioritizing.


Table 3 . Means and standard deviations for task measures (absolute frequencies).

Significant effects were found for the variables ideas showing advantages and ideas from untrustworthy documents . The general condition included a higher number of ideas showing advantages ( M = 3.47, SD = 1.70) than the selective group ( M = 2.00, SD = 1.58), F (1, 57) = 11.57, p = 0.001, partial η 2 = 0.169. In addition, the general group also included more ideas from untrustworthy documents ( M = 3.78, SD = 2.78) than the selective group ( M = 2.14, SD = 2.56), F (1, 57) = 5.43, p = 0.023, partial η 2 = 0.087. Finally, a greater number of explicit references to sources was found for the selective group ( M = 0.62, SD = 0.90) than for the general condition ( M = 0.22, SD = 0.49), F (1, 57) = 4.69, p = 0.035, partial η 2 = 0.076. The differences might be even larger, as the total number of texts available to cite in each condition (two for the selective vs. four for the general group) was not considered in the calculation of the dependent measure. In general, this latter result supports the initial prediction that students given concrete and specific task instructions pay greater attention to the type of document they should use for a precise task and are better able to differentiate between more and less trustworthy documents.

We designed this study to analyze the role of general and selective task instructions when selecting and reading documents that vary in trustworthiness and position on a topic. With selective task instructions, we refer to concrete guidelines as to how to read the texts and how to select appropriate documents and contents, in contrast to general task instructions. How to provide appropriate orientations to students to facilitate performance and learning from texts is particularly relevant, both empirically and in practical terms, as it might affect instructors’ daily practices. Past research has analyzed how students interpret task instructions (i.e., Llorens and Cerdán, 2012 ) and the impact of different types of instructions for reading single documents (i.e., Cerdán et al., 2009 ). However, the analysis of how varying instructions affect the selection and processing of multiple conflicting documents in electronic learning environments is, in our view, novel.

Students were presented with four different types of documents on the computer screen and were told to write an essay based on the information from the texts. Only half of the students were told to use information from only two of the four texts to write their essay (i.e., the selective condition), whereas this specific instruction was not provided to the general group. The texts dealt with the topic of social networks and varied in trustworthiness (i.e., trustworthy and untrustworthy) and position on the topic (i.e., showing advantages or disadvantages). We predicted that a selective task instruction to read a set of conflicting documents would especially facilitate students’ attention to and deeper processing of trustworthy documents, which should be apparent both when reading the documents and when writing the task assignments. In line with this, training programs enhancing critical reading strategies have successfully increased sourcing behaviors in adolescents ( Pérez et al., 2018 ). In addition, we expected that a selective task instruction to focus on a subset of given documents would lead students to especially process information presenting a view different from the one they held initially. According to Richter and Maier (2017) , task instructions that make students read information critically help to overcome the text-belief consistency effect. We thus also expected that a selective instruction would make students especially select and more deeply process the texts showing the disadvantages of social networks, a type of information that would complement students’ initially positive position.

We analyzed students’ reading behavior in the table of contents page and when reading the documents (i.e., reading time measures). We also considered the type of ideas included in their tasks and the inclusion of explicit references to sources. On-line reading patterns supported our hypothesis: students with selective reading instructions dedicated greater resources especially to the initial reading of the table of contents. In the analysis of reading times for the actual texts, these students also showed a tendency to focus on information that ran counter to their initial positive view toward the topic, that is, a focus on new and inconsistent information ( Richter and Maier, 2017 ), which suggests that students in this condition read in a more selective and critical manner. Finally, the complementary analysis of task products reinforced the prediction that providing selective task instructions helps students write based on information from trustworthy documents and include relevant information. In this article, the main focus of analysis was the processes observed while reading on-line, registered through the tool Read&Answer ( Vidal-Abarca et al., 2011 ). Similar to other research using the same tool and a similar methodological approach (i.e., Cerdán et al., 2011 ), our main goal was to identify a set of reading strategies under different experimental conditions. Complementarily, task products were analyzed, which helped to further interpret the experimental manipulations. Future designs on the impact of task manipulations could focus specifically on the relationship between on-line measurements and their differential contribution to task and learning outcomes.

In sum, our results show that students who were instructed to use two of the four documents for the task were more selective, deciding to read only those documents that had two characteristics: (1) they were more trustworthy, and (2) they went against students’ initial positive view toward participating in social networks. This was reflected in their reading behavior. In addition, these students included more ideas from trustworthy documents in their answers, as well as more references to sources. These results suggest that providing specific indications that foster students’ reflection on which documents might be appropriate for the task is an effective means of increasing students’ discrimination of sources and of enhancing critical reading of multiple conflicting documents. This result is especially relevant nowadays, as secondary school students will encounter different types of documents to be used for different purposes ( Rouet and Potocki, 2018 ). The analysis of which types of instructions and strategies help students be more critical when selecting a concrete document for a specific purpose is certainly essential, especially when reading on-line.

This research has some limitations, which we acknowledge. First, we did not measure students’ perceived sense of difficulty with appropriate instruments such as the Cognitive Load questionnaire for Multiple Document Reading (CL-MDR; Cerdan et al., 2018 ). The effects found in the selective condition might have been due to a reduction in cognitive load caused by the smaller number of texts to focus on, rather than to the specific instruction in itself. Future research might consider measuring students’ level of perceived effort by means of valid instruments. Second, we did not measure the extent to which students were capable of identifying and integrating the different arguments included in the documents in a final learning measure, which is a critical aspect of multiple-document learning. In this study, our focus was mainly the analysis of the specific ideas included in the open response, based on the inspection of the documents (i.e., Cerdán and Vidal-Abarca, 2008 ). The availability of the documents while composing the task allowed us to determine with considerable precision the origin of these ideas in terms of the type of document, which was the ultimate goal of this study. However, the analysis of integrative aspects of essay writing in studies based on multiple documents remains a need and a challenge for future studies ( Primor and Katzir, 2018 ).

Moreover, we did not initially measure the extent to which students viewed the selected topic as controversial, nor the degree of change after the experiment. Finally, there might be individual differences in how the instructions were understood, which we did not consider. In fact, the specific wording of the instructions (i.e., how openly or narrowly an instruction is worded) might influence students’ responses and learning behavior. Future research should help to clarify these issues and overcome some of the previous limitations, as well as examine the effects of different types of instructions in open web-based environments. In this way, the situations we test might more closely resemble the reading tasks students encounter when reading for academic purposes or for leisure.

Ethics Statement

The study was approved by the school principal. Both the school and the parents provided informed written consent to participate in the study. This study was carried out in accordance with the recommendations of the Research Ethics Committee of the first author’s university. No further approval was required per institutional and national guidelines and regulations. In order to guarantee the privacy of participants, no personal data were provided. Code identifiers were used to gather the different learning materials from the students.

Author Contributions

RC is mainly responsible for designing and conducting the study. MM was involved in data analysis and interpretation of results.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Alexander, P. A., and The Disciplined Reading and Learning Research Laboratory (2012). Reading into the future: competence for the 21st century. Educ. Psychol. 47, 259–280. doi: 10.1080/00461520.2012.722511

Barzilai, S., and Strømsø, H. I. (2018). “Individual differences in multiple document comprehension” in Handbook of multiple source use . eds. J. L. G. Braasch, I. Bråten, and M. T. McCrudden (New York, NY: Routledge).

Braasch, J. L. G., and Bråten, I. (2017). The discrepancy-induced source comprehension (D-ISC) model: basic assumptions and preliminary evidence. Educ. Psychol. 52, 167–181. doi: 10.1080/00461520.2017.1323219

Braasch, J. L. G., Bråten, I., Strømsø, H. I., Anmarkrud, Ø., and Ferguson, L. E. (2013). Promoting secondary school students’ evaluation of source features of multiple documents. Contemp. Educ. Psychol. 38, 180–195. doi: 10.1016/j.cedpsych.2013.03.003

Braasch, J. L. G., and Graesser, A. C. (in press). “Avoiding and overcoming misinformation on the internet” in Critical thinking in psychology. 2nd Edn. eds. R. J. Sternberg and D. F. Halpern (Cambridge, UK: Cambridge University Press).

Brand-Gruwel, S., Wopereis, I., and Walraven, A. (2009). A descriptive model of information problem solving while using internet. Comput. Educ. 53, 1207–1217. doi: 10.1016/j.compedu.2009.06.004

Bråten, I., Braasch, J. L. G., and Salmerón, L. (in press). “Reading multiple and non-traditional texts: new opportunities and new challenges” in Handbook of reading research . Vol. V, eds. E. B. Moje, P. Afflerbach, P. Enciso, and N. K. Lesaux (New York: Routledge).

Bråten, I., Brante, E. W., and Strømsø, H. I. (2018). What really matters: the role of behavioural engagement in multiple document literacy tasks. J. Res. Read. 41, 680–699. doi: 10.1111/1467-9817.12247

Bråten, I., Stromso, H., and Britt, A. (2009). Trust matters: examining the role of source evaluation in students’ construction of meaning within and across multiple texts. Read. Res. Q. 44, 6–28. doi: 10.1598/RRQ.44.1.1

Britt, M. A., and Aglinskas, C. (2002). Improving students’ ability to identify and use source information. Cogn. Instr. 20, 485–522. doi: 10.1207/s1532690xci2004_2

Britt, M. A., Perfetti, C. A., Sandak, R., and Rouet, J. F. (1999). “Content integration and source separation in learning from multiple texts” in Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso . eds. S. R. Goldman, A. C. Graesser, and P. van den Broek (Mahwah, NJ: Erlbaum), 209–233.

Britt, M. A., Rouet, J.-F., and Durik, A. M. (2018). Literacy beyond text comprehension: A theory of purposeful reading . New York: Routledge.

Cerdan, R., Candel, C., and Leppink, J. (2018). Cognitive load and learning in the study of multiple documents. Front. Educ. 3:59. doi: 10.3389/feduc.2018.00059

Cerdán, R., Gilabert, R., and Vidal-Abarca, E. (2011). Selecting information to answer questions: strategic individual differences when searching texts. Learn. Individ. Differ. 21, 201–205. doi: 10.1016/j.lindif.2010.11.007

Cerdán, R., and Vidal-Abarca, E. (2008). The effects of tasks on integrating information from multiple documents. J. Educ. Psychol. 100, 209–222. doi: 10.1037/0022-0663.100.1.209

Cerdán, R., Vidal-Abarca, E., Martínez, T., Gilabert, R., and Gil, L. (2009). Impact of question-answering tasks on search processes and reading comprehension. Learn. Instr. 19, 13–27. doi: 10.1016/j.learninstruc.2007.12.003

Goldman, S. R., Lawless, K. A., Gomez, K. W., Braasch, J. L. G., MacLeod, S., and Manning, F. (2010). “Literacy in the digital world: comprehending and learning from multiple sources” in Bringing reading research to life: Essays in honor of Isabel Beck . eds. M. C. McKeown and L. Kuncan (New York, NY: Guilford), 257–284.

Goldman, S., and Scardamalia, M. (2013). Managing, understanding, applying, and creating knowledge in the information age: next-generation challenges and opportunities. Cogn. Instr. 31, 255–269. doi: 10.1080/10824669.2013.773217

Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., and Henry, L. A. (2013). “New literacies: a dual-level theory of the changing nature of literacy, instruction, and assessment” in Theoretical models and processes of reading . 6th Edn. eds. D. E. Alvermann, N. J. Unrau, and R. B. Ruddell (Newark, DE: International Reading Association), 1150–1181.

Leu, D. J., and Maykel, C. (2016). Thinking in new ways and in new times about reading. Literacy Res. Instr. 55, 122–127. doi: 10.1080/19388071.2016.1135388

Linderholm, T., Dobson, J., and Yarbrough, M. B. (2016). The benefit of self-testing and interleaving for synthesizing concepts across multiple physiology texts. Adv. Physiol. Educ. 40, 329–334. doi: 10.1152/advan.00157.2015

Llorens, A. C., and Cerdán, R. (2012). Assessing the comprehension of questions in task-oriented reading. Rev. Psicodidáctica 17, 233–252. doi: 10.1387/RevPsicodidact.4496

Macedo-Rouet, M., Braasch, J. L., Britt, M. A., and Rouet, J. F. (2013). Teaching fourth and fifth graders to evaluate information sources during text comprehension. Cogn. Instr. 31, 204–226. doi: 10.1080/07370008.2013.769995

Maier, J., and Richter, T. (2013). Text-belief consistency effects in the comprehension of multiple texts with conflicting information. Cogn. Instr. 31, 151–175. doi: 10.1080/07370008.2013.769997

Maier, J., and Richter, T. (2016). Effects of text-belief consistency and reading task on the strategic validation of multiple texts. Eur. J. Psychol. Educ. 31, 479–497. doi: 10.1007/s10212-015-0270-9

Martinez, T., Vidal-Abarca, E., Sellés, P., and Gilabert, R. (2008). Evaluación de estrategias y procesos de comprensión: El test de procesos de comprensión. Infanc. Aprendizaje 31, 319–332. doi: 10.1174/021037008785702956

McCrudden, M. T., and Schraw, G. (2007). Relevance and goal-focusing in text processing. Educ. Psychol. Rev. 19, 113–139. doi: 10.1007/s10648-006-9010-7

McCrudden, M. T., and Sparks, P. C. (2014). Exploring the effect of task instructions on topic beliefs and topic belief justifications: a mixed methods study. Contemp. Educ. Psychol. 39, 1–11. doi: 10.1016/j.cedpsych.2013.10.001

McCrudden, M. T., Stenseth, T., Bråten, I., and Strømsø, H. I. (2016). The effects of author expertise and content relevance on document selection: a mixed methods study. J. Educ. Psychol. 108, 147–162. doi: 10.1037/edu0000057

Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., et al. (2018). Fostering teenagers’ assessment of information reliability: effects of a classroom intervention focused on critical source dimensions. Learn. Instr. 58, 53–64. doi: 10.1016/j.learninstruc.2018.04.006

Perfetti, C. A., Rouet, J. F., and Britt, M. A. (1999). “Toward a theory of documents representation” in The construction of mental representations during reading . eds. H. van Oostendorp and S. R. Goldman (Mahwah, NJ: Erlbaum), 99–122.

Primor, L., and Katzir, T. (2018). Measuring multiple text integration: a review. Front. Psychol. 29, 1–16. doi: 10.3389/fpsyg.2018.02294

Richter, T., and Maier, J. (2017). Comprehension of multiple documents with conflicting information: a two-step model of validation. Educ. Psychol. 52, 148–166. doi: 10.1080/00461520.2017.1322968

Rouet, J. F. (2006). The skills of document use: From text comprehension to web-based learning . Mahwah, NJ: Erlbaum.

Rouet, J. F., and Britt, M. A. (2011). “Relevance processes in multiple document comprehension” in Text relevance and learning from text . eds. M. T. McCrudden, J. P. Magliano, and G. Schraw (Greenwich, CT: Information Age), 19–52.

Rouet, J. F., Britt, M. A., and Durik, A. M. (2017). RESOLV: readers’ representation of reading contexts and tasks. Educ. Psychol. 52, 200–215. doi: 10.1080/00461520.2017.1329015

Rouet, J. F., Le Bigot, L., de Pereyra, G., and Britt, M. A. (2016). Whose story is this? Discrepancy triggers readers’ attention to source information in short narratives. Read. Writ. 29, 1549–1570. doi: 10.1007/s11145-016-9625-0

Rouet, J. F., and Potocki, A. (2018). From reading comprehension to document literacy: learning to search for, evaluate and integrate information across texts. Infanc. Aprendizaje 41, 415–446. doi: 10.1080/02103702.2018.1480313

Saux, G., Britt, A., Le Bigot, L., Vibert, N., Burin, D., and Rouet, J. F. (2017). Conflicting but close: readers’ integration of information sources as a function of their disagreement. Mem. Cogn. 45, 151–167. doi: 10.3758/s13421-016-0644-5

Stromso, H., Braten, I., and Britt, A. (2010). Reading multiple texts about climate change: the relationship between memory for sources and text comprehension. Learn. Instr. 20, 192–204. doi: 10.1016/j.learninstruc.2009.02.001

Strømsø, H. I., Bråten, I., Britt, M. A., and Ferguson, L. (2013). Spontaneous sourcing among students reading multiple documents. Cogn. Instr. 31, 176–203. doi: 10.1080/07370008.2013.769994

Vidal-Abarca, E., Martinez, T., Salmerón, L., Cerdán, R., Gilabert, R., Gil, L., et al. (2011). Recording online processes in task-oriented reading with Read&Answer. Behav. Res. Methods . 43, 179–192. doi: 10.3758/s13428-010-0032-1

Walraven, A., Brand-Gruwel, S., and Boshuizen, H. P. (2009). How students evaluate information and sources when searching the World Wide Web for information. Comput. Educ. 52, 234–246. doi: 10.1016/j.compedu.2008.08.003

Wiley, J. (2005). A fair and balanced look at the news: what affects memory for controversial arguments? J. Mem. Lang. 53, 95–109. doi: 10.1016/j.jml.2005.02.001

Wiley, J., and Voss, J. F. (1999). Constructing arguments from multiple sources: tasks that promote understanding and not just memory for text. J. Educ. Psychol. 91, 301–311. doi: 10.1037/0022-0663.91.2.301

Keywords: comprehension, multiple documents, functional reading, task-oriented reading, on-line reading

Citation: Cerdán R and Marín MC (2019) The Role of General and Selective Task Instructions on Students’ Processing of Multiple Conflicting Documents. Front. Psychol . 10:1958. doi: 10.3389/fpsyg.2019.01958

Received: 17 January 2019; Accepted: 08 August 2019; Published: 03 September 2019.

Copyright © 2019 Cerdán and Marín. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Raquel Cerdán, [email protected]

Am J Pharm Educ. 2020 Aug; 84(8)

The Psychology of Following Instructions and Its Implications

Sabrina Dunham

a University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Chapel Hill, North Carolina

Adam M. Persky

b Associate Editor, American Journal of Pharmaceutical Education , Arlington, Virginia

The ability to follow instructions is an important aspect of everyday life. Depending on the setting and context, following instructions results in outcomes that have various degrees of impact. In a clinical setting, following instructions may affect life or death. Within the context of the academic setting, following instructions or failure to do so can impede general learning and development of desired proficiencies. Intuitively, one might think that following instructions requires simply reading instructional text or paying close attention to verbal directions and performing the intended action afterward. This commentary provides a brief overview of the cognitive architecture required for following instructions and will explore social behaviors and mode of instruction as factors further impacting this ability.


Following instructions is an important ability to practice in everyday life. Within an academic setting, following instructions can influence grades, learning subject matter, and correctly executing skills. In this commentary, we provide an overview of the primary factors that influence the ability of an individual to follow instructions. We translate these findings from the psychological literature into practical guidelines to follow in the educational setting.

Literature on following instructions first surfaced in the late 1970s. 1 Researchers observed a subset of housewives who demonstrated a preference to tinker with a new home appliance to get it started or watch a demonstration video on how to set it up rather than read the accompanying instruction manual. Since then, numerous factors that influence following instructions have been investigated including a person’s working memory capacity, 2-6 societal rules, 7-9 history effects, 7 self-regulatory behavior, 10,11 and instruction format. 3,6 Although not completely independent of each other, these factors warrant some individual attention to better understand their implications on following instructions.

Working Memory and Following Instructions

Working memory is the brain’s workbench, linking perception, attention, and long-term memory. 12,13 As an example, in the classroom setting, learners may receive information visually from slides and/or auditorily from instructor narration. However, only items that learners pay attention to within the environment enter their working memory. These items are then processed, resulting in the formation of a mental representation (ie, encoding) that effectively moves from working memory to long-term storage. Thus, working memory performance is an important intermediary between perception and learning. Because working memory capacity is limited, 14 a person’s ability to follow instructions may be impacted if the instructional load is greater than that capacity, ultimately leading to information loss (for more information, explore cognitive load theory 15,16 ). This loss of information may be more pronounced when a task must be performed immediately and the presentation rate of instructions cannot be controlled by the user. Imagine a student named Dennis. During class, Dennis is nervous about an upcoming examination and this emotional state preoccupies his working memory, leading to, in that moment, a lower working memory performance. As a result, when the professor gives verbal instructions for an upcoming assignment, the amount of instructional load supersedes Dennis’ capacity to hold on to those instructions in his working memory. Because he cannot hold on to those instructions, he is less likely to store them in his long-term memory and will not be able to refer to them later when completing the task. To summarize, the ability to hold instructions within working memory is necessary to execute the desired function; thus low working memory performance can compromise a student’s ability to follow instructions. 2 If a student cannot process or hold instructions in working memory, they will probably fail to complete a given task correctly.

There are two potential strategies to assist the learner in this situation. One strategy is to have the learner immediately act on the received information. 17 A common example of this is the teach-back method, which is a practice of enactment. The practice of enactment has demonstrated greater retention of new information. 4,5 This line of research has shown that the accuracy of recalling instructions was increased when immediately after instruction, actions were performed at both the initial learning phase (ie, encoding) and later during recall. The second strategy is to use different forms of instructions (eg, written and verbal), which allows the learner to control the rate of presentation. If the learner can control the rate, they can review the instructions as needed or go at a slower pace to fully encode the instructions. 18,19

Societal Rules and History Effects

Following instructions is a behavior, and most human behavior depends on social context. Part of the social context is the presence of another individual. The mere presence effect is the phenomenon whereby human behavior changes when another human is around. 9 The presence of another person can make an individual more pliant. Pliance describes behavior that is controlled by a socially mediated consequence. As an example, if an instructor tells a student to write their name at the top of a test sheet and the student does so to gain the instructor’s approval, this is a ply and the student is being pliant. Imagine a student, Angela. She follows instructions because she feels it is a professional expectation held by her mentors and peers. Angela will be pliant because following instructions has a social consequence. For this to occur, however, the instructor must monitor the completion of the task, possess the ability to impose a consequence, and observe the effect of the consequence on the student. Donadeli and colleagues 8 explored the effect of the magnitude of nonverbal consequences, monitoring, and social consequences on instruction following. They observed that the presence of an observer and social reprimand for not following instructions improved the rates at which people followed instructions. This suggests that societal constructs, such as deference to authority figures and the fear of reprimand, may be drivers in motivating people to follow directions.

There are two possible ways to address societal effects on following instructions. The first way is to establish an expectation of professionalism by explaining why instruction following is important. This would be consistent with aspects of social identity theory. 20,21 The second way is to create the fear of reprimand. In this case, faculty members hold students responsible for following the rules. This could be with respect to assignment formatting, assignment deadlines, or other aspects that might be tied to a penalty.

Following instructions is affected by the presence of another person even if there is no history of reinforcement for such behavior, suggesting that instructional control may be strengthened by social contingencies. 7,8 However, societal rules can lead to history effects. If students never receive feedback on or consequences for failing to follow instructions, history effects dictate that they will continue that behavior. Now imagine a student named Amber. Amber wrote down the instructions during class but did not follow them because she generally does well on her assignments despite not completely following the instructions. As such, she refrained from following the rules because no consequence was associated with not following them.

Metacognition and Self-regulation

Following instructions also depends on self-regulation, ie, a person’s awareness of their own behavior to act in a manner that optimizes their best long-term interests. 22 To do so, an individual must be aware of their own thoughts and actions. This awareness plays a role in metacognitive monitoring, or a person’s monitoring of their own thoughts and behaviors.

Metacognition has been described as thinking about thinking. 23 At its core, it is about planning, monitoring progress, and evaluating the completion of a process. For instance, if a student is asked to conduct a journal club meeting, the planning stage would involve gaining an understanding of what is required to successfully complete a journal club meeting. This could include time needed for completion or where to look for an article. The monitoring phase involves staying aware while reviewing the article and pulling out key information. The final part, evaluation, involves checking the work to determine whether goals were met.

Imagine a student named Craig. Craig wrote down instructions for an assignment, but after completing the assignment, he did not review the instructions to ensure he followed them correctly. He failed to monitor his progress. As such, the ability to be metacognitively aware can be a key piece in following instructions. In this case, individuals may not follow instructions because they are poor monitors of their learning. 22,24-28 Students may not adequately plan before tackling their assignment, such as by reading instructions beforehand. Next, they may not monitor their progress during completion of the assignment. And finally, once students think they have completed the task, they may not go back and read the instructions to ensure they have fulfilled all expectations. To help them with this and other aspects of instruction, students may need to use accountability (societal rules) as a primary source of motivation. Without accountability, students may not follow instructions, thus perpetuating poor metacognitive skills, leading to unawareness of what they know, what they do not know, and the process to correct errors. A strategy may be the use of checklists to help students monitor their thoughts during a process (see Tanner 29 and Medina and colleagues 30 for a review of methods to develop metacognition).

Verbal vs Written Instructions

When examining best practices for conveying instructions to learners, the instructor should consider whether instructions are best retained and applied if received in a verbal versus a written format. To date, no published studies have examined whether one format offers greater benefit than the other; however, one study explored both formats in relation to working memory. 6

Written instructions are efficient because large amounts of detail can be provided that students can read rapidly. Thus, step-by-step manuals can be found for almost all electronic devices. While there is a large body of literature describing the mechanics of how we read, there are some important points to underscore. 6 When reading and following instructions, a person will act in the same sequence in which action items are presented in the text. In the television show MASH (season 1, episode 20), one of the characters was instructed to “…cut wires leading to the clockwork fuse at the head, but first remove the fuse.” He proceeded to cut the wires before removing the fuse. He acted in the same sequence in which the instructions were presented but failed to follow the actual instructions. This raises an important point: individuals are more likely to remember instructions when the order is consistent with how events occur. 6 Writing instructions according to the sequence of actions the reader needs to take may lead to better results. For example, “do A before doing B” is a better way of wording instructions than “before doing A, do B,” as illustrated in the above scenario.

Spoken instructions are advantageous in face-to-face interactions (eg, within the classroom). Spoken instructions are processed through the phonological loop, a component of working memory focused on verbal information, making them flexible and convenient. Intrinsically, listening requires less effort than reading. Spoken words can also be paired with visual aids to guide action, such as in measuring blood pressure or administering an immunization. 6 Remarkably, individuals cannot read and follow visual objects at the same time. Combining text with pictures can be more taxing to working memory than combining spoken words and visuals. A drawback associated with spoken words is the rate of presentation. While the speed at which text is read can be controlled by the end user, the instructor’s speed of speech cannot. The phonological loop mediates the ability to hold and process auditory information. 13,31 Items (bits of information) in the phonological store can rapidly decay, and because items are usually chained in such a way that an item primes the next item, 31 one lost step can lead to the loss of all subsequent steps (eg, if a student cannot remember step 3, she is unlikely to recall any step after that). To prevent this loss, people tend to “rehearse” instructions by repeating them to themselves. 3 Access to both written instructions and verbal instructions may prove beneficial, as written instructions can be referred to if any verbal instruction is missed.

Several factors can impact a student’s ability to follow instructions. Recommendations to increase the probability of learners following instructions are available within the literature ( Table 1 ). While these modalities may not guarantee success, these recommendations should increase the probability that most students will follow instructions. Although we cannot extrapolate from current literature whether one mode of instruction delivery is preferred over another, we can apply some of these findings to pharmacy students in a learning environment where instructions are used to guide the completion of deliverables. The first thing the instructor can do is provide both written and verbal instructions. These instructions should be concise, written in student-friendly language, and given in order of operation (ie, step A then step B). Students can read (and reread) the written instructions, which should minimize errors resulting from not paying attention or insufficient working memory. Even if distracted when verbal instructions were given, a student can review written instructions in a self-paced manner, thus reducing cognitive load and increasing the probability of remembering them. The instructor could then employ metacognitive monitoring and assess the student’s understanding of the instructions by including a checklist within the assignment, ie, a strategy to help Craig monitor his learning and check his work (much like journals have checklists for authors). Finally, the instructor should penalize students for not following the instructions, thereby using the social context to reinforce their need to follow instructions. Amber benefits by learning there are consequences for not following instructions. For Dennis and Craig, the threat of punishment in the form of lost points may motivate them to review the instructions to ensure they have done their work correctly, a process which can improve their attention (Dennis) and metacognitive monitoring (Craig).

Common Errors in Following Instructions and Recommendations to Enhance the Probability of Instruction Following


How to add instructions to your experiment/task



  • Step 1: Plan
  • Step 2: Decide
  • Step 3: Draw
  • Step 4: Export
  • Step 5: Upload on PsyToolkit
  • Step 6: Add code
  • Method 1: Pager
  • Method 2: Message
  • Text-only instructions: the message with text; showing text in blocks

Before your participants can do a task, you need to give them instructions. Typically, you would do this as follows:

First think about what it is that participants need to understand. Take some paper, and write on each page what you want participants to read about the task.

Take the following two tips into account:

You can present instructions purely as text or with images. Text alone is quicker and easier, but with images, the instructions often look better.

If you want to use just text, go to Text only instructions first.

Now you need to use drawing software to make your images. Quite different software packages will work; here are some examples with their learning difficulty:

Easy: LibreOffice Draw

Medium-Advanced: Inkscape

Whatever software you used for making your instruction drawing(s), make sure to save your drawings in one of the common bitmap formats, such as PNG, JPG, or BMP.

Now upload the new drawing files to your PsyToolkit account. Go to your experiment and click "Upload image or sound files" under the text box containing your experiment code. You can select one or more files.

Add images in PsyToolkit

In your bitmaps section, make sure you have added your instruction stimuli.

Imagine you have two images, called instruction1.png and instruction2.png . If that is the case, and if you have uploaded them (previous step), then you would add this to your code first:

Now PsyToolkit knows which images are needed in your code.
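As a sketch (the original snippet is not shown here), assuming PsyToolkit's `bitmaps` section syntax in which image files are listed by name without their file extension, the declaration might look like this:

```
bitmaps
  instruction1
  instruction2
```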

Step 7: Add more code

Now add the instructions to your block. There are different ways to do it; choose whichever is easiest.

The pager method is nice when you have relatively complex instructions through which participants can go backward and forward. Participants use the up and down keys on the keyboard to go backward and forward, or the space bar to go forward. They need to press the q key to "quit" the instructions. This way, they cannot easily exit the instructions unintentionally (it takes extra effort to press q).

The message method is nice when you have relatively simple instructions which only need to be read once.
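As a minimal sketch of the two methods inside a block definition, assuming a task named `mytask` and the two instruction bitmaps from the earlier step (treat the exact statement forms as an illustration, not a verbatim recipe):

```
block myblock
  # method 1: pager, for multi-page instructions
  # participants page back/forward; q quits the instructions
  pager instruction1 instruction2
  # method 2 would instead be a single screen read once:
  #   message instruction1
  tasklist
    mytask 10
  end
```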

Text only instructions in blocks

It is possible to use just text. The following options are available in blocks.

The message with text shows one line of text and then waits for a key press. You can specify which key; the space bar is the default.

You can show texts at different screen X/Y positions. This shows the text but does not wait for a key press.
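A sketch of text-only instructions inside a task, assuming PsyToolkit's `show text` statement (text drawn at an X/Y offset from screen center) and `readkey` to wait for a key press; the wording, coordinates, key, and timeout below are illustrative:

```
task textinstructions
  keys space
  # draw the line of text 100 pixels above screen center
  show text "Press the space bar to start" 0 -100
  # wait up to 10 s for the first defined key (space)
  readkey 1 10000
```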

  • Original Article
  • Published: 30 January 2020

Power of instructions for task implementation: superiority of explicitly instructed over inferred rules

  • Maayan Pereg   ORCID: orcid.org/0000-0003-2366-4953 1 &
  • Nachshon Meiran 1  

Psychological Research , volume 85, pages 1047–1065 (2021)


“Power of instructions” originally referred to automatic response activation associated with instructed rules, but previous examination of the power of instructed rules in actual task implementation has been limited. Typical tasks involve both explicit aspects (e.g., instructed stimulus–response mapping rules) and implied, yet easily inferred aspects (e.g., be ready, attend to error beeps) and it is unknown if inferred aspects also become readily executable like their explicitly instructed counterparts. In each mini-block of our paradigm we introduced a novel two-choice task. In the instructions phase, one stimulus was explicitly mapped to a response; whereas the other stimulus’ response mapping had to be inferred. Results show that, in most cases, explicitly instructed rules were implemented more efficiently than inferred rules, but this advantage was observed only in the first trial following instructions (though not in the first implementation of the rules), which suggests that the entire task set was implemented in the first trial. Theoretical implications are discussed.



In the following paper, we adopt the term ‘RITL’, but do so without commitment to its specific instantiation in Cole’s papers; instead, we use it more broadly for a class of paradigms, all of which examine instructions-based performance.

Originally, the NEXT phase was introduced to measure automatic effects of instructions (Meiran et al., 2017 ), which was not the focus of the current work (though it is involved in Experiments 1 and 2, but not in Experiment 3). This additional process (responding to a NEXT target) might raise the question of whether RITL in its original meaning is measured in this task. We argue that it is. Specifically, perhaps early works concerning RITL focused merely on direct and immediate instructions (e.g., Cole, Bagic, Kass, & Schneider, 2010 ), but recent developments in this literature (Cole et al., 2017 ) consider additional forms of RITL tasks, including those involving delayed implementation. In addition, instructions in real life are often performed with some delay and not immediately (reconsider the driving directions example). In our view, the broad definition of RITL (“the ability to rapidly perform novel instructed procedures”, Cole et al., 2017 , p. 2) qualifies the GO phase of the NEXT paradigm as measuring RITL (see also Cole et al., 2013 , Fig. 1 b for a non-verbal RITL example that closely resembles our NEXT instructions). More specifically, Cole et al. ( 2013 ) defined different forms of RITL, of which the NEXT paradigm should be considered as measuring concrete and simple non-verbal RITL; a later elaboration of this definition suggests that S–R mappings in the NEXT paradigm could be completely proactively reconfigured (Cole et al., 2017 ), relative to Cole et al.’s ( 2013 ) RITL paradigm. Another consideration for including a NEXT phase relates to as-yet unpublished experiments showing that performance in the GO phase approached ceiling when the NEXT phase was omitted (i.e., participants reached an excellent level of performance shortly after the onset of the experiment), suggesting that delaying implementation might prove crucial for studying RITL in this task if one wishes to avoid near-ceiling levels of performance.
Nonetheless, in this study we directly measure the importance of the delaying NEXT phase in influencing the efficiency of explicitly instructed/inferred rules (comparing Experiment 2 with Experiment 3).

Another important comment is that, in contrast to null-hypothesis testing, Bayesian inference generally avoids alpha inflation toward accepting H1 in exploratory analyses, since there is no bias in favor of H1, as both H1 and H0 can be accepted (Rouder et al., 2009 ).

NEXT errors refer to trials in which participants erroneously executed the GO instructions during the NEXT phase (e.g., pressing “left” instead of the spacebar). We did not remove mini-blocks involving a NEXT error since it does not reflect a problem with rule encoding, but rather the opposite (i.e., a reflexive activation of the instructions in an inappropriate context).

We thank anonymous Reviewer 1 for suggesting this analysis.

It should be noted that the RT interaction between Inference Difficulty and Rule-Type was robust when using a different RT cutoff of 3.5 sd per condition and participant, suggesting that perhaps the effect is increased in the high percentiles of the distribution.

We thank Bernard Hommel for raising this important issue and inspiring Experiments 2 and 3.

Direct and indirect instructions refer to instructed and inferred rules, respectively.

Allon, A., & Luria, R. (2016). Prepdat-an R package for preparing experimental data for statistical analysis. Journal of Open Research Software . https://doi.org/10.5334/jors.134 .


Brass, M., Liefooghe, B., Braem, S., & De Houwer, J. (2017). Following new task instructions: Evidence for a dissociation between knowing and doing. Neuroscience & Biobehavioral Reviews, 81 , 16–28. https://doi.org/10.1016/j.neubiorev.2017.02.012 .

Cohen-Kdoshay, O., & Meiran, N. (2007). The representation of instructions in working memory leads to autonomous response activation: Evidence from the first trials in the flanker paradigm. The Quarterly Journal of Experimental Psychology, 60 (8), 1140–1154.


Cohen-Kdoshay, O., & Meiran, N. (2009). The representation of instructions operates like a prepared reflex. Experimental Psychology, 56 (2), 128–133. https://doi.org/10.1027/1618-3169.56.2.128 .


Cole, M. W., Bagic, A., Kass, R., & Schneider, W. (2010). Prefrontal dynamics underlying rapid instructed task learning reverse with practice. Journal of Neuroscience, 30 (42), 14245–14254. https://doi.org/10.1523/JNEUROSCI.1662-10.2010 .

Cole, M. W., Laurent, P., & Stocco, A. (2013). Rapid instructed task learning: A new window into the human brain’s unique capacity for flexible cognitive control. Cognitive, Affective, & Behavioral Neuroscience, 13 (1), 1–22. https://doi.org/10.3758/s13415-012-0125-7 .

Cole, M. W., Braver, T. S., & Meiran, N. (2017). The task novelty paradox: Flexible control of inflexible neural pathways during rapid instructed task learning. Neuroscience & Biobehavioral Reviews, 81 , 4–15. https://doi.org/10.1016/j.neubiorev.2017.02.009 .

De Houwer, J., Hughes, S., & Brass, M. (2017). Toward a unified framework for research on instructions and other messages: An introduction to the special issue on the power of instructions. Neuroscience & Biobehavioral Reviews, 81 , 1–3. https://doi.org/10.1016/j.neubiorev.2017.04.020 .

Dreisbach, G., & Haider, H. (2008). That’s what task sets are for: Shielding against irrelevant information. Psychological Research Psychologische Forschung, 72 (4), 355–361. https://doi.org/10.1007/s00426-007-0131-5 .

Duncan, J., Parr, A., Woolgar, A., Thompson, R., Bright, P., Cox, S., et al. (2008). Goal neglect and Spearman’s g: Competing parts of a complex task. Journal of Experimental Psychology: General, 137 (1), 131–148. https://doi.org/10.1037/0096-3445.137.1.131 .

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41 (4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149 .

Hommel, B. (1998). Event files: Evidence for automatic integration of stimulus-response episodes. Visual Cognition, 5 (1/2), 183–216. https://doi.org/10.1080/713756773 .

Hommel, B., & Colzato, L. S. (2004). Visual attention and the temporal dynamics of feature integration. Visual Cognition, 11 (4), 483–521. https://doi.org/10.1080/13506280344000400 .

Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17 , 649–668.

Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford: Oxford University Press.


Katzir, M., Ori, B., & Meiran, N. (2018). “Optimal suppression” as a solution to the paradoxical cost of multitasking: Examination of suppression specificity in task switching. Psychological Research Psychologische Forschung, 82 (1), 24–39. https://doi.org/10.1007/s00426-017-0930-2 .

Koechlin, E., Basso, G., Pietrini, P., Panzer, S., & Grafman, J. (1999). The role of the anterior prefrontal cortex in human cognition. Nature, 399 (6732), 148–151.

Kühn, S., Keizer, A. W., Colzato, L. S., Rombouts, S. A. R. B., & Hommel, B. (2011). The neural underpinnings of event-file management: Evidence for stimulus-induced activation of and competition among stimulus–response bindings. Journal of Cognitive Neuroscience, 23 (4), 896–904.

Liefooghe, B., Wenke, D., & De Houwer, J. (2012). Instruction-based task-rule congruency effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38 (5), 1325–1335. https://doi.org/10.1037/a0028148 .

Liefooghe, B., De Houwer, J., & Wenke, D. (2013). Instruction-based response activation depends on task preparation. Psychonomic Bulletin & Review, 20 (3), 481–487. https://doi.org/10.3758/s13423-013-0374-7 .

Meiran, N. (1996). Reconfiguration of processing mode prior to task performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22 (6), 1423–1442. https://doi.org/10.1037/0278-7393.22.6.1423 .

Meiran, N., Pereg, M., Kessler, Y., Cole, M. W., & Braver, T. S. (2015). The power of instructions: Proactive configuration of stimulus–response translation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 (3), 768–786. https://doi.org/10.1037/xlm0000063 .

Meiran, N., Pereg, M., Givon, E., Danieli, G., & Shahar, N. (2016). The role of working memory in rapid instructed task learning and intention-based reflexivity: An individual differences examination. Neuropsychologia, 90 , 180–189. https://doi.org/10.1016/j.neuropsychologia.2016.06.037 .

Meiran, N., Liefooghe, B., & De Houwer, J. (2017). Powerful instructions: Automaticity without practice. Current Directions in Psychological Science, 26 (6), 509–514.

Pereg, M., & Meiran, N. (2018). Evidence for instructions-based updating of task-set representations: The informed fadeout effect. Psychological Research Psychologische Forschung, 82 (3), 549–569. https://doi.org/10.1007/s00426-017-0842-1 .

Pereg, M., Shahar, N., & Meiran, N. (2019). Can we learn to learn? The influence of procedural working-memory training on rapid instructed-task-learning. Psychological Research Psychologische Forschung, 83 (1), 132–146.

Rogers, R. D., & Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124 (2), 207–231.

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16 (2), 225–237.

Ruge, H., & Wolfensteller, U. (2010). Rapid formation of pragmatic rule representations in the human brain during instruction-based learning. Cerebral Cortex, 20 (7), 1656–1667. https://doi.org/10.1093/cercor/bhp228 .

Schönbrodt, F. D., & Wagenmakers, E.-J. (2018). Bayes factor design analysis: Planning for compelling evidence. Psychonomic Bulletin & Review, 25 (1), 128–142. https://doi.org/10.3758/s13423-017-1230-y .

Shahar, N., Pereg, M., Teodorescu, A. R., Moran, R., Karmon-Presser, A., & Meiran, N. (2018). Formation of abstract task representations: Exploring dosage and mechanisms of working memory training effects. Cognition, 181 , 151–159. https://doi.org/10.1016/j.cognition.2018.08.007 .

JASP Team. (2017). JASP [Computer software]. https://jasp-stats.org/

Thomas, J. (1995). Meaning in interaction: An introduction to pragmatics . Abingdon: Routledge.

Treisman, A. (1996). The binding problem. Current Opinion in Neurobiology, 6 (2), 171–178. https://doi.org/10.1016/S0959-4388(96)80070-5 .

van Dam, W. O., & Hommel, B. (2010). How object-specific are object files? Evidence for integration by location. Journal of Experimental Psychology: Human Perception and Performance, 36 (5), 1184–1192. https://doi.org/10.1037/a0019955 .

Verbruggen, F., McLaren, R., Pereg, M., & Meiran, N. (2018). Structure and implementation of novel task rules: A cross-sectional developmental study. Psychological Science, 29 (7), 1113–1125. https://doi.org/10.1177/0956797618755322 .


Vogel, S., & Schwabe, L. (2018). Tell me what to do: Stress facilitates stimulus-response learning by instruction. Neurobiology of Learning and Memory, 151 , 43–52. https://doi.org/10.1016/j.nlm.2018.03.022 .



Funding

This work was supported by a research grant from the US–Israel Binational Science Foundation (Grant #2015186) to Nachshon Meiran, Todd S. Braver, and Michael W. Cole.

Author information

Authors and affiliations

Department of Psychology and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel

Maayan Pereg & Nachshon Meiran


Corresponding author

Correspondence to Maayan Pereg .

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

The study was approved by the departmental ethics committee. Informed consent was obtained from all individual participants included in the study.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 95 kb)

This appendix involves a few additional tests that were raised post hoc to try and clear the issues of (1) encoding during the instructions phase; and (2) learning throughout the experiment, given that the same inference is repeatedly needed. Importantly, we note that these analyses were raised in the review process and were not hypothesized in advance (this is especially important for the pre-registered Experiments 2 and 3) (Fig. 6).

Figure 6. Study times in ms (including the 2 s delay) as a function of Rule-Type and Inference Difficulty in Experiments 1–3. Error bars represent Bayesian 95% credible intervals.

Examining study times

As has been mentioned, in the instructions phase each of the screens had a 2 s delay, after which participants could advance to the following screen via a spacebar self-paced response, thus introducing some inaccuracy in the measurement of study times.

To test whether participants used the instructions phase to differentially encode the instructed/inferred rules, under the assumption that longer study times imply greater encoding efforts, we performed two sets of Bayesian ANOVAs in each of the three experiments. The first set of ANOVAs was performed on total study times (i.e., the sum of both instructions screens, compared to one instruction screen including both instructed rules). This analysis was intended as a first exploration, closer to our original hypotheses (that difficult inference should take longer than easy inference, and longer than when both rules were explicitly instructed). Thus, the independent variable was Block Type (3 levels: both rules explicitly instructed; one rule explicitly instructed and one inferred, with difficult vs. easy inference).

In Experiment 1, the results supported a main effect for Block Type [F(2,78) = 9.56, p < 0.001, ηp² = 0.20, BF10 = 123.39], showing support for our hypothesis (study times were 3.52 s for two explicit rules, 3.79 s for the easy inference condition, and 3.95 s for the difficult inference condition; these times include the 2 s delay). The same pattern was observed in Experiment 2 [F(2,78) = 6.79, p < 0.01, ηp² = 0.15, BF10 = 16.21], although the results indicated a smaller difference between the condition where both rules were explicitly instructed and the easy inference condition (5.77, 5.80, and 6.15 s for the both-explicitly-instructed, easy inference, and difficult inference conditions, respectively). Finally, Experiment 3 showed the same pattern [F(2,78) = 7.81, p < 0.001, ηp² = 0.17, BF10 = 34.48], with results that more clearly demonstrate the difference between the three conditions (4.63, 4.82, and 5.08 s for the both-explicitly-instructed, easy inference, and difficult inference conditions, respectively).

The second set of ANOVAs only involved the “one explicitly instructed S–R mapping and one inferred S–R mapping” condition. This analysis was performed with the within-subjects independent variables Rule-Type (instructed vs. inferred) and Inference Difficulty (difficult vs. easy; in this analysis, this variable codes for whether the explicit instruction appeared first (easy inference) or second (difficult inference)). This analysis was meant to better differentiate the encoding processes within this condition.

Unlike in the previous analysis, these results point to differences between the experiments. In Experiment 1, the results indicated a robust interaction between Rule-Type and Inference Difficulty [F(1,39) = 11.05, p < 0.01, ηp² = 0.22, BF10 = 3,337.53], whereas both main effects were not significant (ps > 0.06, BF10 < 0.38). The results demonstrate that participants took longer to study whichever stimulus appeared first, regardless of whether it was explicitly instructed or inferred ( Figure 6 ): both simple effects for Rule-Type were robust (BF10 = 3.41 for difficult inference, and BF10 = 40.56 for easy inference).

A similar pattern was observed in Experiment 2 [F(1,39) = 5.80, p = 0.02, ηp² = 0.13, BF10 = 68.76 for the interaction]. In this experiment, the simple effect of Rule-Type was only robust for easy inference mini-blocks (BF10 = 0.66 for difficult inference, and BF10 = 20.84 for easy inference), suggesting that when the inference was easy (i.e., the instructed rule was presented first), participants took less time to study the inferred rule, but when the inference was difficult, study times did not robustly differ between rule types.

In Experiment 3, however, the results did not indicate a robust interaction (or any other main effects, BFs < 2.60) [for the interaction: F(1,39) = 0.10, p = 0.76, ηp² < 0.01, BF10 = 0.25]. Here, the descriptive results indicate that participants studied explicitly instructed rules somewhat longer than inferred ones (main effect: F(1,39) = 4.25, p = 0.046, ηp² = 0.10, BF10 = 2.53, which is considered an indecisive effect). We note that this experiment did not involve a NEXT phase, and while we do not completely understand this effect, which was not the focus of the current study, it could be that this (lack of an) effect partly reflects the lower task demands in this experiment.

Learning to infer during the experiment

The next set of analyses aims to test whether the superiority of instructions changes as the experiment progresses, given that participants are required to repeat the same general inference. To perform this analysis, we added a Progress variable that divides the mini-blocks into three equal parts of 50 mini-blocks each. To focus on the main result, we tested the first-trial RT instructions-superiority effect above and beyond other variables. In Experiment 1, the results showed a main effect for Progress [F(2,78) = 85.78, p < 0.001, ηp² = 0.49, BF10 = 1.44e+29], demonstrating RT acceleration throughout the experiment. In addition, the results suggest a robust interaction between Rule-Type and Progress [F(2,78) = 9.33, p < 0.001, ηp² = 0.19, BF10 = 6.83], showing that the superiority effect decreased with progression through the experiment (47.4 ms in the first 50 mini-blocks, 19.3 ms in the next 50 mini-blocks, and 11.5 ms in the final 50 mini-blocks).

In Experiments 2 and 3, although the Progress main effect was robust [F(2,78) = 38.14, p < 0.001, ηp² = 0.49, BF10 = 3.58e+14 (Exp. 2); F(2,78) = 43.60, p < 0.001, ηp² = 0.53, BF10 = 2.04e+16 (Exp. 3)], the interaction between Rule-Type and Progress was not significant [F(2,78) = 2.52, p = 0.09, ηp² = 0.06, BF10 = 0.31 (Exp. 2); F(2,78) = 0.31, p = 0.73, ηp² < 0.01, BF10 = 0.09 (Exp. 3)] and allowed accepting H0.

Therefore, the results do not support the hypothesis that participants learned how to cope with the required inference over the course of the experiment by capitalizing on the constant abstract structure of the instructions. Specifically, while a supporting pattern was observed in some experiments, it was not robust across experiments.

Rights and permissions

Reprints and Permissions

About this article

Cite this article

Pereg, M., Meiran, N. Power of instructions for task implementation: superiority of explicitly instructed over inferred rules. Psychological Research 85 , 1047–1065 (2021). https://doi.org/10.1007/s00426-020-01293-5


Received : 19 June 2019

Accepted : 14 January 2020

Published : 30 January 2020

Issue Date : April 2021

DOI : https://doi.org/10.1007/s00426-020-01293-5


Keywords: Instructions-based performance, Inferred rules, Explicit instructions

