
An Analysis of the Type of Questions Posed by Teachers in English-Medium Instruction at University Level

Department of English and German Philology, University of the Basque Country UPV/EHU, 01004 Vitoria-Gasteiz, Spain
Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(1), 82;
Original submission received: 2 December 2022 / Revised: 9 January 2023 / Accepted: 10 January 2023 / Published: 12 January 2023


Teacher-led questions not only guide meaning-making interactions but also scaffold students’ learning, which is especially important in English-medium instruction (EMI). Given the scant literature on this topic in higher education, this article analyses what types of questions EMI history lecturers pose and whether these are subject to individual differences. The study is based on 12 two-hour lectures whose transcriptions were analysed by three researchers. The results showed that instructional (content-related) question types were far more commonplace than regulative questions (those related to classroom procedures). Confirmation check, display and referential questions, which belong in the instructional category, were not posed in ways that fulfilled their intended pedagogic goals, a limitation accentuated by students’ tendency to provide short responses. These results reveal the need to design teacher training courses aimed at developing teachers’ interactional abilities. Since questioning practices varied considerably between lecturers, customised training sessions should also be considered.

1. Introduction

Teacher-led questions play a paramount role in boosting students’ comprehension of subject matter, irrespective of the language of instruction, be it the first language (L1) or a foreign language. Teachers ask questions to guide meaning-making interactions and to scaffold their students’ learning. This practice helps narrow the gap between students’ actual knowledge and the knowledge they are expected to gain in collaboration with the lecturer, whom we presume to be more knowledgeable about the subject and who should facilitate this process. As Kawalkar and Vijapurkar [1] put it, teacher questions are important because they affect the nature of students’ thinking and reasoning while determining the quality and level of students’ participation. That is, they can become indices of quality teaching. In addition, research has documented that highly dialogic teacher talk positively predicts academic outcomes [2]. For all these reasons, questioning exchanges become “a fruitful area to explore” [3] (p. 816).
Although research abounds on the impact of asking students questions in L1 and second language (L2)/EFL learning contexts, little has been carried out on its impact in the field of English-medium instruction (EMI) at university, this paper’s context. EMI is “an educational system where content is taught through English in contexts where English is not used as the primary, first, or official language” [4] (p. 114). Since EMI programmes are mushrooming at universities all over the world [5], this is an issue well worth investigating. Our study aims to help fill this gap and is particularly innovative for the following reasons: (i) nearly all previous research has taken place in primary and secondary education [3,6,7,8,9], whereas little attention has been paid to tertiary education, which is our focus; (ii) most studies have involved small-group student interaction, whereas whole-class teacher/student interaction, the focus of our study, has been overlooked [6,10]; (iii) the bulk of research has been conducted in mathematics (see [11] for a systematic review of 15 studies; [12]), business administration [13], and science classrooms [1,3,14,15,16], while few papers have dealt with the humanities, as is the case of the present study.

2. Questioning Practices in EMI in Higher Education

Previous EMI research has analysed the role of questions within a broader approach to EMI discourse and teaching. Dafouz et al. [17] delved into disciplinary reasoning episodes (DREs) in order to analyse the role played by language-related negotiations in explicit reasoning episodes. The authors concluded that “[t]he benefits of engaging students in question-and-answer formats for the quality of disciplinary reasoning align with findings from L1 science education” (p. 556). Via a questionnaire, Suviniitty [18] asked EMI students how teacher questions affected comprehensibility and concluded that lectures with a higher degree of interaction and questions were judged easier to understand.
Other studies indicate that students’ lack of English skills constrains their willingness to ask questions. Recent research reveals that university students tend to feel uncomfortable when asked to contribute orally in class, but their qualms are even greater in classes delivered in their L2 [19]. Tsou [20] points out that in some contexts, such as Taiwan, EMI students are not linguistically ready to ask questions, which forces them to consult the teacher privately during the break. Because they are afraid of asking questions in front of the whole class, they are often allowed to ask questions in their L1. In fact, Tsou observed that “the teachers appeared to be used to the lack of interaction because most of the time they answered their own questions without waiting for a response from a student” [20] (p. 83). In Sweden, Airey [14] found that students believed they learnt equally well in Swedish and English, but, after watching video footage of actual lectures, they acknowledged that fewer questions were asked and answered in EMI classes. Airey and Linder [15] also observed that among Swedish students the traditional reluctance to ask questions was exacerbated in EMI classes, which they found “all the more worrying when we take into account the fact that lecturers see a strong correlation between asking questions and student understanding” (p. 556; emphasis in the original).
Sánchez-García [13], who undertook research in Spain, explored two business administration teachers’ practices in Spanish-medium and EMI lectures. Sánchez-García distinguished two main question categories: instructional questions (related to the content being learnt) and regulative questions (related to classroom management and organisation). Interestingly, the number of questions was largely similar in Spanish (L1) and EMI, although there were almost twice as many questions regarding classroom management in EMI lectures, revealing lecturers’ concern about students’ understanding of lecture organisation in the latter. Conversely, the use of self-answered questions was three times higher in Spanish, which seems to indicate that these types of questions are not yet part of lecturers’ repertoire in English. The author concludes that “teachers may not be asking as many eliciting questions as they often believe they do” (p. 46), which is why there is a need to raise teachers’ awareness of the role of questions in students’ learning process, as questions could be better exploited to support pedagogical objectives.
Finally, Chang [21] investigated whether disciplinary cultures influenced the patterns of questions in fifteen small-class lectures with no more than 40 students. Chang compared three different disciplines, namely Humanities and Arts, Social Science and Education, and Physical Sciences and Engineering. The results showed “far more similarities than differences between the soft and hard fields with regard to the use of questions in academic lectures” (p. 112), which leads the author to conclude that the influence of genre (the lecture as a genre) outweighed that of disciplinary culture.
The traditional lecture format still seems to be the main mode of teaching at undergraduate level [13,22] and it seems that few studies focus on the role that questions play in this format. Our study aims to help fill that gap. We analyse teacher-fronted questioning during whole-class discussions in history classes delivered by four EMI teachers at a Spanish university. We believe there is a need to know how questions dovetail with the interactional exchanges that take place in EMI classes, as well as what the most common types of questions used by EMI teachers are. With this analysis we can propose measures to boost balanced and effective teaching practices.

3. Research Questions

Based on Sánchez-García’s [23] taxonomy of questions (see Table 2 below), this paper addresses the following two research questions:
Are there any general tendencies in the types of questions posed by the EMI history teachers?
Are the questioning tendencies subject to individual differences among the lecturers?

4. The Study

This work is part of a longitudinal research project whose main goal is the study of teacher-student interaction in an EMI context. The study was conducted at the Department of History of the University of the Basque Country in Spain (UPV/EHU). The UPV/EHU is a public university in which classes are taught in Spanish and Basque, the two official languages and, since 2005, in non-official languages, primarily English, under the Multilingualism Program (MP). There are currently more than 750 undergraduate and more than 300 Master’s subjects taught through English.

4.1. The Participants and the Courses

In order to recruit participants for our study, we contacted the seven EMI teachers of the Department of History via email and explained the study’s objective. Five lecturers showed interest in participating, but one of them was excluded because his classes mainly consisted of student interactions, which were not appropriate for this paper’s object of study (i.e., teacher-fronted questions). Thus, after the screening, the participants were four male lecturers, who will be referred to as T1, T2, T3 and T4. As required by the MP of the UPV/EHU, the four lecturers had a level of English equivalent to C1 of the Common European Framework of Reference for Languages (CEFR).
Table 1 contains information on the subjects the lecturers taught, their teaching experience in general and their teaching experience in EMI. It also includes information on the twelve lectures analysed (three per lecturer), which were randomly selected from a pool of 29 observed lectures. These lectures, like the rest of the lectures, were primarily teacher-fronted. For ease of reference in the discussion of the data, each lecture has a code number, which is provided in the penultimate column. The total number of words uttered in the lessons per teacher is given in the last column. This figure refers to the actual words uttered, including repetitions of the same word but excluding unfinished words. Each of the lectures lasted approximately two hours and the corpus consists of a total of 91,904 words.
The cohorts of students were small, ranging from 6 to 20 students. The groups were heterogeneous in terms of their level of proficiency in English, as is usually the case in EMI courses irrespective of the country [24,25]; however, the majority of the learners would be at the B2 level of the CEFR [26]. The groups were homogeneous in terms of cultural background since, with the exception of two international exchange students per course, the rest of the students were local.

4.2. Data Collection and Coding Process

Once the lecturers and the students granted us the necessary permissions, we recorded one lecture every fortnight during the whole semester. We randomly selected three sessions per teacher for analysis. These were then transcribed by a research assistant and were revised by the authors for accuracy. The transcriptions reflect what was said word-for-word in the classes to the best of our and the research assistant’s ability. Ungrammaticalities, inaccuracies and repeated words have not been eliminated or resolved (see the nomenclature for the transcription conventions).
The coding process of the questions was as follows. First, the research assistant was instructed to identify all the teacher-fronted questions, that is to say, those instances in which the utterance’s intonation pattern and/or its syntactic pattern was that of a question, and to categorise them into one of the eleven categories proposed by Sánchez-García [13,23] and presented in Table 2.
Then one of the authors of this study analysed the questions individually in order to verify that they performed the function that had been assigned to them by the research assistant. There were no significant discrepancies between the research assistant’s and the author’s classification of the referential, repetition, language, confirmation check, retrospective, rhetorical, procedural and off-task categories. However, for questions answered by the lecturers themselves, both found it difficult to determine whether they belonged in the display category (i.e., “those to which the answer is known by the teacher”) or the self-answered category (“those which are immediately answered by the teacher, preventing other participants from providing any response”, emphasis placed by the authors of this paper) [13] (p. 32). The questions in extract 1 illustrate the categorisation dilemma we faced:
Extract 1: Lecture 1
  • T1: what is an audiencia in # in Spanish # eh # system? There were audiencias in Spain # there were were one in Valladolid # there were one in Granada.
  • T1: what ## which is an audiencia?
  • T1: in Spain # it was # and it is # a xxx tribunal # a high court.
In extract 1, T1 asked what audiencia (hearing) was in the past twice and then he provided the answer. As transcribed, the two questions could be classified as display since the lecturer knows the answer. However, alternatively, they could also be referred to as self-answered since the lecturer himself provided the answer.
In order to determine which question type captured what occurred in this and other similar instances, we concluded that it was necessary to know whether the lecturers gave the students a chance to reply, that is to say, whether they allowed enough wait time, and to include this information in the transcripts. Wait time is the time the teacher waits for the student or students to provide an answer. Following Tobin [27] and Muijs and Reynolds [28], we set three seconds or more as the wait time in which the students could provide their answer. If the lecturer answered his own question immediately after posing it, without allowing any wait time, the question was categorised as self-answered. If, on the other hand, the lecturer provided the answer himself after the wait time was over, the question was classified as display. In the case of the questions in extract 1, we noted that T1 did not give the students any wait time; therefore, these questions were classified as self-answered. Some other minor discrepancies between the research assistant’s and one of the authors’ classifications of rhetorical and indirect questions occurred and were discussed with the second author until inter-coder agreement was reached. The patterns and tendencies are presented next.
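The wait-time decision rule described above can be expressed as a simple conditional. The following is a minimal sketch of that rule; the function name, data structure and timing values are our own illustrations, not part of the study’s materials:

```python
# Minimal sketch of the wait-time decision rule described above.
# A question that the lecturer answered himself is coded as "self-answered"
# if he replied before the three-second wait-time threshold had elapsed,
# and as "display" otherwise. Names and values are illustrative only.

WAIT_TIME_THRESHOLD = 3.0  # seconds, following Tobin [27] and Muijs and Reynolds [28]

def classify_teacher_answered(wait_time_seconds: float) -> str:
    """Classify a question that the lecturer answered himself."""
    if wait_time_seconds < WAIT_TIME_THRESHOLD:
        return "self-answered"  # students had no real chance to reply
    return "display"            # students were given wait time but did not reply

# The two questions in extract 1 were answered with no wait time:
print(classify_teacher_answered(0.0))  # self-answered
print(classify_teacher_answered(4.5))  # display
```

Questions that a student answered, of course, never enter this rule; it only disambiguates the teacher-answered cases that caused the coding dilemma.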

5. Results

5.1. First Research Question: Are There Any General Tendencies in the Types of Questions Posed by the EMI History Teachers?

Table 3 presents the distribution patterns of all the teacher-fronted questions found in our data from the most frequent to the least frequent. The data have been normalised to 1000 words. That is to say, the second column of Table 3 refers to the number of questions per 1000 words. The total number of questions for each category is also provided.
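The normalisation used in Table 3 is a straightforward per-1000-word rate. A minimal sketch, with hypothetical counts rather than the study’s actual figures:

```python
# Minimal sketch of the per-1000-word normalisation used in Table 3.
# The counts below are hypothetical, for illustration only.

def per_thousand_words(question_count: int, total_words: int) -> float:
    """Rate of a question type per 1000 words of lecture transcript."""
    return 1000 * question_count / total_words

# e.g. 41 display questions in a hypothetical 10,000-word lecture:
rate = per_thousand_words(41, 10_000)
print(round(rate, 2))  # 4.1
```

Normalising in this way makes lecturers with different amounts of talk directly comparable, which matters given the variability in word counts reported in Table 1.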
Table 3 reveals that the questions posed by the lecturers fall into three groups: the by far most common type, namely confirmation check questions, with an average of 20.44 per 1000 words; a second group with considerably fewer tokens (display (4.11‰), referential (2.57‰) and self-answered questions (1.33‰)); and finally a third group averaging fewer than 0.44 occurrences per 1000 words, comprising repetition, retrospective, procedural, off-task, indirect, rhetorical and language questions. Next, we turn to the individual analysis of the question types, from the most frequently asked to the rarest.
The high number of confirmation check questions in our data in comparison with other question categories means that the main goal of the lecturers’ interactions with their students was to check comprehension, as clearly illustrated in extract 2, where T1 solicits the students’ confirmation that they have understood what he said.
Extract 2: Lecture 1
  • T1: We can assume that in Spain # or all the provinces # or the diputaciones # are # the same ## they have the same basic eh competences # even in # I don’t know # in eh # Andalusia # or Cataluña # or whatever # we can assume that.
  • T1: yes # yes # or not? [con] mm? [con]
These questions were formulated with the expression “Okay?” (58.22% of all confirmation check questions) and with the interjections “mm?” and “eh?”. Other forms such as “Yes or no?”, “Yes?”, “No?”, “Any question?”, “Have you followed me?” and “Have you understood?” were also found, but were less frequent. However, a closer look at the confirmation check questions revealed that many of them reflected “the mechanized use of […] apparently instinctive structures belonging to the linguistic repertoire of the lecturer as filler expressions” [23] (p. 195), and, in fact, most of the time the students did not reply to them.
The second most widely used type of question was display, at roughly a fifth of the rate of confirmation checks (4.11‰). The display questions in our corpus tended to test the learners’ knowledge of specific aspects of the lesson content rather than broader aspects, indicating “that students’ limited answers are good enough for teacher’s questioning purposes and that, in fact, a minimal response is what lecturers are looking for” [23] (p. 199). This is why the lecturers frequently helped the students formulate their answer by providing the first word of the response (extract 3) or limited the students’ answer to the minimum, as the students simply needed to fill in a missing word (extract 4).
Extract 3: Lecture 15
  • T4: bombing of the civil # of the Spanish civil war and probably one of the most famous: in spite of that # however # the bombing of Guernica is the most famous # in the world. Okay? [con] Why? [dis] (two students answer at the same time)
  • T4: okay # wait wait wait.
  • T4: well # first (pointing at S2) # because? [dis]
  • S2: it was the first civil bombing.
  • T4: no # Durango was first.
Extract 4: Lecture 3
  • T3: and # in this kind of system # the peasants # or the farmers # had to provide the government with? [dis]
  • S2: labour work? [con]
  • T3: labour services.
  • S: eh.
  • T3: okay. Labour services.
Furthermore, the lecturers’ tendency to elicit succinct answers from the students was also observed in so-called chain questions: successions of questions whose scope narrows down from an initially open question to a closed-ended one (see [23] (p. 42) for a similar observation). This is illustrated in extract 5, where the more general question “What happened with England?” is followed by the very specific “When was England # the major power in Europe?”
Extract 5: Lecture 2
  • T2: What happened with # eeeh # England? [dis]
  • T2: When was England # the major power in Europe? [dis]
  • T2: when? [dis]
  • S: Seventeen # eighteen.
  • T2: Seventeen # seventeen century # from the seventeen century # as a result of the first # civil war # remember eeeeh # Cromwell # okay? [con]
  • T2: and # as a result of the # various # revolution in 1688 # okay? [con]
Whether the students were conditioned by the nature of the lecturers’ questions or it was their natural tendency not to intervene with long utterances, students’ replies to display questions, and in general to all kinds of questions, were brief [29,30]. Extract 6 is representative of the students’ replies: the contrast between the student’s one-word answer, on the one hand, and the lecturer’s rephrasing of it, on the other, is very illustrative.
Extract 6: Lecture 1
  • T1: So # which was the decision? [dis]
  • S: Vicekingdoms.
  • T1: Vicekingdoms.
  • T1: So # to create # two figures # of alter egos of the king ### whose seat will be # two American cities # and they will # be # at the top # of that system # eh in # in in in a America.
Referential questions are questions whose answer is not known by the teacher [13] (p. 32). In our corpus, their main function was to make the content of the lecture more relevant to the students by several means, such as shifting the focus to the students themselves, enquiring about the students’ opinion rather than the content itself (extract 7), establishing connections between the lectures and the students’ opinions (extract 8), and referring to their personal lives (extract 9) and experiences (extract 10).
Extract 7: Lecture 2
  • T2: Why is important the public debt? [dis]
  • T2: What do you think it’s important? [ref]
Extract 8: Lecture 3
  • T3: And the problem is # what would happen # if # once you have invested # a lot # of your work # of your human effort # and even of your money in the improvement of your plot # […] at the end of the year # the emperor tells # perfect # now # we are going to distribute # the land # for the next year # which is going to begin # right now # what would happen then? [dis]
  • T3: What would you feel? [ref]
  • T3: What would you think in that case? [ref]
  • T3: Wouldn’t you feel cheated? [ref]
  • T3: What would you think? [ref]
  • T3: Would you like that? [ref]
Extract 9: Lecture 3
  • T3: So # what do you think # rural industries mean? [ref]
  • S: eh # a (??) industry that provides # em ### xxx to the # to the # rural world.
  • T3: Not really.
  • T3: Where do you come from # xxx? [ref]
Extract 10: Lecture 3
  • T3: Have you ever heard # the expression (writing on the board) paddy fields [ref]?
  • T3: Have you seen # the movies about Vietnam? [ref]
  • T3: eh? [con]
  • T3: Have you ever seen # movies such as “Rambo” # “Apocalypse now” # and so on? [ref]
Referential questions were also a means to create a relaxed atmosphere in which the lecturer does not make any assumptions about the students’ knowledge, and therefore, any reply is welcome, as illustrated in extract 11.
Extract 11: Lecture 6
  • T3: Do you know the # theee # history of the discoveries # no # Ana? [ref]
  • T3: Why they say that Columbus was probably a xxx idiot? [dis]
  • S: Because # er # he thought he found # India.
  • T3: Yeah but is more # is even more interesting than that.
  • T3: You know the # you know # the problems Columbus had with the Portuguese crown? [ref]
  • T3: You know the history? [ref]
  • T3: Is really interesting # because Columbus thought…
Sánchez-García [23] noted that most self-answered questions occur within long monologues and serve to “introduce a new topic or explain and deliver new discipline content” (p. 208) and to organise the discourse. The self-answered questions we found in our corpus were also part of long teacher-fronted interventions and fulfilled the functions stated by Sánchez-García. For example, the self-answered question introduced a new concept in extract 12, marked the relevance of some aspect of the message in extract 13, and facilitated student comprehension by breaking the message into chunks that are easier for the students to process in extract 14. Among the self-answered questions, those headed by the interrogative pronoun “why” were quite prevalent (extract 15), which reveals the importance that lecturers attributed to the cognitive discourse function “explain”, according to which knowing why a historical event happened is as important as telling what happened [31].
Extract 12: Lecture 4
  • T1: Eh # what is the astrolabe? [sel]
  • T1: This is an astrolabe. (showing a picture)
  • T1: xxx device made of xxx metal # eeeeeh # xxx # xxx # object (??) # but # always # have these features # with different # eeeh # eh # elements # that are used for what? [sel]
  • T1: Well # basically # for measuring the time.
Extract 13: Lecture 3
  • T3: What I am trying # to tell you with this? [sel]
  • T3: We are # at the end # of # an economy # of a (¿?) # kind of Chinese economy # based # in the closely # in the close involvement # of the state # in the economic affairs
Extract 14: Lecture 4
  • T1: so as you can see # changes in technology # some of them imported from # outside Europe # others # created in Europe # that led to what? [sel]
  • T1: to huge development # during the fourteenth and sixteenth centuries # in # the art # of ship making # mm? [con]
Extract 15: Lecture 16
  • T1: to a certain extent # the main # element # of contact # not only contact # of control # to the different empires in the Americas was made at the beginning # through # the activity of piracy.
  • T1: why? [sel]
  • T1: because something interesting is that # no # European # country # had # a permanent army # in the Americas # up to the eighteenth century.
The lecturers’ requests for repetition were usually triggered by the physical conditions of the classroom (e.g., street noise, bad acoustics), by students not speaking loudly enough, or by the need to clarify what a student had said. The lecturers used the interjections “Eh?”, “What?”, “Mm?”, “Sorry?” or rising intonation as an invitation to the student to repeat the expression that had not been understood (extract 16). Fully fledged interrogative questions were probably not used as a result of L1 Spanish influence. In any case, since the students’ participation was low, requests for repetition were rare.
Extract 16: Lecture 10
  • T3: Let’s see the economics students.
  • S: the consumers # eeeh # xxx better # but # in contrast # eh another countries # eh # thee they can lost their # their own # mm # economy system # like Argentina is a # very good # country # but
  • T3: very? [rep]
  • S: is a very # good country.
  • T3: very? # sorry? [rep]
  • S: good.
  • T3: well # yeah.
As stated above, retrospective questions were used sparingly, and were asked to revise the course materials, to dwell on the subject content that had already been covered in previous classes (extract 17), and to refer to the general knowledge the lecturer believed the students to have. All the instances of retrospective questions were affirmative statements with the verb remember uttered with rising intonation, as opposed to an interrogative yes or no question (e.g., “do you remember…?”).
Extract 17: Lecture 11
  • T4: and # eeeem # well # mm # remember the # the scene # in which # the camera ## lingers # on the # on the # plates? [ret]
Procedural questions referred “to the development of the lesson” [13] (p. 32) and included questions posed to: (1) check whether the students had done the assignment (e.g., watched a video, written an essay; extract 18); (2) ask for volunteers to answer a question posed by the lecturer; (3) poll the students’ opinion on the timing of a break during class (extract 19); (4) ask the students to speak louder; and (5) ask questions, occasionally in the lecturers’ L1, when facing technical problems.
Extract 18: Lecture 11
  • T4: Watch # the movie? [pro]
  • T4: and you? [pro]
  • T4: okay.
  • T4: have you # written a # paragraph or something # or not? [pro]
  • S: no.
  • T4: just # no.
  • T4: okay only Ricardo? [pro]
Extract 19: Lecture 3
  • T3: but we will deal with that # eh # in the third part of the lesson # okay? [con]
  • T3: so # what about a little break? [pro]
Most off-task questions were related to timekeeping, other courses the students were enrolled in, and a bit of chit-chat. Questions enquiring about students’ names were also considered off-task, but since the lecturers did not normally address the students by name, the use of off-task questions for this purpose was rare (extract 20).
Extract 20: Lecture 16
  • T1: and actually we are going to start # a long long weekend. […] Do you # do you have any # any class # any lesson tomorrow? [off]
  • S: yes.
  • T1: yes # ah # all right. What time? [off] what time? [off]
  • S: three o’clock.
Indirect questions, those designed to exemplify some situation but not expected to obtain a response, were scarce in our corpus. Extract 21 illustrates this question type.
Extract 21: Lecture 16
  • T1: is like …actually # who is the best # what is the best # way # of stopping # the activity # of # such a powerful man # that can ## create # the whole army # and the whole ## navy # not not not army # but navy # against you in the Caribbean? [dis]
  • T1: well # kill him? [ind]
Similarly, rhetorical questions were very rare (0.02‰); the two tokens we found were posed by one of the lecturers, T3, in the same utterance (extract 22). These questions were used to introduce a new topic. However, given the small sample of rhetorical questions in the data, they may also carry out other functions that we are not aware of.
Extract 22: Lecture 10
  • T3: so why did Britain remain committed # eeeh # to free trade? [rhe]
  • T3: why did Britain remain # focused # on the promotion of free trade # until 1931? [rhe]
  • T3: and this is what we are going to address right now.
  • T3: okay? [con]
  • T3: and to do that ## we will have to take a glance ## sorry (going through slides) ## to this lovely table (going through slides) ### which is probably the Capilla Sixtina [Sistine Chapel] # of the # economic history temples.
There was a single instance of a language question from the lecturers to the students (extract 23), representing 0.01‰ of all the questions in the corpus. The lecturers thus did not generally seem to face language difficulties that would have prompted such questions, or at least they had the necessary resources to circumvent language-related issues.
Extract 23: Lecture 13
  • T4: Not to # eh how do you say this # to paños calientes? (the English equivalent to “not to go in for half measures”) [lan]

5.2. Second Research Question: Are the Questioning Tendencies Subject to Individual Differences among the Lecturers?

A breakdown of the questions asked by each of the lecturers revealed the existence of individual differences, as shown in Table 4. Due to the variability in the word counts of the lectures, and in order to facilitate comparisons among them, the data have been normalised to 1000 words. The number of tokens for each question type uttered by the lecturers is provided in parentheses.
T1’s questions were mainly confirmation checks, usually formulated with the interjection “mm?” (10.77‰), with display questions as the second most widely used category (3.06‰). He produced the highest number of self-answered questions of the four lecturers (1.62‰) and of off-task questions (2.58‰), although these represented a very small percentage of the total. If confirmation checks are disregarded, he asked the fewest questions of all the lecturers.
T2 asked considerably more questions than the other lecturers. His main trait was the use of confirmation checks, in particular those formulated with the expression “okay?” (50.83‰), although for the most part these questions did not receive an answer. If confirmation checks are disregarded, he used display and self-answered questions the most, like T1. Hence, T1 and T2 showed similar behaviours in their interactions with the students.
T3 produced twice as many questions as the rest of the lecturers if confirmation checks are disregarded. In particular, he mainly used display questions (6.59‰) and referential questions (5.81‰). Repetition questions were also somewhat more frequent in his case, as he adopted more direct strategies to promote student participation than the other lecturers. In fact, he was the only lecturer who called on specific students by name to answer a question. There may be a relationship between his calling on students by name and the fact that he asked the most questions, in particular referential questions, which were not strictly content-related (e.g., when he asked about students' personal views on a particular issue).
T4 asked the lowest number of questions overall. If confirmation checks are disregarded, his speech is characterised primarily by referential questions, followed by display questions. He also used procedural questions more than T1 and T2, although these amounted to only 0.30 per 1000 words. T3 and T4 obtained the highest rates of student participation according to our class observations and the recordings.

6. Discussion

The first research question guiding this study addressed the existence of general tendencies in the questions posed by the EMI history teachers during lessons. We based the analysis on the taxonomy put forward by Sánchez-García [23]. The first conclusion of the study is that, although the lecturers did not ask many questions, we did observe some tendencies in their behaviour. In particular, they resorted primarily to instructional question types (i.e., questions related to content), while regulative questions, those related to classroom procedures such as procedural and off-task questions, were extremely rare (see [13] for similar results in EMI teacher-fronted classes in Madrid). Out of the nine instructional question categories, confirmation checks were clearly the most widely used question type, followed at a considerable distance by display, referential and self-answered questions. The remaining instructional question categories proposed by Sánchez-García [23] were extremely rare. It is noteworthy that, within the regulative questions, procedural questions having to do with technical issues were slightly more likely to elicit the use of the participants' L1, a language that was otherwise never used in class. This may be because, as the subject matter was not the focus of these questions, a relaxation of the L2-only practice seemed justified, together with a propensity to voice emotions, such as the anxiety caused by technological complications, in one's L1.
Two additional conclusions can be drawn from the first research question on the question categories. The second conclusion is that the three most prevalent question categories, namely confirmation check, display and referential questions, do not seem to have been posed to fulfil their intended pedagogic goals. Thus, while it might appear that the lecturers constantly checked student comprehension, most of the confirmation questions were mere mechanical fillers devoid of meaning. Most of the display questions in our corpus were closed-ended, or started out as open-ended in a chain of questions that led to a closed-ended question requiring a simple short answer from the students. Furthermore, despite the lack of complexity of the expected response and the fact that the lecturers quite frequently provided cues to the answer, the students' responses to the display questions were short or not provided at all, as found in other EMI settings where students' unwillingness to ask questions seems to be exacerbated by linguistic limitations and their fear of speaking in front of the class [14,15,19,20,29]. Finally, referential questions, which normally dealt with students' opinions, personal habits and thoughts, did not trigger lengthier contributions in our study either. As stated by Sánchez-García [23] (p. 199) on referential questions:
more sophisticated, extensive and lengthy contributions would be expected, precisely because these questions address students’ personally and allow more assorted and tailor-made replies promoting on many occasions out-of-the-box critical thinking. Surprisingly, students’ output remains stagnant concerning referential enquiries and do not entail any observable change in terms of length or verbal complexity.
Hence it appears that confirmation checks, display and referential questions are unlikely to achieve learning gains in our context. The behaviour associated with these question categories could derive from a number of reasons, some of which lie outside the lecturer's control, such as students' lack of knowledge, the closed nature of the display questions, which may not be conducive to student participation [23] (p. 200), counter to the lecturers' beliefs (more on this below when dealing with our study's limitations), or other factors attributable to the EMI context (e.g., students' insecurity about their productive skills in the L2). However, EMI lecturers can modify certain aspects to make these questions more effective. One such aspect is wait time: it has been shown that teachers do not usually provide enough wait time for students to respond (Smith et al., 2003, cited in [28]). This is especially important in an EMI context, where students may require longer wait time due to the language barrier.
The third conclusion derived from the results of our study is that lecturers would benefit from receiving some linguistic and methodological training (e.g., on the kinds of questions that contribute more to learning), in particular in connection with the formulation of retrospective and repetition questions. Interestingly, none of the retrospective questions were formulated correctly, and neither were many of the repetition questions. In fact, all the instances of retrospective questions were formulated with the verb remember embedded in an utterance with rising intonation, mirroring the Spanish syntactic pattern, and a few bare interjections were used as repetition questions. Since the linguistic patterns used by the lecturers may serve as models that students incorporate into their own repertoire, it is essential that lecturers be reminded of the importance of formulating their questions correctly. In this regard, we believe that lecturers would benefit from observing recordings of their own classes and from training sessions dealing with the specific language issues detected during their viewing.
With regard to the second research question, our study revealed that each lecturer had specific traits in their production of questions. Firstly, we noted that, if confirmation checks are not taken into account, there were remarkable differences in the number of questions posed by the lecturers: T3 asked the most questions by far (15.54 per 1000 words), followed by T4 and T2 with considerably fewer (7.16‰ and 7.04‰, respectively), and T1 with only 6.16‰. The implications of these differences should be further researched, but our results suggest that customised training sessions could be very productive, as teachers' questioning practices vary considerably.
Secondly, all the lecturers used the same question categories, primarily confirmation checks, display, self-answered and referential questions, although to different extents. Once again, if confirmation checks are disregarded, T1 and T2 used display and self-answered questions the most, whereas T3 and T4 used display and referential questions the most, although in different proportions. These differences have an impact on teacher-student interaction, as T3 and T4 obtained the highest student response rates, although these were still low. When compared with the results obtained in a different EMI discipline (business administration), confirmation checks, display and referential questions were also the most frequently used by the lecturers [23]. Our results concur with Chang's [21], who also observed that the use of questions was similar irrespective of the disciplinary culture. Hence, it would be interesting to find the common ground among all the teachers and learn from the most successful strategies used by the lecturers. At the same time, there are also differences among the lecturers, which raises the question of whether these differences result from personal teaching styles, specific teaching goals and/or the EMI context. This is an issue that needs further research.
Finally, we believe that the extensive use of confirmation checks should be further examined. All the lecturers used confirmation check questions very frequently; this was particularly the case for T2. Whether this tendency is specific to the EMI context or a truly personal trait also needs to be addressed. It may be that a sense of insecurity related to the teachers' self-perceived level of competence in English [32] underlies this tendency, in which case this insecurity would need to be addressed.

7. Conclusions

Our study suffers from some limitations. First, we did not focus on students' answers because our study was aimed at analysing only the questioning practices of EMI teachers and, importantly, students' answers were not always intelligible in the recordings (e.g., due to background noise or students speaking in a low voice), which prevented us from carrying out a detailed analysis of them. Further research should aim at examining the impact of EMI teachers' questions on students' answers. Second, we did not study teachers' beliefs, and this is an interesting avenue for future research, as their beliefs may influence their instructional practices and interactions with students. In addition, it would be worth considering whether different disciplines have any impact on teachers' questioning practices, in other words, whether the questions posed by EMI lecturers from different disciplines differ from each other (for more on this, see [10]).
The systematic observation and analysis of teacher questioning behaviours is crucial, as it can help us to reach a better understanding of gains in student achievement [13]. This is why it is important to analyse teacher–student interaction. An important issue that needs to be addressed is EMI teachers' need to improve their use of questions, as our findings indicate that the negotiation of meaning is not as rich as it should be. Hence, it may be useful to provide content teachers with some training and to design teacher training courses that “include activities to develop interactional and multimodal competences” [33] (p. 320) in order to enhance student participation and promote learning. Similarly, teachers need to become aware of the fact that high-level questions facilitate deep comprehension [34], which is why display and referential questions should be used much more frequently in EMI classes. High-level questions are not yes-or-no questions; they never have an obvious answer, nor do they have a single correct one. They cannot be answered merely by simple recollection or by quoting directly from a written or oral text. High-level questions foster critical thinking because they require students to analyse, synthesise and evaluate information instead of just recalling a particular fact. For example, if the teacher asks “When was the French Revolution?” (a display question), the answer is simply a date; but if the same teacher asks “What were the causes behind the French Revolution?” (also a display question, but a much more cognitively demanding one), students must compare and contrast information, make judgements, explain the reasons for those judgements, and develop reasoning. The latter would be labelled a high-level question, whereas the former would not.
Both types of questions fulfil an important role in class, but EMI teachers need to realise that high-level questions foster interaction and thinking skills to a much higher degree.

Author Contributions

Conceptualization, A.D. and D.L.; methodology, A.D. and D.L.; software, A.D. and D.L.; validation, A.D. and D.L.; formal analysis, A.D. and D.L.; investigation, A.D. and D.L.; resources, A.D. and D.L.; data curation, A.D. and D.L.; writing—original draft preparation, A.D. and D.L.; writing—review and editing, A.D. and D.L.; visualization, A.D. and D.L.; supervision, A.D. and D.L.; project administration, D.L.; funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.


Funding

This work is part of the following research projects: PID2020-117882GB-I00 (Spanish Ministry of Science and Innovation) and IT1426-22 (Basque Government).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to privacy restrictions.


Acknowledgments

We would like to express a special debt of gratitude to Kebir Colmenero for his help with validation.

Conflicts of Interest

The authors declare no conflict of interest.


Transcription conventions
[…]  some part of the transcription is skipped for the sake of space
[dis]  display question
[ref]  referential question
[lang]  language question type
[con]  confirmation question type
[ret]  retrospective question type
[self]  self-answered question type
[rhet]  rhetorical question type
[ind]  indirect question type
[proc]  procedural question type
[off]  off-task question type
Xxx  unintelligible speech
( )  translation of Spanish terms/words into English
In bold  our emphasis
S  Student (since the students were not the focus of this study, all the students are referred to as S)


References

  1. Kawalkar, A.; Vijapurkar, J. Scaffolding science talk: The role of teachers’ questions in the inquiry classroom. Int. J. Sci. Educ. 2013, 35, 2004–2027.
  2. Al-Adeimi, S.; O’Connor, C. Exploring the relationship between dialogic teacher talk and students’ persuasive writing. Learn. Instr. 2021, 71, 101388.
  3. Chin, C. Teacher questioning in science classrooms: Approaches that stimulate productive thinking. J. Res. Sci. Teach. 2007, 44, 815–843.
  4. Rose, H.; McKinley, J. Japan’s English-medium instruction initiatives and the globalization of higher education. High. Educ. 2018, 75, 111–129.
  5. Lasagabaster, D. English-Medium Instruction in Higher Education; Cambridge University Press: Cambridge, UK, 2022.
  6. Howe, C.; Hennessy, S.; Mercer, N.; Vrikki, M.; Wheatley, L. Teacher–student dialogue during classroom teaching: Does it really impact on student outcomes? J. Learn. Sci. 2019, 28, 462–512.
  7. Lo, Y.Y.; Macaro, E. The medium of instruction and classroom interaction: Evidence from Hong Kong secondary schools. Int. J. Biling. Educ. Biling. 2012, 15, 29–52.
  8. Pun, J.; Macaro, E. The effect of first and second language use on question types in English medium instruction science classrooms in Hong Kong. Int. J. Biling. Educ. Biling. 2019, 22, 64–77.
  9. Van der Wilt, F.; Bouwer, R.; van der Veen, C. Dialogic classroom talk in early childhood education: The effect on language skills and social competence. Learn. Instr. 2022, 77, 101522.
  10. Lasagabaster, D.; Doiz, A. Classroom interaction in English-medium instruction: Are there differences between disciplines? Lang. Cult. Curric. 2022, 1–17.
  11. Kyriacou, C.; Issitt, J. What Characterizes Effective Teacher–Pupil Dialogue to Promote Conceptual Understanding in Mathematics Lessons in England in Key Stages 2 and 3? EPPI-Centre Report no. 1604R; Social Science Research Unit, Institute of Education, University of London: London, UK, 2008; Available online: (accessed on 12 May 2022).
  12. Rojas-Drummond, S.; Mercer, N. Scaffolding the development of effective collaboration and learning. Int. J. Educ. Res. 2004, 39, 99–111.
  13. Sánchez-García, M.D. Mapping lecturer questions and their pedagogical goals in Spanish- and English-medium instruction. J. Immers. Content-Based Lang. Educ. 2020, 8, 28–52.
  14. Airey, J. Science, Language and Literacy. Case Studies of Learning in Swedish University Physics; Acta Universitatis Upsaliensis; Uppsala Dissertations from the Faculty of Science and Technology: Uppsala, Sweden, 2009; Available online: (accessed on 8 May 2022).
  15. Airey, J.; Linder, C. Language and the experience of learning university physics in Sweden. Eur. J. Phys. 2006, 27, 553–560.
  16. Mortimer, E.F.; Scott, P.H. Meaning Making in Secondary Science Classrooms; Open University Press: London, UK, 2003.
  17. Dafouz, E.; Hüttner, J.; Smit, U. New context, new challenges: Oral disciplinary language development in English medium instruction (EMI) and its implications for teacher education. TESOL Q. 2018, 52, 540–563.
  18. Suviniitty, J. Lecturers’ questions and student perception of lecture comprehension. Hels. Engl. Stud. 2010, 6, 44–57.
  19. Engin, M. Contributions and silence in academic talk: Exploring learner experiences of dialogic interaction. Learn. Cult. Soc. Interact. 2017, 12, 78–86.
  20. Tsou, W. Interactional Skills in Engineering Education. In English as a Medium of Instruction in Higher Education: Implementations and Classroom Practices in Taiwan; Tsou, W., Kao, S.-M., Eds.; Springer: New York, NY, USA, 2017; pp. 79–93.
  21. Chang, Y.-Y. The use of questions by professors in lectures given in English: Influences of disciplinary cultures. Engl. Specif. Purp. 2012, 31, 103–116.
  22. Lammers, W.J.; Murphy, J.J. A profile of teaching techniques used in the university classroom. Act. Learn. High. Educ. 2002, 3, 54–67.
  23. Sánchez-García, M.D. A Contrastive Analysis of Spanish and English-Medium Instruction in Tertiary Education: Teacher Discourse Strategies in a Spoken Corpus. Ph.D. Thesis, Universidad Complutense de Madrid, Madrid, Spain, 2016.
  24. Lasagabaster, D.; Doiz, A. Language Use in English-Medium Instruction at University: International Perspectives on Teacher Practice; Routledge: New York, NY, USA, 2021.
  25. Guarda, M.; Helm, F. A Survey of Lecturers’ Needs and Feedback on EMI Training. In Sharing Perspectives on English-Medium Instruction; Ackerley, K., Guarda, M., Helm, F., Eds.; Peter Lang: Bern, Switzerland, 2017; pp. 167–194.
  26. Council of Europe. Common European Framework of Reference for Languages (CEFR): Learning, Teaching, Assessment; Council of Europe: Strasbourg, France, 2020.
  27. Tobin, K. The role of wait time in higher cognitive level learning. Rev. Educ. Res. 1987, 57, 69–95.
  28. Muijs, D.; Reynolds, D. Effective Teaching: Evidence and Practice, 3rd ed.; Sage: London, UK, 2011.
  29. Evans, S.; Morrison, B. The student experience in English-medium higher education in Hong Kong. Lang. Educ. 2011, 25, 147–162.
  30. Kim, E.G. English Medium Instruction in Korean Higher Education: Challenges and Future Directions. In English Medium Instruction in Higher Education in Asia-Pacific; Multilingual Education; Fenton-Smith, B., Humphreys, P., Walkinshaw, I., Eds.; Springer: New York, NY, USA, 2017; Volume 21, pp. 53–69.
  31. Doiz, A.; Lasagabaster, D. An analysis of the use of cognitive discourse functions in English-medium history teaching at university. Engl. Specif. Purp. 2021, 62, 58–69.
  32. Doiz, A.; Lasagabaster, D. Teachers’ and students’ L2 motivational self-system in English-medium instruction: A qualitative approach. TESOL Q. 2018, 52, 657–679.
  33. Morell, T.; Norte, N.; Beltrán-Palanques, V. How Do Trained English-Medium Instruction (EMI) Lecturers Combine Multimodal Ensembles to Engage Their Students? In La Docencia en la Enseñanza Superior: Nuevas Aportaciones desde la Investigación e Innovación Educativas; Roig-Vila, R., Ed.; Octaedro: Madrid, Spain, 2020; pp. 308–321.
  34. Cerdán, R.; Vidal-Abarca, E.; Martínez, T.; Gilabert, R.; Gil, L. Impact of question-answering tasks on search processes and reading comprehension. Learn. Instr. 2009, 19, 13–27.
Table 1. The participants and their classes.

| Lecturer | Subject | Years of Teaching Experience | Years of Teaching in EMI | Lectures Observed and Recorded | Number of Words Recorded |
| T1 | America in the modern age | 25 | 7 | 1, 4, 16 | 27,105 |
| T2 | Early modern history I | 16 | 2 | 2, 8, 9 | 21,027 |
| T3 | World economic history | 21 | 5 | 3, 6, 10 | 27,155 |
| T4 | Contemporary history of the Basque Country | 30 | 2 | 11, 13, 15 | 16,617 |
Table 2. Taxonomy of questions [13] (p. 32).

Instructional questions (related to content)
| Display | Those to which the answer is known by the teacher. |
| Referential | Those to which the answer is not known by the teacher. |
| Repetition | Those seeking repetition of the last word, idea, utterance, etc. |
| Language | Those seeking assistance as regards language matters. |
| Confirmation checks | Those aimed at ensuring understanding of the topic/lecture. |
| Retrospective | Those which require the students to recall previous information. |
| Self-answered | Those which are immediately answered by the teacher. |
| Rhetorical | Those to which no answer is expected. |
| Indirect | Those which are not uttered to get a response but to exemplify some situation. |

Regulative questions (related to classroom procedures)
| Procedural | Those which refer to the development of the lesson and do not focus on the content/language, but on the lecture itself or a particular activity. |
| Off-task | Those which refer to a topic that departs from the main subject. |
Table 3. Overall distribution of question categories.

Total number of words: 91,904

| Category | ‰ | Number of Questions |
| Confirmation check | 20.44 | 1879 |
| Display | 4.11 | 378 |
| Referential | 2.58 | 237 |
| Self-answered | 1.34 | 123 |
| Repetition | 0.45 | 41 |
| Retrospective | 0.35 | 32 |
| Procedural | 0.20 | 18 |
| Off-task | 0.18 | 17 |
| Indirect | 0.11 | 10 |
| Rhetorical | 0.02 | 2 |
| Language | 0.01 | 1 |
| TOTAL | 29.79 | 2738 |
Table 4. Overall distribution of questions by lecturer.

| Category | T1 (27,105 words) ‰ (n) | T2 (21,027 words) ‰ (n) | T3 (27,155 words) ‰ (n) | T4 (16,617 words) ‰ (n) |
| Confirmation check | 10.77 (292) | 50.83 (1069) | 13.40 (364) | 9.26 (154) |
| Display | 3.06 (83) | 4.04 (85) | 6.59 (179) | 1.86 (31) |
| Referential | 0.81 (22) | 0.99 (21) | 5.81 (158) | 2.16 (36) |
| Self-answered | 1.62 (44) | 1.23 (26) | 1.32 (36) | 1.02 (17) |
| Repetition | 0.07 (2) | 0.23 (5) | 0.92 (25) | 0.54 (9) |
| Retrospective | 0.18 (5) | 0.38 (8) | 0.18 (5) | 0.84 (14) |
| Procedural | 0.03 (1) | 0.14 (3) | 0.33 (9) | 0.30 (5) |
| Off-task | 0.26 (7) | 0.09 (2) | 0.18 (5) | 0.18 (3) |
| Indirect | 0.11 (3) | 0.04 (1) | 0.11 (3) | 0.18 (3) |
| Rhetorical | — | — | 0.07 (2) | — |
| Language | — | — | — | 0.06 (1) |
| TOTAL | 16.93 (459) | 57.87 (1220) | 28.94 (786) | 16.42 (273) |
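As a sanity check on the figures above, each lecturer's total in Table 4 should equal the sum of the raw token counts (the values in parentheses) in that column. A minimal sketch, with the counts transcribed from the table and variable names of our own choosing:

```python
# Per-category question counts for each lecturer, transcribed from Table 4
# in the order Confirmation check, Display, Referential, Self-answered,
# Repetition, Retrospective, Procedural, Off-task, Indirect (plus
# Rhetorical for T3 and Language for T4).
question_counts = {
    "T1": [292, 83, 22, 44, 2, 5, 1, 7, 3],
    "T2": [1069, 85, 21, 26, 5, 8, 3, 2, 1],
    "T3": [364, 179, 158, 36, 25, 5, 9, 5, 3, 2],   # includes 2 rhetorical
    "T4": [154, 31, 36, 17, 9, 14, 5, 3, 3, 1],     # includes 1 language
}
reported_totals = {"T1": 459, "T2": 1220, "T3": 786, "T4": 273}

for lecturer, counts in question_counts.items():
    # Each column of Table 4 sums exactly to the reported TOTAL row.
    assert sum(counts) == reported_totals[lecturer]
```

The four columns sum to 459, 1220, 786 and 273 questions respectively, matching the TOTAL row of the table.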
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Doiz, A.; Lasagabaster, D. An Analysis of the Type of Questions Posed by Teachers in English-Medium Instruction at University Level. Educ. Sci. 2023, 13, 82.
