Article

Rubric’s Development Process for Assessment of Project Management Competences

1 ALGORITMI Research Centre/LASI, Department of Production and Systems, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
2 School of Engineering, Federal University of Rio Grande (FURG), Rio Grande 96201-900, Brazil
3 Research Centre for Human Development (CEDH), Faculty of Education and Psychology, Universidade Católica Portuguesa (FEP-UCP), 4169-005 Porto, Portugal
4 School of Medicine, University of Minho, 4710-057 Braga, Portugal
* Authors to whom correspondence should be addressed.
Educ. Sci. 2022, 12(12), 902; https://doi.org/10.3390/educsci12120902
Submission received: 20 October 2022 / Revised: 27 November 2022 / Accepted: 7 December 2022 / Published: 9 December 2022
(This article belongs to the Special Issue Assessment and Evaluation in Higher Education—Series 2)

Abstract

Assessment rubrics are recognized for their positive effects; a rubric is an evaluative instrument that establishes assessment criteria and performance levels. In this sense, assessment rubrics can be associated with professional practices for more authentic assessment processes. In the context of Project Management, the International Project Management Association (IPMA) has developed a framework that establishes the individual competences for professionals working in the area, the Individual Competence Baseline (ICB). The objective of this study is to propose a process of rubric development for competence assessment in Project Management. A rubric for the Leadership competence was developed to show the applicability and relevance of the proposed process. The research methodology adopted in the study was Design Science Research. The application and evaluation of this rubric in a pilot study show that the rubric development process allowed the creation of a specific rubric for the assessment of leadership competence. This paper guides those who need to develop and assess project management competences, and the proposed process is intended to be replicable for the other ICB competences.

1. Introduction

In a globalized, digitalized, and multicultural world that is intensely competitive and rapidly evolving [1], project management contributes to increasing the economic results of organizations. In Germany, as in other Western economies, 34.7% of work is organized in the form of projects [2], which suggests the importance of projects and project management [1].
Project management has become a global profession [3], where organizations engage in projects, programs, and portfolios that cross organizational, regional, national, and international boundaries. Thus, people working on projects must deal with highly diverse challenges involving external stakeholders and a wide range of factors, e.g., culture, language, socioeconomic status, and organization types.
In this sense, the project management profession requires a wide range of competences to be successful [4]. The International Project Management Association (IPMA) has developed the Individual Competence Baseline (ICB) [3], a global framework that defines the competences required by individuals working in the field of project, program, and portfolio management. It defines individual competence as the application of knowledge, skills, and abilities to achieve the desired results [3] but does not describe assessment methods in detail. Therefore, it is necessary to plan the assessment and develop an instrument to support the assessment of competences [5]. Moreover, the ICB presents a competence classification scheme in three domains: Perspective-focused competences, which respond to the context of projects; people-focused competences, which respond to personal and social topics; and practice-focused competences, which respond to the specific practices, methods, and tools of project management [3].
The increasing importance and diversity of project management competences indicate a clear need to develop methods for the assessment of competences in this area of knowledge [6]. This need is reinforced in team acquisition, professional development, and the selection of people who effectively demonstrate the required competences [7,8], as well as in certification processes in which assessment should be closely linked with professional practices [9,10]. Furthermore, Starkweather and Stevenson [9] state that there is a need for more diverse, innovative, and rigorous assessment methods: “The temptation to rely on a checkmark in the certification box as indicative of an imprimatur for project management success needs to be supplanted with a more rigorous and specific assessment of a candidate’s communication and decision-making abilities.” Thus, the commonly used assessment methods do not seem to be sufficient to select, hire, or develop professionals for professional practice, and there is a need for approaches that help project management professionals, organizations, and certifying bodies assess readiness for the real challenges of the job [11]. Given this, it is striking that the literature on the assessment of project management competences is scarce.
Considering that rubrics allow for an assessment based on rigorous criteria and performance levels developed according to the context of the assessment [12,13], professionals can rely on the results to set goals and increase their performance. Thus, a process for developing rubrics to assess competences, based on a reference framework, may stand out as support for an assessment instrument that contributes to the development of project management professionals. Considering this need, this work aims to answer the following research question: “How to develop rubrics for the assessment of ICB project management competences, and in particular how to develop a leadership rubric?”
To answer this question, the objective of the study is to propose a process to develop rubrics for the assessment of project management competences, based on the ICB framework of competences. Additionally, this work aims to apply that process to develop one specific rubric for leadership, as a way to show the applicability of the process. The work was developed using Design Science Research, first to develop a process and then to apply the process for the development and evaluation of the rubric.

2. Theoretical Background

2.1. Assessment of Competences

Qualification frameworks such as the European Qualifications Framework (EQF) or the Assessment of Learning Outcomes in Higher Education (AHELO) are examples of studies that created a context for a generic definition of the required level of competences for different educational levels [14]. Nevertheless, there is still a need for methods and systems for the assessment of competences in specific contexts and specific areas of knowledge [15,16,17,18,19,20]. Although access to higher education qualifications has been increasing around the world, the diversity of international and intra-national contexts in the development of competences [14] creates a highly complex system for the assessment of competences.
The assessment of competences may be addressed by highly diverse methods and approaches, which may be useful in different contexts and settings [16] even if they are less structured or standardized. One may reflect upon cases where written tests, direct observations, or indirect observations may all be useful to address different competences. This leads one to consider that the utilization of several types of assessment methods and instruments, both in educational and professional settings [21], could, if practical, help in evaluating different parts of a competence or different types of competences. Interestingly, this may even include “spontaneous behaviors of individuals in the absence of a clearly delimited stimulus” [21].
A project manager’s competence may be assessed from two main perspectives [22], a deontological perspective—“right” action, focused on knowledge—or a consequentialist perspective—“best possible” outcome, focused on performance-based criteria. While these two approaches are not in full opposition, they are based on different perspectives that may influence the assessment process. As an example, one could say that a test would be mainly applied in the former perspective and a report of experience would be applied in the latter perspective.

2.2. Rubrics for Assessment

Rubrics are considered an effective approach to achieving reliable (consistent) and valid (accurate) professional judgment of performances [23]. Thus, rubrics may be considered important instruments in assessment processes [24], identified as tools that establish criteria and levels through a rating scale and can provide the most equitable and consistent assessment, reducing subjectivity [25]. Qualitatively, they describe and score the observable differences of the individual [12]. Following Dawson [26], rubrics are more than just a tool used to support evaluators in making decisions.
A rubric is usually structured as a matrix of rows and columns. The top row contains the various levels of performance, for example, ranging from “unacceptable” to “exemplary.” The leftmost column of the rubric matrix consists of the list of criteria or components that are being assessed. Each cell is defined by detailed descriptors that explain the specific competences to be demonstrated, covering the full range of performance levels for each criterion [13]. The use of rubrics is widely recognized for its positive effects, most notably because rubrics enhance the learning, teaching, assessment, and grading processes and are characterized as a simple tool for assessment support [27].
The use of rubrics is evidenced in a variety of contexts, such as in education, to evaluate healthcare professionals, and/or for professional evaluation. Brookhart and Chen [28], in an extensive review of research conducted between 2005 and 2013 on the use of assessment rubrics in educational settings, state that rubrics allow for gathering information about what learners know and can do to improve their performances [27]. According to Andrade et al. [29], the use of rubrics increases student performance; that is, with knowledge of the evaluative criteria, students are able, in a concrete way, to identify and thus mobilize the required competences. Thus, with the knowledge and perception of which evaluation criteria are important, individuals can develop and even improve their performance.
According to Dawson [26], the use of rubrics has increased both in research and practice, and the term has come to represent divergent practices. Since the beginning of its use in education, ‘rubric’ has not been a particularly clear term. In his study, he concluded that none of the 14 studies reviewed on rubrics to evaluate student work could be replicated. Papadakis et al. [30] created a rubric to evaluate the quality of educational apps for pre-schoolers. Using a sample that consisted of preschool teachers and undergraduate students in the department of preschool education, and with the improvements of two experts, the Rubric for the EValuation of Educational Apps for preschool Children (REVEAC) was developed. Tai et al. [31] stated that rubrics can be thought of as a scaffold to support the development of students’ evaluative judgement. In addition, Lo and Yang [32] developed a rubric associated with the simulation of childhood pneumonia to examine the learning efficacy of students using the rubric to assess knowledge, skills, and attitudes in the simulation. The results indicate that the rubric can help students to develop their learning process in a more organized way, promote the development of their childcare performances, and help them in their future clinical care practices.
Regarding the use of rubrics for professional assessment, according to [33], the interest in using rubrics in business programs is growing rapidly [24]. In the professional field, countries such as the United States, United Kingdom, Australia, France, and Turkey, as well as globally recognized certification bodies, namely the Association to Advance Collegiate Schools of Business (AACSB), the Association of MBAs (AMBA), and the EFMD Quality Improvement System (EQUIS), have adopted the assessment of learning outcomes as an assurance and certification process and have contributed to the growing awareness of the importance of rubrics in professional education [12]. Accordingly, project certification encourages individuals who work on projects to improve their knowledge and competences [3]. In this sense, the use of rubrics in the professional certification process is a viable solution to examine real situations, ensure the achievement of specified results, and contribute to professional development.
Thus, in either educational or organizational settings, the individual’s performance should increase with the understanding of quality criteria. In this study, the criteria used for the development of the rubric are those defined and recognized worldwide by IPMA [3].
According to Arcuria, Morgan, and Fikes [24], Brookhart [34], and Popham [35], rubrics have some essential elements, namely, assessment criteria, performance levels, and the scoring strategy.
Assessment criteria are the factors considered when determining the quality of work/activity and should provide clarity for understanding the performance requirements. They are characterized as detailed explanations of what is to be achieved to demonstrate the level of performance of a competence [12]. To evaluate and prepare future engineers, one is expected to identify the aspects of job performance that suggest success in the professional environment. However, when a rubric is used to evaluate the performance of professionals, the assessment criteria should be directly related to expected professional practices [18].
Performance levels have been minimally discussed in the literature, and there is no consensus on the ideal number of descriptive levels in a rubric [5]. According to Popham [35], three to five levels should be considered; Stevens and Levi [36] recommend a minimum of three levels; Brookhart [34] recommends a maximum of four levels. However, there is a consensus that they should be few and meaningful.
Regarding the scoring strategy, traditionally, when using a rubric, one should examine the criteria and assign a score according to the demonstrated competence of the appraisee [13]. In addition, the overall score may be averaged or a final score may be given, depending on the rater’s conception of the relative importance of each criterion [13].
Rubrics are generally categorized by two different aspects of their composition. The first aspect to consider is whether the rubric treats criteria one at a time or together, and the other is whether the rubric is general and can be used with a family of similar tasks or whether it is task-specific and applicable only to one assessment. It is these aspects that define the types of rubrics [24,27,34], which are commonly classified as analytic and holistic rubrics.
The design and development of rubrics have been widely studied in the educational field. Educational researchers have examined aspects of rubric design and have come up with recommendations for development [12]. Accordingly, one may use the steps described in Table 1.

2.3. Individual Competence Baseline (ICB)

The rapid recognition of project management as a global profession has given rise to several frameworks that define the profession in terms of techniques, concepts, and tools [37]. These frameworks present areas of knowledge, processes, or competences that are important in project management practice [38]; in this sense, they play crucial roles in project management and the success or failure of a project is linked to their correct application [39].
Frameworks are used in certification processes and in the development and assessment of professional competences since individuals who can demonstrate the knowledge and competences evidenced in the frameworks are considered professionally competent [37].
In project management, although several frameworks are available, few are based on individual competences and specify the essential competences for the development of people in project environments [40]. The International Project Management Association (IPMA) has positioned itself to be the first to promote a global framework focused on individual competences in project management [40].
The Individual Competence Baseline is an excellent reference source for those seeking an option for more human-centered project management methods [3]. Furthermore, it has a comprehensive structure and was developed over three years by more than 150 experts from around the world. It describes a coherent inventory of competence elements that an individual needs to have or develop to successfully master the work package, project, program, or portfolio they have been assigned to manage [3]. This framework does not detail competences by specific roles (e.g., project manager or planning expert) but rather in terms of what is required in the domain of project, program, and portfolio management.
According to IPMA [3], the term “competence” is understood as the “application of knowledge, aptitude and abilities to achieve the intended results”. Knowledge is the body of information and experience that an individual possesses. Aptitude is defined as the specific techniques that an individual knows and that enable him or her to perform a task. When individuals can effectively use knowledge and skills in a given context, they demonstrate competence. For example, being able to successfully plan and manage a project schedule can be considered a competence.
The IPMA [3] ICB has twenty-eight competence elements needed to manage projects and is divided into three competence areas: Perspective, people, and practice [3], as represented in Table 2. These areas focus on different aspects of competence, contributing to a holistic perspective of the individual.
Practice-focused competences (13 elements) are the skills required for specific methods, tools, and techniques used in projects, programs, or portfolios to realize their success. People-focused competences (10 elements) consist of the personal and interpersonal skills needed to successfully participate in or lead a project, program, or portfolio. Perspective-focused competences (5 elements) include the reference for tools, methods, and techniques through which individuals interact with the environment, as well as the rationale that leads people, organizations, and societies to initiate and support projects, programs, and portfolios.
One of the objectives of the ICB-IPMA [3] is to serve as a basis for defining and supporting the development of the individual competences of professionals working in the area. In this sense, it is an excellent reference to contribute to research in the development and assessment of competences. Figure 1 illustrates the main competence elements defined by the ICB [3], i.e., the competence is described or explained through indicators, which are characterized by the key measures, which may be seen as descriptors of each indicator.

3. Research Methodology

The objective of this study is to propose a process to build rubrics for assessing project management competences, based on the ICB-IPMA framework. Additionally, this work intends to apply the proposed process to develop a specific rubric for Leadership competence. Even though any rubric for any IPMA competence would be useful to show the applicability of the process, Leadership was selected due to its importance in project management teams.
Thus, as the objective of the work induces the need to develop the process and evaluate it using a rubric, Design Science Research was adopted. Design Science Research (DSR) is a research approach in which the object of study is the design process, i.e., it simultaneously generates knowledge about the method used to design an artefact [41]. Research based on Design Science must produce a viable artefact in the form of a construct, model, method, and/or an instantiation [42]. The Design Science research method cycle according to Takeda et al. [43] consists of five sub-processes:
(1) Awareness of the problem: Refers to the understanding of the problem involved [42]. The result is the definition and formalization of the problem to be solved, its boundaries (external environment), and the necessary satisfactory outcome.
(2) Suggestion: To suggest key concepts needed to solve the problem.
(3) Development: To construct candidate solutions to the problem from the key concepts using various types of design knowledge (when developing a candidate, if something unsolved is found, it becomes a new problem that should be solved in another design cycle).
(4) Evaluation: Defined as the process of verifying the behavior of the artefact in the environment for which it was designed.
(5) Conclusion: General formalization of the process and its communication to academic and professional communities.
Understanding the problem addressed by the study involved a broad literature review of the essential elements for the development of rubrics. The analysis of these elements aimed to identify the aspects that are defined and considered in rubrics for assessment [12,13,24,27,34]. Considering that instruments to evaluate Project Management competences associated with professional practice were not identified, there was a suggestion to create a process for the development of rubrics in this context. This was followed by the development of the process based on the IPMA Individual Competence Baseline (ICB) framework, which is described in Section 4. The evaluation of the process was based on the application of the process to develop a specific rubric (Section 5), which is followed by the conclusion.
The development phase was focused on the most subjective part of the development of a rubric, selecting and defining the boundaries of the area of knowledge, which is the first step of the procedure presented above [12] (see Table 1). To comply with the objectives of this phase, the research team developed a process that should reduce the subjectivity and be replicable for all competences of the ICB-IPMA competences baseline.
For the evaluation phase, a rubric was designed and evaluated, following the instructions defined in the Rubric Development Process (described in Section 4). The evaluation involved five professionals (n = 5) with 5 to 18 years of experience. The professionals were selected considering the following criteria: They should have training and practical experience in project management and project leadership, including in assessment and competence development. Only one of the participants did not have a master’s degree, but he had a specialization in project management. The other four participants had Master’s degrees, specifically one in Physics, another in Project Management, and two in Industrial Engineering and Management.
The participants were interviewed online through a semi-structured script to collect data for content validation of the leadership competence rubric. Informed consent was obtained from all participants involved in the study, and only one of the researchers kept the required information for the purpose of analysis, guaranteeing the protection of personal data.
The objective was to collect the perception of the professionals regarding their agreement and/or disagreement with the elements of the rubric and suggestions for improvement. During the interview, the professionals answered a questionnaire with general and specific dimensions using a five-point agreement Likert scale: 1—Strongly disagree, 2—Disagree, 3—Indifferent, 4—Agree, and 5—Strongly agree. In total, the questionnaire had 8 questions, specifically 3 global and 5 specific questions.
The general dimension included the following elements:
  • The rubric’s assessment criteria are relevant and measure important competences related to leadership practices in project contexts.
  • The rubric’s indicators do not address extraneous content and address all aspects of the intended content.
  • The rubric’s rating scale and performance levels are adequate for the assessment of the important indicators of leadership competence.
Regarding the specific dimension of the rubric, for each indicator, a question was developed: “The performance levels allow to identify all the Key measures of the intended performance, are clearly written and consistent across all scales (inadequate, lower than expected, satisfactory, good and excellent) of the rubric”.
To analyze the results of this questionnaire, Cronbach’s Alpha Coefficient and Content Validity Index (CVI) were calculated using the SPSS (Statistical Package for the Social Sciences) version 26 software.
Cronbach’s alpha coefficient is a technique used to evaluate the reliability and internal consistency of instruments [44], often referred to as the main reliability estimator [44,45]. The Content Validity Index (CVI) is a quantitative approach used to assess the content validity of an assessment instrument; this index measures the proportion of experts who agree on aspects of the instrument [46]. The index is calculated on a Likert scale and is measured through the item Content Validity Index (CVI-I) and the overall Content Validity Index (CVI-G), according to Equations (1) and (2).
According to Shrotryia and Dhanda [47], the Content Validity Index is widely used to determine content validity; however, it does not consider the amount of agreement that may occur by chance among experts. The acceptance threshold of the index follows that proposed by [46], namely CVI ≥ 0.80.
$$\mathrm{CVI\text{-}I} = \frac{\text{Number of professionals agreeing on the item}}{\text{Total number of professionals}} \quad (1)$$
$$\mathrm{CVI\text{-}G} = \frac{\sum \mathrm{CVI\text{-}I}\ \text{of each item}}{\text{Number of items}} \quad (2)$$
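To make these two measures concrete, the sketch below is a minimal Python illustration computing Cronbach’s alpha and the item and overall Content Validity Index from invented 5-point responses of five professionals to the eight questionnaire items; the study itself used SPSS, and treating ratings of 4 or 5 as agreement is an assumption of the sketch.

```python
import numpy as np

# Hypothetical 5-point Likert responses: rows = 5 professionals, columns = 8 items.
# The values below are invented for illustration only.
responses = np.array([
    [5, 4, 5, 4, 3, 4, 5, 4],
    [4, 5, 5, 3, 4, 4, 4, 5],
    [5, 5, 4, 4, 2, 5, 4, 4],
    [4, 4, 5, 5, 4, 3, 5, 4],
    [5, 4, 4, 4, 4, 4, 4, 5],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def cvi(items: np.ndarray, agree_threshold: int = 4):
    """CVI-I per item (Equation (1)) and overall CVI-G (Equation (2)).

    Agreement is taken here as a rating of 4 ('Agree') or 5 ('Strongly agree');
    this operationalization is an assumption of the sketch.
    """
    cvi_i = (items >= agree_threshold).mean(axis=0)  # proportion of professionals agreeing
    cvi_g = cvi_i.mean()                             # average of the CVI-I values over all items
    return cvi_i, cvi_g

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
cvi_i, cvi_g = cvi(responses)
print("CVI-I per item:", np.round(cvi_i, 2), "| CVI-G:", round(cvi_g, 2))
```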
Finally, this rubric was applied in a pilot study, from which an improvement to the leadership rubric was derived. The pilot study was conducted in an online environment in January 2022. The online setting included a scenario with a leadership challenge to which each participant had to react, explaining how they would overcome the challenge. Scenarios allow professionals to respond to challenges supported by their competences and to be evaluated according to the situation they are confronted with [11,48].
The pilot study included the participation of 10 evaluated professionals and 2 experts acting as evaluators. The professionals were recruited mainly through a Master’s in Engineering Project Management programme at a Portuguese university; 4 were female and 6 were male, and 7 were Brazilian, 2 were Portuguese, and 1 was Colombian. All participants had knowledge and experience in project management and project leadership competences. Most had a background in engineering, and all had already participated in projects, either on a project team or assuming the leadership of the project.
The evaluators were two experts with experience in assessment. Expert 1 is a professor and researcher with more than 25 years of experience in the academic world, with high-impact publications, projects, and cooperation with universities and companies. The other expert is a professor and researcher with more than 15 years of research experience in competence development.
Each evaluator received the rubric in an Excel file so that they could mark with an “x” the proficiency level of each professional and obtain the overall rating. The evaluators’ scores were subjected to the ICC (Intraclass Correlation Coefficient) statistical test to measure inter-rater reliability. Discrepancies between scores were not discussed, and a consensus was not sought.
At the end of the assessment process, perceptions regarding difficulties and suggestions concerning the use of the rubric were gathered from the evaluators.
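For readers who want to reproduce this reliability check without SPSS, the sketch below computes a two-way ICC directly from the evaluators’ score matrix; the specific form, ICC(2,1), and the example ratings are assumptions, since the paper only states that an ICC test was used.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: one row per assessed professional, one column per rater. The choice of
    this ICC form is an assumption of the sketch, not stated in the paper.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical overall ratings given by the two evaluators to the 10 professionals.
ratings = np.array([[4, 4], [3, 3], [5, 4], [2, 3], [4, 4],
                    [3, 3], [5, 5], [4, 3], [3, 4], [4, 4]])
print(f"ICC(2,1) = {icc2_1(ratings):.3f}")
```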

4. Rubrics Development Process

This section aims to present the process developed to design rubrics for assessing ICB-IPMA project management competences. The process is mostly based on the work of Reddy [12], represented in the left part of Figure 2, improved with processes and details that make the development of rubrics less ambiguous. Figure 2 illustrates the overall process that is described in the remaining parts of this section.
Following this study, a process was developed that can be replicated and applied in different project management contexts and for different competences. The following sections present the rubric development process.

4.1. Identification of the Competences and Evaluation Criteria

To be able to identify competences and criteria of the project management area of knowledge, one may use the definition, indicators, and key measures described by the ICB-IPMA. Furthermore, this framework may be used as the basis for the application of a replicable process, contributing in this way to reducing the usual ambiguity of this part of the rubric’s development.
As can be seen in Figure 1, for each competence element of the ICB-IPMA, there is a set of key indicators that may be used as the main criteria to assess the competence. The indicators are observable elements that allow an understanding of the individual’s performance in general. To obtain more specific observable elements of a competence, one can use the key measures defined in the ICB-IPMA. The key measures are characterized as a set of evaluative elements that describe specific ways of satisfying each indicator. For a given indicator, one can have, for example, three, four, or even five performance key measures that characterize it. According to IPMA [3], the ICB framework, including indicators and key measures, may be seen as a “companion on the journey of lifelong individual progression, from self- or external assessment of actual competence level, through the definition of desired development steps to the evaluation of achievements.”
Following this discussion, one may define a process for the definition of the criteria to assess each competence defined by the ICB-IPMA. Accordingly, Figure 3 presents the process diagram, designed with the BPMN standard language [49], which can be replicated for the definition of all project management competences.
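To make the mapping of Figure 1 concrete, the output of this criteria-definition step can be thought of as a simple data structure in which each ICB competence element holds its indicators and their key measures. The Python sketch below is one possible representation, not part of the ICB itself; the indicator and key-measure texts are taken from the Leadership element in Table 4, with the remaining indicators omitted.

```python
# Minimal sketch: an ICB competence element represented as indicators with key measures,
# which become the rubric's assessment criteria (Section 4.1). Texts follow Table 4.
leadership = {
    "competence": "Leadership",
    "indicators": {
        "I1. Initiate actions and proactively offer help and advice": [
            "Proposes or exerts actions",
            "Offers unrequested help or advice",
            "Thinks and acts with a future orientation (i.e., one step ahead)",
            "Balances initiative and risk",
        ],
        "I2. Take ownership and show commitment": [
            "Demonstrates ownership and commitment in behaviour, speech and attitudes",
            "Talks about the project in positive terms",
            "Rallies and generates enthusiasm for the project",
            "Sets up measures and performance indicators",
            "Looks for ways to improve the project processes",
            "Drives learning",
        ],
        # ... the remaining indicators follow the same pattern
    },
}

# The indicators are used directly as the rubric's assessment criteria.
assessment_criteria = list(leadership["indicators"].keys())
print(assessment_criteria)
```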

4.2. Identification of Levels of Performance

Several levels must be defined for the interpretation and judgement of the participant’s performance. As mentioned before, there is no fixed reference about the exact number of levels that must be included in a rubric. However, the number of levels on the rating scale should be developed consistently and should be applied to diverse types of competences. Taking these ideas as a starting point and considering that the rubric would be used by professionals, and for professionals, in highly diverse contexts and competences, the research team decided to select the following 5 levels of performance: Inadequate (1); lower than expected (2); satisfactory (3); good (4); and excellent (5). This scale was also analyzed later in the evaluation phase by professionals and experts.

4.3. Development of Descriptive Scoring Schemes

After the definition of the performance levels, it was necessary to develop descriptive scoring schemes for each level, considering that this is an analytical type of rubric.
Part of this phase was dedicated to defining a way to quantitatively assess individual competences. For a rubric to produce a rating, a procedure or set of rules must be defined, which often involves the distribution of weightings per criterion. In this sense, it is necessary to understand the level of performance for each criterion as well as the overall performance, including a global assessment [17,27]. The scoring scale considers the following aspects (a small computational sketch follows the list):
  • Each performance level assigns a numeric level from 1 to 5 to the competence indicator/criterion. If there are several evaluators, the average evaluation of all evaluators is translated into a level according to Table 3.
  • The competence assessment level is defined by the weighted average of all performance levels of the competence indicators. This is of the utmost importance if, in some specific context, a criterion (indicator) has a greater impact and/or significance in the assessment process; in such cases, different weights can be assigned.
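The following sketch is a hypothetical Python fragment illustrating both aspects of the scoring scale: translating an averaged evaluation into a performance level and computing the weighted average across indicators. The level boundaries follow Table 3; the indicator scores and the equal weights are invented for illustration.

```python
# Sketch of the scoring strategy: translate an averaged evaluation into a performance
# level (boundaries as in Table 3) and combine indicator levels into a weighted average.
# Indicator identifiers, scores, and weights below are illustrative only.

PERFORMANCE_LEVELS = [
    (1.0, "Inadequate"),           # evaluation <= 1.0
    (2.0, "Lower than expected"),  # 1 < evaluation <= 2.0
    (3.0, "Satisfactory"),         # 2 < evaluation <= 3.0
    (4.0, "Good"),                 # 3 < evaluation <= 4.0
    (5.0, "Excellent"),            # 4 < evaluation <= 5.0
]

def to_level(evaluation: float) -> str:
    """Translate the (possibly averaged) evaluation score into a performance level."""
    for upper_bound, label in PERFORMANCE_LEVELS:
        if evaluation <= upper_bound:
            return label
    return PERFORMANCE_LEVELS[-1][1]

def competence_score(indicator_scores: dict, weights: dict) -> float:
    """Weighted average of indicator-level scores (equal weights unless a criterion
    has greater impact in a specific assessment context)."""
    total_weight = sum(weights.values())
    return sum(indicator_scores[i] * weights[i] for i in indicator_scores) / total_weight

# Two evaluators scored indicator I1 with 4 and 5 -> average 4.5 -> "Excellent".
print(to_level((4 + 5) / 2))

scores = {"I1": 4.5, "I2": 3.0, "I3": 4.0}
weights = {"I1": 1.0, "I2": 1.0, "I3": 1.0}  # equal weights by default
overall = competence_score(scores, weights)
print(round(overall, 2), "->", to_level(overall))
```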

5. Development, Application and Evaluation of a Rubric

A rubric was developed and applied in a pilot study for the evaluation of the proposed process. Even though any competence rubric could be selected to evaluate the development process, the research team selected Leadership competence. This decision was influenced by the fact that it is considered of the utmost importance for the success of projects [3,50], and thus, could have a direct impact on the project management area, helping organizations to reach their business goals. Additionally, this could contribute to overcoming a gap in project management, as the authors did not find any article that addressed the development and use of rubrics for the assessment of leadership competences.
Thus, to develop the leadership rubric, according to the developed process, the first step (Section 4.1) was to identify the indicators and key measures for evaluation (Table 4).
The team developed two versions of the descriptive scoring schemes, to identify which one would best fit the objective of the study. The first version was characterized by longer wording, including all performance key measures. As this could become too long, a second version was created with shorter wording, where the indicators were presented in another part of the rubric. The difference between the two versions was evaluated during the evaluation phase/pilot study by professionals and experts.
An example of the two wording versions of the assessment criterion for leadership competence is shown in Table 5. In this example, the indicator “Initiate actions and proactively offer help and advice” has the following four performance key measures: “Proposes or exerts actions”; “Offers unrequested help or advice”; “Thinks and acts with a future orientation (i.e., one step ahead)”; “Balances initiative and risk”. As can be seen, version 1 requires some skill to create meaningful text that includes all key measures.

5.1. Evaluation of the Leadership Rubric

In this phase, the developed rubric was evaluated by project management professionals (n = 5), who were interviewed, and data were collected about their perceptions. The results indicate that the assessment criteria (indicators, question 1; key measures, question 2) and the performance level scale and wording (question 3) gathered a strong level of agreement. The results for the rubric’s assessment criteria (indicators and key measures) and rating scale and performance levels are illustrated in Figure 4. The full questions can be reviewed in Section 3.
In addition, the following five questions, related to each indicator, were checked: “Considering the [Indicator X] the performance levels are written clearly and consistently across all scales and allow to identify all the Key measures”. An analysis structure was created (Table 6), which aggregates the negative points of the scale (“disagree” and “strongly disagree”), the neutral point (“indifferent”), and the positive points of the scale (“agree” and “strongly agree”). The analysis of these results shows that none of the indicators received a negative classification, presenting a predominantly positive assessment. Nevertheless, indicator 2, “Take ownership and show commitment”, shows a neutral level very similar to the positive one, which could indicate lower agreement with the wording of the scale. Since the wording is directly associated with the ICB-IPMA framework, no additional results indicated a need to change the scale, and there was no strong opposition, the research team decided to keep the scale.
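As an illustration of this analysis structure, the short sketch below collapses the five-point responses for one indicator into negative, neutral, and positive counts; the response values are invented for illustration.

```python
from collections import Counter

# Hypothetical 5-point Likert responses from the five professionals for one indicator.
responses_for_indicator = [4, 5, 3, 4, 3]

def collapse(responses):
    """Aggregate Likert points into negative (1-2), neutral (3), and positive (4-5)."""
    counts = Counter()
    for r in responses:
        if r <= 2:
            counts["negative"] += 1
        elif r == 3:
            counts["neutral"] += 1
        else:
            counts["positive"] += 1
    return dict(counts)

print(collapse(responses_for_indicator))  # e.g., {'positive': 3, 'neutral': 2}
```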
The result of the internal consistency of the questionnaire presented a Cronbach’s alpha of 0.849, indicating good internal consistency [51]. The inter-expert agreement index was measured by the Content Validity Index (CVI) (Table 7).
The item-level Content Validity Index (CVI-I) shows 100% agreement among professionals for questions 1, 2, and 3. For questions 4, 6, 7, and 8, an acceptable agreement of 80% was observed, and for question 5, a low agreement of 60% was observed among the professionals. For the items with lower agreement, improvements were made to the rubric. The overall result indicated an overall Content Validity Index (CVI-G) > 0.80, indicating overall agreement within acceptable parameters.

5.2. Application and Improvement of the Leadership Rubric

This section describes the main results obtained during the application and the final improvements to the rubric.
The rubric development process must include a pilot study, which in this case focused on applying the rubric used as an example of the process, the leadership rubric. The pilot study was conducted with 10 professionals being assessed in a leadership scenario, where each one had to give evidence of their competences through the way they would overcome the challenge set by the scenario. Additionally, two experts acted as evaluators and gave feedback about the process. This feedback was analyzed and, whenever considered useful, was used for the improvement of the rubric.
Regarding the difficulties experienced with the assessment rubric, the experts reported initial difficulties. To help deal with those initial difficulties, one of the experts proposed running a simulation before applying the rubric.
“During the assessment process, I initially felt some difficulty to capture all the variants of the assessment elements. Because there are many indicators and many measures, it is difficult for us to be prepared to hear what is happening in the scenario, to observe, and at the same time link it to all those elements. So, in the beginning, it’s a little hard to get started, but after a while, you can handle the model.”
(Expert 1)
“As the process went on, I felt more comfortable and secure, I was even more agile in using the score sheet.”
(Expert 2)
“At the beginning using the rubric and being aware of the scenario, etc. Familiarizing myself in practice with the instruments helped. Hence, suggesting a simulation beforehand would help in preparation for the assessment process.”
(Expert 2)
Both evaluators also stated that they preferred to assess using the rubric model with a simple descriptive performance level (version 2 of Table 5).
“The main difficulty I felt was covering all the assessment measures and all the indicators. In my case, it was easier when I didn’t have to evaluate by measures.”
(Expert 1)
“I felt more comfortable with an assessment by indicator rather than by measure; however, the information from the measures was critical to a better understanding of each indicator.”
(Expert 2)
Several suggestions emerged from the professionals during the evaluation phase and from the pilot study experts. Thus, two improvements were proposed for the final leadership rubric, namely:
(a) Choice of the descriptive scoring scheme.
(b) Changing the layout of the key measures.
It was decided to group, using general wording, the performance for each criterion evaluated (version 2) so that the process could be replicated and standardized for other items. The wording containing all the measures (version 1) generates a more extensive grid and makes the assessment process more difficult, as mentioned by both experts.
The other improvement proposal, raised by both experts, concerned the arrangement of the key measures in the rubric. The change in the arrangement of the key measures emerged from the evaluators’ indication that they evaluate and observe according to the indicators. In summary, the performance key measures support the assessment process and, in this case, can be relocated to the bottom of the rubric.
These changes were considered important contributions, mainly for the visualization of the assessment criteria. Compared to the previous version, improvements were sought so that the rubric would better represent the assessment criteria and the performance levels.
Thus, the final version of the project management rubric can be seen in Table 8, with the indicators (criteria) represented on the left of the rubric. The top row presents the levels of performance, defined in the following way: Inadequate (1); lower than expected (2); satisfactory (3); good (4); and excellent (5). The cells present a short description of the performance for each indicator, thus establishing: “The demonstration of this indicator and its measures is {‘not adequate’ | ‘lower than expected’ | ‘satisfactory’ | ‘good’ | ‘excellent’}” [12].
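As a small illustration of how the version-2 cells can be generated in a replicable way, the hypothetical sketch below builds the descriptor text for each indicator and performance level; the indicator list is abbreviated and the cell template follows the wording quoted above.

```python
# Sketch: generate the short (version 2) cell descriptors of the final rubric.
# The template follows the quoted wording; the indicator list is abbreviated.

LEVEL_LABELS = ["not adequate", "lower than expected", "satisfactory", "good", "excellent"]

indicators = [
    "Initiate actions and proactively offer help and advice",
    "Take ownership and show commitment",
    # ... remaining Leadership indicators
]

def cell_text(level_label: str) -> str:
    return f"The demonstration of this indicator and its measures is {level_label}."

rubric_rows = {
    indicator: {label: cell_text(label) for label in LEVEL_LABELS}
    for indicator in indicators
}

print(rubric_rows["Take ownership and show commitment"]["good"])
```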

6. Discussion

Different instruments can be used for competence assessment, such as written tests, interviews, portfolios, rubrics, and direct or indirect observations. These instruments and respective criteria should allow one to create links between theory and practice during the mobilization of competences [52]. Thus, the development of a design process for project management rubrics that can be replicated to develop diverse rubrics and assess different competences is an important contribution to the articulation between theory and practice. This is particularly evident because the rubric is based on a widely recognized framework built by professionals and has become an excellent way to effectively prepare professionals for their practice [18].
The quality of a rubric [12] may be verified by different means, including the involvement of experts. In that sense, to ensure the quality of the proposed process, validation by experts contributed important improvements to the final version of the instrument. Regarding the leadership rubric developed, the experts emphasized the contribution of the instrument to evaluating candidates, with no fundamental difficulty. They mainly refer to the idea that experience with the rubric and the process makes it easier to apply and that the less detailed scale (version 2) is more intuitive.
According to Arcuria, Morgan, and Fikes [24], Brookhart [34], and Popham [35], rubrics should have some essential elements: Assessment criteria, performance levels, and a scoring strategy. Thus, the evaluation criteria, indicators, and key measures were evaluated. The rubric sought to contemplate the main indicators and key measures of project management; however, these criteria should be associated with the purpose and context of the evaluation, so one should analyze which evaluative criteria to include in each context. Finally, the scoring scale allowed us to identify the performance level of each candidate/professional.
Assessment rubrics have benefits in different contexts, namely, evaluating students [26,28,29,30,31,32]; the evaluation of professionals [18,33]; and from an organizational perspective, the identification of needed competences [24]. From the perspective of the organization, such a process of development of rubrics may be used for the selection of new employees or internal evaluation for professional development purposes. Moreover, the use of rubrics, in accreditation processes, allows professionals to reflect and observe the evaluation parameters, directing their efforts to develop their competences.

7. Conclusions

This work presents a process for the development of rubrics, which may be included in the class of assessment methods that try to capture the level of performance, linking it with the knowledge required to perform according to the expected professional or educational outcome [24,34]. From this perspective, this work proposed a process for the development of rubrics for the assessment of ICB-IPMA project management competences. The proposed process leans on the work of Reddy [12], adding a standard process to design rubrics for all ICB-IPMA competences in a replicable way. The IPMA [3] does not describe ways to assess project management competences, nor, to the best of our knowledge, do other works define a systematic process for the development of rubrics for project management competences.
The overall process of development was evaluated through the development of a specific rubric for Leadership competence, including the required objects of knowledge and criteria (proposed process); performance scale wording and valuation; feedback and validation with professionals; and piloting with experts and final improvement.
Moreover, the detail presented in the design, evaluation, and application of the leadership rubric may support other researchers, practitioners, or teachers to develop rubrics for other settings and areas of knowledge, namely, to support certification processes, selection and development of team members, and hiring new employees. In the academic environment, it presents a pedagogical contribution and can be used in different methodological approaches, such as in pre- and post-assessment of student competences. Moreover, it is characterized as a differentiated assessment instrument compared to traditional educational assessment instruments, such as written tests, by integrating assessment criteria associated with professional practices.
In future work, the authors intend to develop rubrics to assess other competences defined in the ICB, applying the proposed rubric development process and replicating it for other competences.

Author Contributions

Conceptualization, M.S., É.M., R.M.L. and D.M.; Methodology, M.S., É.M., R.M.L. and D.M.; Formal analysis, M.S., É.M., R.M.L. and D.M.; Investigation, M.S.; Writing—original draft, M.S. and É.M.; Writing—review & editing, M.S., R.M.L., D.M. and M.J.C.; Supervision, R.M.L. and M.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This Work was supported by the FCT–Fundação para a Ciência e Tecnologia, Portugal, within the Project Scope UIDB/00319/2020.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the questionnaire examination by experts, who also assessed the ethical statements.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

The authors acknowledge the support of the Federal University of Rio Grande (FURG) and of FCT–Fundação para a Ciência e Tecnologia, Portugal, in the development of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Magano, J.; Silva, C.; Figueiredo, C.; Vitória, A.; Nogueira, T.; Dinis, M.A.P. Generation Z: Fitting project management soft skills competencies—A mixed-method approach. Educ. Sci. 2020, 10, 187. [Google Scholar] [CrossRef]
  2. Schoper, Y.G.; Wald, A.; Ingason, H.T.; Fridgeirsson, T.V. Projectification in Western economies: A comparative study of Germany, Norway and Iceland. Int. J. Proj. Manag. 2018, 36, 71–82. [Google Scholar] [CrossRef]
  3. IPMA. Individual Competence Baseline for Project, Programme & Portfolio Management. In International Project Management Association; IPMA: Nijkerk, The Netherlands, 2015; Volume 4, Available online: https://www.ipma.world/ (accessed on 1 March 2022).
  4. Burke, R.; Barron, S. Project Management Leadership: Building Creative Teams, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  5. Hagler, D.; Wilson, R. Designing nursing staff competency assessment using simulation. J. Radiol. Nurs. 2013, 32, 165–169. [Google Scholar] [CrossRef]
  6. Baartman, L.K.J.; Bastiaens, T.J.; Kirschner, P.A.; van der Vleuten, C.P.M. Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks. Educ. Res. Rev. 2007, 2, 114–129. [Google Scholar] [CrossRef]
  7. Petrov, M. An Approach to Changing Competence Assessment for Human Resources in Expert Networks. Future Internet 2020, 12, 169. [Google Scholar] [CrossRef]
  8. Oh, M.; Choi, S. The competence of project team members and success factors with open innovation. J. Open Innov. Technol. Mark. Complex. 2020, 6, 51. [Google Scholar] [CrossRef]
  9. Starkweather, J.A.; Stevenson, D.H. PMP certification as a core competency: Necessary but not sufficient. Proj. Manag. J. 2011, 42, 31–41. [Google Scholar] [CrossRef]
  10. Farashah, A.D.; Thomas, J.; Blomquist, T. Exploring the value of project management certification in selection and recruiting. Int. J. Proj. Manag. 2019, 37, 14–26. [Google Scholar] [CrossRef]
  11. Tinoco, E.; Lima, R.; Mesquita, D.; Souza, M. Using scenarios for the development of personal communication competence in project management. Int. J. Proj. Organ. Manag. 2022; forthcoming. Available online: https://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijpom (accessed on 1 October 2022).
  12. Reddy, M. Design and development of rubrics to improve assessment outcomes: A pilot study in a Master’s level business program in India. Qual. Assur. Educ. 2011, 9, 84–104. [Google Scholar] [CrossRef]
  13. Salinas, J.; Erochko, J. Using Weighted Scoring Rubrics in Engineering Assessment. In Proceedings of the Canadian Engineering Education Association (CEEA), Hamilton, ON, Canada, 31 May–3 June 2015. [Google Scholar]
  14. Zlatkin-Troitschanskaia, O.; Shavelson, R.; Kuhn, C. The International state of Research on Measurement of Competency in Higher Education. Stud. High. Educ. 2015, 40, 393–411. [Google Scholar] [CrossRef]
  15. Hatcher, R.; Fouad, N.; Grus, C.; Campbell, L.; McCutcheon, S.; Leahy, K. Competency benchmarks: Practical steps toward a culture of competence. Train. Educ. Prof. Psychol. 2013, 7, 84–91. [Google Scholar] [CrossRef]
  16. Van Der Vleuten, C.; Schuwirth, L. Assessing professional competence: From methods to programmes. Med. Educ. 2005, 39, 309–317. [Google Scholar] [CrossRef] [PubMed]
  17. Succar, B.; Sher, W.; Williams, A. An integrated approach to BIM competency assessment, acquisition and application. Autom. Constr. 2013, 35, 174–189. [Google Scholar] [CrossRef]
  18. Tobajas, M.; Molina, C.; Quintanilla, A.; Alonso-Morales, N.; Casas, J.A. Development and application of scoring rubrics for evaluating students’ competencies and learning outcomes in Chemical Engineering experimental courses. Educ. Chem. Eng. 2019, 26, 80–88. [Google Scholar] [CrossRef]
  19. Sillat, L.; Tammets, K.; Laanpere, M. Digital competence assessment methods in higher education: A systematic literature review. Educ. Sci. 2021, 11, 402. [Google Scholar] [CrossRef]
  20. Redman, A.; Wiek, A.; Barth, M. Current practice of assessing students’ sustainability competencies: A review of tools. Sustain. Sci. 2021, 16, 117–135. [Google Scholar] [CrossRef]
  21. Drisko, J. Competencies and their assessment. J. Soc. Work. Educ. 2014, 50, 414–426. [Google Scholar] [CrossRef]
  22. Bredillet, C.; Tywoniak, S.; Dwivedula, R. What is a good project manager? An Aristotelian perspective. Int. J. Proj. Manag. 2015, 33, 254–266. [Google Scholar] [CrossRef] [Green Version]
  23. Pellegrino, J.; Baxter, G.; Glaser, R. Addressing the “two disciplines” problem: Linking theories of cognition and learning with assessment and instructional practice. Rev. Res. Educ. 1999, 24, 307–353. [Google Scholar]
  24. Arcuria, P.; Morgan, W.; Fikes, T. Validating the use of LMS-derived rubric structural features to facilitate automated measurement of rubric quality. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge, New York, NY, USA, 4–8 March 2019; pp. 270–274. [Google Scholar]
  25. Shipman, D.; Roa, M.; Hooten, J.; Wang, Z. Using the analytic rubric as an evaluation tool in nursing education: The positive and the negative. Nurse Educ. Today 2012, 32, 246–249. [Google Scholar] [CrossRef] [PubMed]
  26. Dawson, P. Assessment rubrics: Towards clearer and more replicable design, research and practice. Assess. Eval. High. Educ. 2017, 42, 347–360. [Google Scholar] [CrossRef]
  27. Fernandes, D. Avaliação de Rubricas. Critério 2021, 1, 3. [Google Scholar]
  28. Brookhart, S.; Chen, F. The quality and effectiveness of descriptive rubrics. Educ. Rev. 2015, 67, 343–368. [Google Scholar] [CrossRef]
  29. Andrade, H.; Du, Y.; Mycek, K. Rubric-referenced self-assessment and middle school students’ writing. Assess. Educ. Princ. Policy Pract. 2010, 17, 199–214. [Google Scholar] [CrossRef]
30. Papadakis, S.; Kalogiannakis, M.; Zaranis, N. Designing and creating an educational app rubric for preschool teachers. Educ. Inf. Technol. 2017, 22, 3147–3165.
31. Tai, J.; Ajjawi, R.; Boud, D.; Dawson, P.; Panadero, E. Developing evaluative judgement: Enabling students to make decisions about the quality of work. High. Educ. 2018, 76, 467–481.
32. Lo, K.-W.; Yang, B.-H. Development and learning efficacy of a simulation rubric in childhood pneumonia for nursing students: A mixed methods study. Nurse Educ. Today 2022, 119, 105544.
33. Martell, K. Assessing student learning: Are business schools making the grade? J. Educ. Bus. 2007, 82, 189–195.
34. Brookhart, S. How to Create and Use Rubrics for Formative Assessment and Grading; ASCD: Alexandria, VA, USA, 2013.
35. Popham, W. What’s Wrong—and What’s Right—With Rubrics. Educ. Leadersh. 1997, 55, 72–75.
36. Stevens, D.; Levi, A. Leveling the field: Using Rubrics to achieve greater equity in teaching and grading. Essays Teach. Excell. 2005, 17, 1.
37. Chen, P.; Partington, D. Three conceptual levels of construction project management work. Int. J. Proj. Manag. 2006, 24, 412–421.
38. Wawak, S.; Woźniak, K. Evolution of project management studies in the XXI century. Int. J. Manag. Proj. Bus. 2020, 13, 867–888.
39. Albert, M.; Balve, P.; Spang, K. Evaluation of project success: A structured literature review. Int. J. Manag. Proj. Bus. 2017, 10, 796–821.
40. Vukomanović, M.; Young, M.; Huynink, S. IPMA ICB 4.0—A global standard for project, programme and portfolio management competences. Int. J. Proj. Manag. 2016, 34, 1703–1705.
41. Carstensen, A.-K.; Bernhard, J. Design science research–a powerful tool for improving methods in engineering education research. Eur. J. Eng. Educ. 2019, 44, 85–102.
42. Dresch, A.; Lacerda, D.; Miguel, P. Uma análise distintiva entre o estudo de caso, a pesquisa-ação e a design science research. Rev. Bras. Gestão Negócios 2015, 17, 1116–1133.
43. Takeda, H.; Veerkamp, P.; Yoshikawa, H. Modeling design process. AI Mag. 1990, 11, 37.
44. Cronbach, L. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334.
45. Souza, A.C.; Alexandre, N.; Guirardello, E. Propriedades psicométricas na avaliação de instrumentos: Avaliação da fiabilidade e validade. Epidemiol. Serviços Saúde 2017, 26, 649–659.
46. Polit, D.; Beck, C. The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Res. Nurs. Health 2006, 29, 489–497.
47. Shrotryia, V.; Dhanda, U. Content validity of assessment instrument for employee engagement. Sage Open 2019, 9, 2158244018821751.
48. Wroe, E.; McBain, R.; Michaelis, A.; Dunbar, E.; Hirschhorn, L.; Cancedda, C. A novel scenario-based interview tool to evaluate nontechnical skills and competencies in global health delivery. J. Grad. Med. Educ. 2017, 9, 467–472.
49. Zarour, K.; Benmerzoug, D.; Guermouche, N.; Drira, K. A systematic literature review on BPMN extensions. Bus. Process Manag. J. 2019, 26, 1473–1503.
50. PMI. A Guide to the Project Management Body of Knowledge PMBOK® GUIDE, 7th ed.; Project Management Institute, Inc.: Newtown Township, PA, USA, 2021.
51. Jain, S.; Angural, V. Use of Cronbach’s alpha in dental research. Med. Res. Chron. 2017, 4, 285–291.
52. Marinho-Araujo, C.; Rabelo, M. Avaliação educacional: A abordagem por competências. Avaliação Rev. Avaliação Educ. Super. (Campinas) 2015, 20, 443–466.
Figure 1. ICB elements: Competences, indicators, and key measures.
Figure 2. Rubric Development Process.
Figure 3. BPMN Process Diagram for the definition of the assessment criteria for ICB competences.
Figure 4. Assessment Criteria (Indicators and Key measures).
Table 1. Steps for the development of rubrics [12].
1. Identification of the learning objectives and identification of qualities (criteria)
2. Identification of levels of performance
3. Development of separate descriptive scoring schemes
4. Obtaining feedback on the rubrics developed
5. Revision of rubrics
6. Testing the reliability and validity of the rubrics
7. Pilot testing of the rubrics
8. Using the results of the pilot test to improve the rubrics
Table 2. ICB Individual Project Management Competence—IPMA [3].
Practice Focused Competences (13): Project design; Requirements and objectives; Scope; Time; Organisation and information; Quality; Finance; Resources; Procurement; Plan and control; Risk and opportunity; Stakeholders; Change and transformation; Selection and balancing.
People Focused Competences (10): Self-reflection and self-management; Personal integrity and reliability; Personal communication; Relationships and engagement; Leadership; Teamwork; Conflict and crisis; Resourcefulness; Negotiation; Results orientation.
Perspective Focused Competences (5): Strategy; Governance, structures, processes; Compliance, standards, regulation; Power and interest; Culture and values.
Table 3. Scoring Scale for a Rubric.
Inadequate (Level 1): evaluation ≤ 1.0 points
Lower than expected (Level 2): 1 < evaluation ≤ 2.0 points
Satisfactory (Level 3): 2 < evaluation ≤ 3.0 points
Good (Level 4): 3 < evaluation ≤ 4.0 points
Excellent (Level 5): 4 < evaluation ≤ 5.0 points
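For readers who want to apply the Table 3 scale computationally, the following minimal Python sketch maps a numeric evaluation (for example, an averaged rubric score) to its performance level. The function name and the idea of feeding it an averaged score are illustrative assumptions, not part of the published instrument.

```python
# Minimal sketch, assuming evaluations are expressed on the 0-5 point scale of Table 3.
def performance_level(evaluation: float) -> tuple[int, str]:
    """Map a numeric evaluation to the Table 3 performance scale."""
    if not 0 <= evaluation <= 5:
        raise ValueError("evaluation must be between 0 and 5 points")
    scale = [
        (1.0, 1, "Inadequate"),
        (2.0, 2, "Lower than expected"),
        (3.0, 3, "Satisfactory"),
        (4.0, 4, "Good"),
        (5.0, 5, "Excellent"),
    ]
    for upper_bound, level, label in scale:
        if evaluation <= upper_bound:
            return level, label

print(performance_level(3.4))  # (4, 'Good'), since 3 < 3.4 <= 4.0
```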
Table 4. Indicators and key measures of the Leadership rubric according to the ICB-IPMA.
I1. Initiate actions and proactively offer help and advice: 1.1 Proposes or exerts actions; 1.2 Offers unrequested help or advice; 1.3 Thinks and acts with a future orientation (i.e., one step ahead); 1.4 Balances initiative and risk.
I2. Take ownership and show commitment: 2.1 Demonstrates ownership and commitment in behaviour, speech and attitudes; 2.2 Talks about the project in positive terms; 2.3 Rallies and generates enthusiasm for the project; 2.4 Sets up measures and performance indicators; 2.5 Looks for ways to improve the project processes; 2.6 Drives learning.
I3. Provide direction, coaching and mentoring to guide and improve the work of individuals and teams: 3.1 Provides direction for people and teams; 3.2 Coaches and mentors team members to improve their capabilities; 3.3 Establishes a vision and values and leads according to these principles; 3.4 Aligns individual objectives with common objectives and describes the way to achieve them.
I4. Exert appropriate power and influence over others to achieve the goals: 4.1 Uses various means of exerting influence and power; 4.2 Demonstrates timely use of influence and/or power; 4.3 Perceived by stakeholders as the leader of the project or team.
I5. Make, enforce and review decisions: 5.1 Deals with uncertainty; 5.2 Invites opinion and discussion before decision-making in a timely and appropriate fashion; 5.3 Explains the rationale for decisions; 5.4 Influences decisions of stakeholders by offering analyses and interpretations; 5.5 Communicates the decision and intent clearly; 5.6 Reviews decisions and changes decisions according to new facts; 5.7 Reflects on past situations to improve decision processes.
Table 5. An example of the two versions of the descriptive scoring schemes.
Version 1:
Inadequate (1): Proactivity, help and advice are not adequate, as the initiative to propose or carry out actions, including offering help and advice, is lacking. Does not show anticipatory thinking about situations. Initiatives are not balanced in terms of their pros and cons.
Lower than expected (2): Proactivity, help and advice are lower than expected in that there is little initiative to propose or carry out actions, including offering help and advice. Demonstrates poor anticipatory thinking about situations. Demonstrates difficulty in balancing initiatives, taking into account their pros and cons.
Satisfactory (3): Proactivity, help and advice are satisfactory in that initiatives are partially developed to propose or carry out actions, including offering help and advice. Demonstrates effort in anticipatory thinking of situations. Demonstrates balancing some initiatives, taking into account their pros and cons.
Good (4): Proactivity, help and advice are good, in that, overall, initiatives are shown to propose or carry out actions, including offering help and advice. Demonstrates some anticipatory thinking about situations. Demonstrates balancing initiatives and risks well, taking into account their pros and cons.
Excellent (5): Proactivity, help and advice are excellent as initiatives are shown to propose or take action, including offering help and advice. Demonstrates excellent anticipatory thinking of situations. Demonstrates exceptional balancing of initiatives and risks, taking into account their pros and cons.
Version 2:
Inadequate (1): The demonstration of this indicator and its measures is not adequate.
Lower than expected (2): The demonstration of this indicator and its measures is lower than expected.
Satisfactory (3): The demonstration of this indicator and its measures is satisfactory.
Good (4): The demonstration of this indicator and its measures is good.
Excellent (5): The demonstration of this indicator and its measures is excellent.
Table 6. Results of the Assessment Criteria of a Rubric.
Indicators | (−) | ( ) | (+)
I1. Initiate actions and proactively offer help and advice | | 1 | 4
I2. Take ownership and show commitment | | 2 | 3
I3. Provide direction, coaching and mentoring to guide and improve the work of individuals and teams | | 1 | 4
I4. Exert appropriate power and influence over others to achieve the goals | | 1 | 4
I5. Make, enforce and review decisions | | 1 | 4
Table 7. Content Validation Index.
Question | Professional 1 | Professional 2 | Professional 3 | Professional 4 | Professional 5 | n | I-CVI
1 | 4 | 5 | 5 | 5 | 4 | 5 | 1
2 | 5 | 5 | 4 | 5 | 5 | 5 | 1
3 | 5 | 5 | 4 | 5 | 5 | 5 | 1
4 | 4 | 5 | 3 | 5 | 4 | 4 | 0.8
5 | 4 | 3 | 3 | 5 | 4 | 3 | 0.6
6 | 4 | 4 | 3 | 5 | 5 | 4 | 0.8
7 | 4 | 5 | 3 | 4 | 4 | 4 | 0.8
8 | 4 | 4 | 3 | 4 | 5 | 4 | 0.8
S-CVI | | | | | | | 0.85
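As a cross-check of the Table 7 figures, the short Python sketch below recomputes each I-CVI and the overall S-CVI. It assumes, as the n column suggests, that ratings of 4 or 5 on the five-point scale count as agreement on content validity; this threshold is one plausible reading of the table rather than a statement of the authors' exact procedure.

```python
# Hypothetical recomputation of Table 7, assuming ratings >= 4 count as agreement.
ratings = {
    1: [4, 5, 5, 5, 4],
    2: [5, 5, 4, 5, 5],
    3: [5, 5, 4, 5, 5],
    4: [4, 5, 3, 5, 4],
    5: [4, 3, 3, 5, 4],
    6: [4, 4, 3, 5, 5],
    7: [4, 5, 3, 4, 4],
    8: [4, 4, 3, 4, 5],
}

# I-CVI per question: proportion of professionals in agreement.
i_cvi = {q: sum(r >= 4 for r in scores) / len(scores) for q, scores in ratings.items()}
# S-CVI (average method): mean of the I-CVI values.
s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)

print(i_cvi)      # questions 1-3 give 1.0, question 5 gives 0.6, the rest 0.8
print(s_cvi_ave)  # 0.85, matching the S-CVI reported in Table 7
```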
Table 8. Rubric for Assessment of Leadership Competences—Final Version.
Performance levels (applied to each indicator):
(1) The demonstration of this indicator and its measures is not adequate.
(2) The demonstration of this indicator and its measures is lower than expected.
(3) The demonstration of this indicator and its measures is satisfactory.
(4) The demonstration of this indicator and its measures is good.
(5) The demonstration of this indicator and its measures is excellent.
Indicators (each rated on levels 1–5):
I1—Initiate actions and proactively offer help and advice
I2—Take ownership and show commitment
I3—Provide direction, coaching and mentoring to guide and improve the work of individuals and teams
I4—Exert appropriate power and influence over others to achieve the goals
I5—Make, enforce and review decisions
Indicators and Key measures: I1—Initiate actions and proactively offer help and advice: 1.1 Proposes or exerts actions; 1.2 Offers unrequested help or advice; 1.3 Thinks and acts with a future orientation (i.e., one step ahead); 1.4 Balances initiative and risk. I2—Take ownership and show commitment: 2.1 Demonstrates ownership and commitment in behavior, speech, and attitudes; 2.2 Talks about the project in positive terms; 2.3 Rallies and generates enthusiasm for the project; 2.4 Sets up measures and performance indicators; 2.5 Looks for ways to improve the project processes; 2.6 Drives learning. I3—Provide direction, coaching, and mentoring to guide and improve the work of individuals and teams: 3.1 Provides direction for people and teams; 3.2 Coaches and mentors team members to improve their capabilities; 3.3 Establishes a vision and values and leads according to these principles; 3.4 Aligns individual objectives with common objectives and describes the way to achieve them. I4—Exert appropriate power and influence over others to achieve the goals: 4.1 Uses various means of exerting influence and power; 4.2 Demonstrates timely use of influence and/or power; 4.3 Perceived by stakeholders as the leader of the project or team. I5—Make, enforce, and review decisions: 5.1 Deals with uncertainty; 5.2 Invites opinion and discussion before decision-making in a timely and appropriate fashion; 5.3 Explains the rationale for decisions; 5.4 Influences decisions of stakeholders by offering analyses and interpretations; 5.5 Communicates the decision and intent clearly; 5.6 Reviews decisions and changes decisions according to new facts; 5.7 Reflects on past situations to improve decision processes.
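As an illustration of how the final rubric could be used in practice, the sketch below records one 1 to 5 score per Leadership indicator and averages them, so the result can be read against the Table 3 scale. The data structure, function name, and the averaging rule are illustrative assumptions, not an aggregation method prescribed in the paper.

```python
# Illustrative only: recording scores against the five Leadership indicators of the
# final rubric (Table 8), keyed I1-I5; the averaging rule is an assumption.
INDICATORS = ["I1", "I2", "I3", "I4", "I5"]  # see Table 8 for the full indicator names

def overall_score(scores: dict[str, int]) -> float:
    """Average the 1-5 performance level assigned to each indicator."""
    if set(scores) != set(INDICATORS):
        raise ValueError("exactly one score is required per indicator I1-I5")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each indicator must be scored on the 1-5 scale")
    return sum(scores.values()) / len(scores)

# Example: scores of 4, 3, 4, 3, 5 average to 3.8, which falls in the
# 'Good' band of the Table 3 scale (3 < evaluation <= 4.0).
print(overall_score({"I1": 4, "I2": 3, "I3": 4, "I4": 3, "I5": 5}))  # 3.8
```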