Article

Application of Simulation to Selecting Project Strategy for Autonomous Research Projects at Public Universities

1 Department of Management Systems and Organisation Development, Wroclaw University of Science and Technology, Wrocław 50-370, Poland
2 Faculty of Management, General Tadeusz Kościuszko Military University of Land Forces, Wrocław 51-147, Poland
* Author to whom correspondence should be addressed.
Adm. Sci. 2020, 10(1), 18; https://doi.org/10.3390/admsci10010018
Submission received: 22 December 2019 / Revised: 12 March 2020 / Accepted: 17 March 2020 / Published: 23 March 2020

Abstract
The definition of the success of research projects implemented at public universities is far from unequivocal. The success of a research project has to be in line with both the public university's and the funding institution's policies, as well as with the personal objectives and ambitions of the researchers. Once the success definition for the research project has been determined, the strategy of implementation must be defined. The omission of this step may result in wasted effort (public money, time, enthusiasm, etc.) directed toward objectives which fit neither the public university's nor the funding agency's policies, nor the researchers' personal objectives. This paper discusses this problem and proposes a model in which simulation is used to choose the project strategy that best fits the selected research project success definition in the context of the policy of a given public university, the preferences of its researchers, and the policy of the funding agencies. The model is illustrated by means of a case study: a real-world research project implemented at a public university in a European country where the policy of subsidizing public universities has been changing both frequently and in a highly unpredictable manner. It is shown how various project strategies can lead to multiple project outcomes, which are then evaluated in different ways depending on the point of view of the public university, the researchers, the funding agencies, and/or society. The main conclusion is that applying simulation to a research project before it starts may contribute significantly to the optimization of time, effort, and resource usage, with the objective of maximizing project success in the context of public university policy and the objectives of the researchers.

1. Introduction

Expenditures on research projects (the definition of this notion is discussed later) are huge. For example, higher education institutions in the USA spent $71.8 billion on such projects in 2016 (Higher Education Research and Development Survey (HERD) 2018). This number does not include research expenditures in industry, but it is especially important because it largely represents public expenditures paid by society. Society would naturally like to spend this money efficiently in order to achieve the highest value of research results per monetary unit spent, especially at public universities subsidized with public money. The literature provides numerous accounts that raise doubts about the effects and usefulness of research projects for society (e.g., Klaus-Rosińska 2019; Betta et al. 2017).
However, society is not the only actor with expectations against which research project teams are evaluated. Research project teams are accountable to their superiors; in many countries (where a centralized higher education evaluation system exists) to governmental units; to other (often competing) researchers; and, last but not least, to themselves, as each researcher has their own personal goals and ambitions (Kuchta et al. 2017).
Generally speaking (this subject is developed later in the paper), the notion of research project success is very ambiguous. Assessing whether a research project is successful is a multicriteria problem in which neither the criteria nor the method of linking or aggregating them is set in a universal way. The environmental setting of the project, including whether it is implemented at a public university and how it is financed, plays an important role here.
The way a research project is defined, planned, managed, and controlled can have a strong influence on its success. This statement, seemingly a truism, is proved later in the present paper. To direct a researcher’s efforts (in terms of project definition, planning, managing, and controlling) toward project success, the research project manager must be aware of how he or she, the parent organization (e.g., a public university), and the funding institution define project success; where he or she is heading; and which “successes” are possible (or feasible) in the context of project implementation in its specific environment, given the resources and possibilities the project has at its disposal.
The objective of this paper is to propose a model and a tool for choosing a strategy (this term is defined later on) for research project implementation, one that ensures the achievement of the project success criteria selected by the decision-maker (taking into account his or her personal goals and the policies of the parent organization, especially a public university, and of the funding agency), which, aggregated in a way also selected by the decision-maker, satisfy the decision-maker to the highest degree, given the circumstances and the environment in which the project is implemented. Here, we consider a "goal programming" approach to aggregating project success criteria, but other approaches (like Pareto optimality) could be introduced without any significant changes to the model. Particular attention is paid to the situation of research projects implemented at public universities.
The tool used is a simulation model, so that the decision-maker is able to simulate various feasible project strategies and to evaluate the success of the project according to the selected success criteria aggregated in a selected way. Among all of the feasible strategies, he or she is able to choose the one that leads to the most desirable success (among the feasible successes), defined by the aggregated values of the success criteria, and he or she might also take into account other strategy features like cost, risk, or difficulties linked to their implementation.
The main novelty of this approach lies in the fact that, to our knowledge, no scientific results have been published concerning the problem of selecting a project strategy with respect to a given understanding of project success for research projects.
The model and the tool are illustrated by means of a case study: a real-world research project implemented at one of the largest Polish universities, where one of the authors held the role of project manager. The project has already been terminated, and thus the actual scenario is known. However, the project run is simulated as if the project had not yet started, and the considered scenarios are based on the scenarios that were actually feasible for the project at the time it was planned. The parameters for the simulation result from an estimation process based on the actual values for this project, as well as on similar research projects in which the authors participated.
The outline of the paper is as follows. In Section 2, research projects are defined; their definitions for the need of this paper are determined; the problems of research project success and its vagueness are discussed; the relationship between research project success and research project stakeholders is established; and the research project environment, with a special emphasis placed on public universities, is discussed. The types of research project stakeholders are also presented, including governmental and other public units. In Section 3, the term “project strategy” as it is understood in this paper is presented—for projects in general, not just for research projects. In Section 4, the conceptual model for selecting a research project strategy in a given environment and for a given understanding of project success is proposed, with special emphasis placed on research projects implemented at public universities. In Section 5, the state-of-the-art usage of simulation in project management is presented. In this section, projects in general—and not only research projects—are considered, as (to our knowledge) simulation has hardly been applied to research projects thus far. In Section 6, the simulation method and tool used in the paper are presented and a simulation model is applied to the case study, a real-world research project implemented at a Polish public university, described in the same section. This paper concludes with a discussion of the results and further research perspectives.

2. Research Projects

2.1. Research Project Definition and Features

As defined by the Project Management Institute, a project, in general, is “a temporary endeavor undertaken to create a unique product, service, or result” (Kerzner 2005). A project is a temporary organization within its parent organization, where the term organization is understood as “a series of interlocking routings, habituated action patterns that bring the same people around the same activities in the same time and places” (Jordan et al. 2005).
A research project, in turn, is "a temporary set of activities … to fulfil scientific discovery and production of new knowledge or to achieve certain system tools, to meet the expectations of the business environment (product or services); it includes any scientific research in science, technology, and systems at any level of the organizational levels" (Forozandeh et al. 2018). In (Jordan et al. 2005), we can find various classifications of research projects, among others, into two basic categories:
  • Category A, narrow scope of focus: small, autonomous projects;
  • Category B, broad scope of focus: large, coordinated programs.
Here, we do not consider category B: projects realized in consortia. We focus on category A: small research projects implemented within a single parent organization. From now on, the term “research projects” is to be tacitly understood as research projects from category A; thus, as small, autonomous projects implemented by one institution, assumed to be a public university. The term “autonomous” means that the project can define its own strategy, taking into account the strategy of the parent organization (i.e., the public university) and modifying or expanding it according to the objectives and possibilities defined by the project team. This term is discussed again in Section 3. The term “small” is used in this paper rather intuitively, as a fuzzy synonym of “one institution project.”
Research projects are realized by teams composed of researchers and, possibly, supporting staff. Researchers cooperate in networks of various structures (Clemente-Gallardo et al. 2019), which are largely independent of the research institution in which they are placed. An important feature of research project teams is the mentality of the researchers: given the way they are evaluated (to a large extent based on bibliometric indicators), they compete with their colleagues (Betta et al. 2017; Garcia and Sanz-Menéndez 2004) and are often more focused on their personal goals than on those of the project as a whole, which often makes teamwork and project implementation difficult (Ghazinejad et al. 2018). This also means that research projects are always, to a certain degree, autonomous and thus have to be able to define their success criteria and strategy while taking into account the policy of the parent organization, e.g., the public university. This means, for example, that if the public university is evaluated (and subsidized) by the government according to a certain algorithm, the research projects implemented at the university cannot determine their objectives while completely ignoring this algorithm, but they can add objectives of their own, chosen by the researchers. Examples of this are given in this paper.

2.2. Research Project Success

Project success (not just for research projects, but for projects in general) can be, and is, defined in the literature in many different ways. A recent summary of research on project success can be found in (Martens et al. 2018). Project management success should be distinguished from project success: usually, it is suggested that project management is successful if the project meets the specification (scope), cost (budget), and time (deadline) requirements, the so-called Iron Triangle Model. On the other hand, project success is related to the goals and benefits provided by the project as a whole (de Wit 1988). In (Söderlund 2008), we can find various dimensions of project success, including preparation for the future, which emphasizes the need for a longer horizon in understanding project success, and impact on customers and on the team, which refers to special cases of relating project success to the satisfaction of various project stakeholders (Davis 2014). A project stakeholder is defined as "a person or a group of persons who are influenced by or able to influence the project" (Srinivasan and Dhivya 2019). Stakeholders fall into one of two categories: internal and external. Internal stakeholders are directly involved in the decision-making process of the organization in which the project is located (e.g., customers, owners, suppliers, or employees), and external stakeholders are people who are affected by the project's activities (e.g., the general public, local community, or local authorities) (Srinivasan and Dhivya 2019).
For research projects, the Iron Triangle Model is important, especially in reporting, accounting for, and evaluating projects, which is often performed (especially in the case of public projects) according to the requirements of the governmental units that distribute public research funds. Thus, quite often, research project managers are forced to concentrate on the Iron Triangle criteria. However, these three criteria are not sufficient. Research activities in general, and research projects in consequence, are very difficult to confine to a fixed, precise framework of quantitative, universal criteria. For researchers, as well as for society, formal acceptance of a project report by a public unit is not sufficient. A recent thorough study of research projects in two European countries (Klaus-Rosińska 2019) clearly shows, through numerous opinions of research project managers, how equivocal and fuzzy the notion of research success is. Its understanding depends strongly on the stakeholders (discussed more deeply in the next section). A selection of various success criteria for research projects is given in the following list (Eilat et al. 2008; Yuan and Huang 2002; Revilla et al. 2003; Despotis et al. 2015):
Quantitative criteria:
  • the discounted cash flow generated by the project;
  • the number of team members trained in project management, thanks to the project realization;
  • the probability of technological and commercial success of the project product;
  • new scientists gained by the organization, thanks to the project;
  • the total income generated by the project;
  • the number of patents and copyrights gained, thanks to the project;
  • the number of papers published, thanks to the project (possibly weighted by journal classification);
  • the number of citations generated thanks to the project (possibly weighted by journal classification);
  • the number of dissertations written, thanks to the project;
  • the number of reports issued, thanks to the project;
  • the number of technology innovations, thanks to the project;
  • the number of seminars organized, thanks to the project;
  • the number of technology transfers resulting from the project.
Qualitative criteria:
  • the performance improvement achieved, thanks to the project;
  • customer satisfaction with the product of the project;
  • the congruence with the strategy of the organization realizing the project;
  • synergy with other projects realized by the organization;
  • project team satisfaction;
  • the technical gap size covered by the project product;
  • the newness of the technology used;
  • the complexity of market activities needed to commercialize the project product.
Some of the above success criteria are quantitative, and others are qualitative (often subjective). Some can be measured at the moment of project termination, and some can only be evaluated a certain time after project termination. Some are important for the project team, some for the organization (e.g., a public university, depending on both its own and governmental policy) where the project is implemented, and some for society in general or for specific sections of society (or even for humanity). To sum up, the problem of evaluating the success of a research project is complex; it involves many highly diversified criteria which are perceived by different actors in different ways.

2.3. Research Project Environment

Project environment (Artto et al. 2008) refers to “the world outside the project boundaries with which a project must continuously interact.” It comprises the parent organization and other project stakeholders. A list of potential research project stakeholders can be deduced from the literature, e.g., (Skorupka et al. 2016; Tarantola et al. 2007):
  • the members of the project team;
  • the project manager;
  • the accounting department of the organization;
  • the project management department of the organization;
  • the financial manager of the organization;
  • the scientific manager of the department/university;
  • organization(s) and sponsors providing funding for the study through contracts, grants, or donations;
  • volunteers or respondents who have consented to participate in experiments or questionnaires;
  • potential beneficiaries of the results (e.g., hospitals, patients in health-related projects, etc.).
This list is by no means exhaustive. Stakeholder identification and analysis methods must be used for each individual project (e.g., Eskerod and Larsen 2018). As mentioned in Section 3, a thorough understanding of research project stakeholders is vital for defining, understanding, and controlling research project success. For a research project implemented at a public university, the most important stakeholders are the government or its respective units that evaluate projects and universities, the university managers at various levels, the research team, the auxiliary departments of the university, the bibliometric services, the funding agencies, etc. These stakeholders have a strong influence on the financial means that the public university has at its disposal.

3. Project Strategy

In this section, apart from in the very last paragraph, we consider projects in general, not just research projects. Project strategy is a notion that is defined in the literature in various ways. Before we move on to the various definitions of this term and the selection of a definition for the sake of this paper, let us discuss three basic types of relations between project strategy and the organization in which the project is implemented (Artto et al. 2008).
  • Projects can be subordinated to the parent organization. Here, project strategy is derived from the more significant business strategies of the parent organization and usually consists of a static plan and predefined goals.
  • Projects can be autonomous organizations connected more loosely or tightly to the parent organization. In this case, projects develop their own strategies and plans, largely independent of the surrounding organizational context.
  • Projects may also be organizations that are not subjected to any clearly defined governance or authority setting in relation to an organization. This category refers mostly to large projects implemented in consortia.
It seems that research and development projects, apart from the large ones implemented in consortia, belong to the second category. This is true above all for research projects implemented at single universities and research institutions. The independence of researchers mentioned in Section 2.1, and the fact that universities and research institutions do not necessarily concentrate on strict business goals, allow researchers to develop their strategy more or less independently of the organization they work for, while taking into account the most important objectives of that organization, such as bibliometric indicators or the number and size (measured by the budgets) of research projects, as well as other data used in the algorithms evaluating public universities.
In (Aaron et al. 2011), we find the following definition of project strategy:
Definition 1.
Project strategy is the project perspective (i.e., the answer to the question “why”—business background, business objective, strategic concept), the project position (i.e., the answer to the question “what”—product definition, competitive advantage, success criteria), and the project guidelines (i.e., the answer to the question “how”—project definition, including project plan, and strategic focus).
In (Artto et al. 2008), we find another definition of project strategy, which we adopt in this paper:
Definition 2.
Project strategy is a direction in a project that contributes to the success of the project in its environment.
One term used in Definition 2 is further defined by the authors of the same paper:
Definition 3.
Directions in a project are explicit elements of the project strategy.
Definition 2 is more focused on the "how" found in Definition 1. The paper (Artto et al. 2008) enumerates the following elements of project strategy: goals, plans, guidelines, means, methods, tools, governance systems, reward and penalty schemes, measurement, and other controlling devices. In (Kozarkiewicz 2016), we can find the following elements of project strategy (this list was elaborated based on research among practitioners in research project management in one European country):
  • defining the project’s goals, performance objectives, or performance targets;
  • defining the success of the project;
  • the scope of the project’s changes, including the principles of permitting the initiation of the whole project or its consecutive stages;
  • fundamental decisions about the scope and the quality, including the decisions as to technologies used or suppliers selected;
  • the project’s implementation scenarios.
Research projects, as they belong to the second category defined at the beginning of this section, are largely free to choose their strategy and thus its elements. The project success definition can also be selected in various ways (as shown in Section 2.2). However, the project environment cannot be ignored, and it is necessary to identify and analyze it (see Section 2.3). For projects implemented at public universities, the university, its departments, and the respective governmental units are crucial elements of this environment. The following section treats the question of how to choose the elements of project strategy in a given environment for a selected research project success definition using simulation.

4. Conceptual Model of Choosing the Strategy of a Research Project for a Given Project Success Definition in a Given Project Environment

In this section, we propose a general simulation model for choosing elements of research project strategy for a given project success definition and for a given project environment. The model is general; it is thus applicable to any research project, not only to those implemented at public universities.
Each element $S$ of the set of strategies $\Xi$ is itself a set of selected variants of strategy elements. Let us thus define the set of strategy elements as $SE = \{E_i\}_{i=1}^{N}$, and for each $E_i$, $i = 1, \ldots, N$, let $VE_i = \{V_j^i\}_{j=1}^{N_i}$ be the set of possible variants of the given strategy element. A strategy is defined as $S = \{V_{j_S}^i\}_{i=1}^{N}$, where $j_S \in \{1, \ldots, N_i\}$ refers to the variant of the $i$-th element selected for the strategy $S$. The process "temporary selection of strategy" in Figure 1 stands for the selection of a strategy $S$ for trial. If, after the simulation, the outcome of the success measurement is acceptable, it is assumed that the strategy has been definitively selected. If not, another strategy $S$ is taken for trial, and so on, until a strategy has been selected or all of the $\prod_{i=1}^{N} N_i$ potential strategies $S$ have been tried out without reaching acceptance. In the latter case, the project success measures must be changed.
The project success measurement method is selected based on the considerations presented in Section 2.2 for a given environment, defined as a set of stakeholders with their preferences, attitudes, and weights. Project success measurement is thus based on a set of success criteria $SC = \{C_k\}_{k=1}^{M}$, and $SC^* = \{C_k^*\}_{k=1}^{M}$ is the set of values the criteria take on for a given simulation run. For each criterion $C_k$, $k = 1, \ldots, M$, a minimal goal value $C_k^{min}$ is selected, in the sense of goal programming (here, we assume minimum-type goal values, but this assumption can be relaxed without any consequences). The only assumption for these values is $C_k^{min} \geq 0$; thus, some elements of $SC$ can be considered unimportant or irrelevant. The "project success measurement outcome" in Figure 1 is accepted if the value

$$\sum_{k=1}^{M} \max(C_k^{min} - C_k^*, 0)$$

is acceptable (it should be minimized), where the terms $\max(C_k^{min} - C_k^*, 0)$, $k = 1, \ldots, M$, are the non-desired (downward) deviations from the goal values. It must be underlined that the above model is largely simplified; it disregards, for example, the problem of cost. Various strategies require various budgets. All other important strategy features, like cost, should be taken into account in a complete model. Here, it is simply assumed that $\Xi$ is composed of feasible strategies and that project success is the basic decision criterion.
In Section 6, the way in which the model can be used in the context of a public university is shown. This context influences the set of possible (feasible) strategies $\Xi$ and the set of potential project success criteria $SC$.
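To make the selection loop above concrete, the following minimal Python sketch enumerates all $\prod_{i=1}^{N} N_i$ strategies and evaluates the goal-programming objective for each. It is an illustration only: the function names are ours, the acceptance threshold is an assumption, and `simulate` stands in for a simulation model such as the one described in Section 6.

```python
from itertools import product
from typing import Callable, List, Sequence, Tuple

def goal_deviation(goals: Sequence[float], values: Sequence[float]) -> float:
    """Sum of the non-desired downward deviations max(C_k_min - C_k_star, 0)."""
    return sum(max(g - v, 0.0) for g, v in zip(goals, values))

def select_strategy(
    variants: List[List[str]],          # variants[i]: the possible variants of element E_i
    goals: Sequence[float],             # the goal values C_k_min
    simulate: Callable[[Tuple[str, ...]], Sequence[float]],  # returns the values C_k_star
    threshold: float = 0.0,             # acceptance level for the objective
) -> Tuple[Tuple[str, ...], float]:
    """Try every strategy (the Cartesian product of the variant sets); return the
    first acceptable one, or the best one found if none is acceptable."""
    best, best_obj = None, float("inf")
    for strategy in product(*variants):
        obj = goal_deviation(goals, simulate(strategy))
        if obj <= threshold:
            return strategy, obj        # acceptable strategy: stop searching
        if obj < best_obj:
            best, best_obj = strategy, obj
    return best, best_obj               # no acceptable strategy: goals may need revision
```

If even the best strategy leaves a positive deviation, the success measures themselves have to be revised, exactly as in the selection process described above.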

5. Use of Simulation in Project Management

The literature contains several works concerning the application of simulation to project management. Most of them are linked to the problems of project scheduling (with or without resource limitations), project delays, project budget, and cost (Morales and Anderson 2013; Ghomi and Ashjari 2002; Chou 2011; Wang et al. 2019; Rasmussen et al. 2017; Menipaz and Ben-Yair 2002; Kremljak et al. 2014; Kurihara and Nishiuchi 2002; Ourdev et al. 2007; Golenko-Ginzburg et al. 2003; Dale et al. 2001; Lai et al. 2008; Song et al. 2018), sometimes combined with risk simulation (Uzzafer 2013) or quality (Fu 2017).
Simulation has previously been applied to project risk management (Fang and Marle 2012). The simulation-based model proposed in that study makes it possible to suggest and test risk mitigation actions and then to support project managers in making decisions regarding risk response actions.
Several papers are devoted specifically to simulation applied to IT project management (e.g., Morales and Anderson 2013). In (Kouskouras and Georgiou 2007), a model simulating a core part of a software project process is described, which enables the estimation of several project development details such as delivery times and quality metrics.
There have also been attempts to simulate project management in all its complexity. For example, in (Cardona-Meza and Olivar-Tost 2017), 49 subprocesses of project management, identified according to PMBOK, form a network modeling the management process of a project, and various interrelations between them are simulated. In (Wang et al. 2017), simulation is used in the analysis of the value realized from the project.
Furthermore, there has been an attempt to apply project simulation to managing project scope and the functionalities of the project product (Artto et al. 2001). In (Sabeghi et al. 2015), we can find an application of simulation to project control; the best timing for control points is determined by means of simulation. In (Kennedy et al. 2011), the dependence between project complexity and team communication and performance is examined, also by means of simulation.
As far as research projects in the broad sense of the term are concerned (not confined to the definition from Section 2.1, but meaning projects aimed at delivering new knowledge or at applying this new knowledge (Kuchta et al. 2017)), there exists an application of simulation to new product development projects (Iluz and Shtub 2015). Various scenarios of scope, time estimates, resource estimates, cost estimates, quality parameters, and risk management plans are considered and refined until a scenario (or a project strategy) is selected. To our knowledge, no other applications of simulation to research projects have been published in the scientific literature thus far.

6. Case Study

6.1. Methods and Tools Used for Simulation in the Case Study

In the simulation in this paper, we used System Dynamics (SD), a methodology and mathematical modeling technique for presenting, understanding, and analyzing complex issues and problems. The founder of SD is Jay W. Forrester, who initially used the name Industrial Dynamics (Forrester 1968). The methodological basis introduced by Forrester was then utilized in numerous domains, including population, agriculture, ecology, economics, urban/social issues, and management problems. The broadening range of applications was reflected in the late 1960s in the name change to SD; selected important contributions to SD are (Senge 1997; Coyle 1998; Meadows 2009; Sterman 2018).
SD models capture the simultaneity in systems by updating all variables in small time increments with positive and negative feedbacks and time delays structuring the interactions and control. The basis of an SD model is a structure composed of several elements (see Figures in the subsequent part of the paper):
  • streams (flows)—which indicate movements of objects, e.g., materials, orders, staff, projects, activities, etc. (marked in the figures with double-line arrows);
  • levels—places in which the inflow is compared with the outflow and if the former exceeds the latter, accumulation of objects takes place, e.g., inventory level, staff available for employment, activities still to do, etc. (marked in the figures with text strings in rectangles);
  • decision points—regulating the flows as a function of information about the system state (marked in the figures as small triangles);
  • variables—storing the values of parameters and auxiliary variables, e.g., the duration of a project, the definition of the scope of work, etc. (marked in the figures as text strings without any framing or brackets);
  • input variables—variables whose values are imported from other simulation models, here treated as parameters (marked in the figures as text strings in triangular brackets);
  • information flow direction (indicated in the figures by blue single-line arrows).
The behavior of the system is a consequence of its structure, which means that a given result is the consequence of several causes, not a single one. Also, various system behaviors might lead to the same result. A broader introduction to SD is presented in (Morecroft 2015) and (Martín García 2019).
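To illustrate the mechanics that the SD models below rely on, consider the following minimal numerical sketch. All names and parameter values here are illustrative assumptions, not parameters of our case study: a level is updated in small time increments by its inflow and outflow, and the flow rate itself depends on the level, forming a feedback loop.

```python
DT = 0.25           # time increment (weeks): SD advances all variables in small steps
T_END = 52.0        # simulated horizon: one project year

work_to_do = 60.0   # level: interviews still to be conducted (illustrative)
work_done = 0.0     # level: interviews completed
capacity = 1.5      # decision-point parameter: interviews per week the team can handle

t = 0.0
while t < T_END:
    # The flow is regulated at a decision point: its rate depends on the remaining
    # backlog, which is the feedback from a level to a flow.
    rate = min(capacity, work_to_do / DT)
    work_to_do -= rate * DT   # outflow from the backlog level
    work_done += rate * DT    # inflow to the completed-work level
    t += DT

print(f"Work Done after {T_END:.0f} weeks: {work_done:.1f}")
```

The Vensim models used below follow exactly this logic, only with richer structures of levels, flows, delays, and feedback.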
Project runs are complex structures with flows of objects and information, decision points, and variables that dynamically change their values over time; these changes produce certain intermediate results and project outcomes that define and determine project success or failure. That is why SD seems appropriate for simulating a project course for various needs. Indeed, SD has been applied to project management simulation (Majtán et al. 2014; Morales and Anderson 2013; Wang et al. 2017), but not in the context of research projects.
The use of SD in practice is supported by software with a user-friendly graphical interface. Here, we used the Vensim application, specifically its free version Vensim PLE 7.3.5 (Ventana Systems, Inc., 60 Jacob Gates Rd, Harvard, MA 01451, United States).

6.2. Description of the Case Study Project

The case study project was implemented in a Polish public university. To understand the context, it has to be underlined that in Poland, research activities and researchers are evaluated basically on so-called “points” assigned to journals included in a ministerial list. There are two big problems linked to the system that functions in Poland:
  • firstly, the ministerial list and the points assigned to journals are subject to frequent changes. This means that, while submitting a paper to a journal, the researchers do not know how many points will be assigned to the publication if the paper is accepted;
  • secondly, the algorithm determining the exact evaluation figure is very complicated and unstable. At present, each researcher has to fill four "slots," which correspond roughly, but not exactly, to one-author publication equivalents, and only each researcher's four best-scored slots are counted. However, at the moment of preparing this paper, the law on public universities was being modified in an as yet unknown way.
To sum up, at public universities in Poland, the points assigned to journals are extremely important for the evaluation of individual researchers and universities; however, the number of publications also counts, as it is impossible to predict which publications will be counted in the future and with how many points.
The project in question (described in more detail in (Betta et al. 2017; Klaus-Rosińska 2019)) was selected in a call for research projects announced by a governmental unit in the discipline of management. Its subject was research projects themselves, and its goal was to identify research project success factors, defined as "those characteristics, conditions, or variables that, when properly sustained, maintained, or managed, can have a significant impact on project success" (Moohebat et al. 2010). The project was defined and described in 2008 and was executed over one year at the turn of 2010 and 2011 by a team of seven researchers from University X. The project manager was one of the authors of the present paper. At the time of defining the project, we identified a research gap consisting of an incomplete and overly general identification of success factors for research projects. The methods to be used (defined at the stage of project definition) comprised questionnaires, interviews, and workshops. However, after the project's completion, we now know that the questionnaires were not very efficient (they were often filled in rapidly and superficially, without the necessary depth of reflection), and the workshops were not used because no participants were found (all of the potential participants refused, citing lack of time as the most important factor). Thus, only in-depth, semi-open interviews were used to identify the success factors.
The interviews were conducted with research project managers, and each interview was a case study of one research project. The interviewees were asked about the criteria of project success they used; about the degree to which “their” project fulfilled the criteria; and about which factors—according to them—influenced that situation. Additionally, they were also asked more general questions (not necessarily referring to the case study) about their opinion on success factors in research projects generally.
The most important success criteria of the project (in the context described above) were the number of points for publications and the number of publications accepted by scientific journals, both referring to the period of project realization. The team tried to achieve high values of these criteria by "producing" a relatively high number of publications and sending them first to journals with a high number of points. In the case of rejection, and if there was still time, the papers were submitted to journals with lower numbers of points. In some cases, the papers might have undergone substantial corrections and been resubmitted to the same journal, but because of the limited time and the long waiting times for reviews, the corrected papers were instead sent to journals with a lower number of points.
Apart from the summarizing papers, whose preparation was planned for the very end of the project, the aforementioned success criteria made the team write a paper after each sequence of interviews. These papers were case studies based on the projects that the interviews of the given sequence referred to; only such papers had a chance of being accepted within the project duration. In the context of the project in question, the term "paper" always refers to these case-study-type papers based on a sequence of interviews, as distinguished from the summarizing papers, which are always named as such.
Apart from the above two success criteria, resulting from the expectations of the two most important stakeholders (the government and the university), the project manager and team had another criterion, taking into account the third important member of the project environment: society or, in a narrower sense, the community of researchers, who expected a good piece of research on research project success factors, something which might not be attainable within the project duration. A good piece of research meant, in our case, well-conducted interviews whose value would become apparent in the future (possibly in the summarizing papers), potentially long after the project terminated; in fact, this turned out to be true. The interviews leading to summarizing papers might, in the future, lead to papers published in journals with a high number of points. Thus, we also had in mind the success criterion "number of well-conducted interviews," which is a measure of another success criterion, i.e., "amount of acquired knowledge about research project management." Based on the number of available, potentially high-quality interviewees, we estimated the maximum number of interviews that could be conducted, which turned out to be 60. Thus, we set the goal at 60 high-quality interviews.
As regards the other criteria (the number of points and the number of papers), the goal values can be set arbitrarily by the decision-maker. The objective was to maximize these values (in the sense of goal programming), knowing that the number of papers submitted for publication during project realization was limited by the number of interviews divided by the length of the sequence of interviews required to write a paper. In the considered case, the average length of this sequence was three, so a maximum of 20 papers could be produced.
Let us underline that the papers were produced once a sequence of interviews was finished, usually without controlling the interview quality (apart from extreme, striking cases). Insufficient interview quality was discovered only later, usually in the process of paper preparation or, even more frequently, during paper reviews.
Using the notation from above, for the case study project we had:
  • Project total length: 1 year.
  • The length of the interview sequence (without taking interview quality into account) necessary for a paper to be produced: 3.
  • The maximal number of potential high-quality interviewees: 60.
  • The set of journals to which the papers were submitted (with waiting times for reviews being random variables whose distributions are estimated based on experience):
    • 100-point journals
    • 20-point journals
    • 5-point journals
  • The set of potential project team member types:
    • BRs: basic researchers (some years of experience after the PhD, a fairly high number of points for publications and/or citation index);
    • ERs: experienced researchers (full professors, a high number of points for publications and/or citation index);
    • URs: unexperienced researchers (beginners in research, with a rudimentary number of points for publications).
It is assumed that ERs are the most experienced and thus the most likely to quickly produce high-quality interviews, but they have numerous additional activities (reviewing journal and conference papers, reviewing applications for degrees, giving conference keynote speeches, working with PhD students and young researchers, etc.) and are more likely to be absent, thus failing to conduct an interview on a fixed date. At the other extreme, we have the URs, who would probably need a longer time to prepare and conduct an interview and would make more mistakes, but who would be more readily available. In the middle fall the BRs, who form an in-between category.
Using the notation from Section 4, we have $SE = \{E_1, E_2\}$, where $E_1$ is "selecting the members of the project team" and $E_2$ is "selecting the policy of paper submissions." The two elements of the project strategy have the following potential variants:
  • $VE_1 = \{V_j^1\}_{j=1}^{4}$, where $V_1^1$ refers to a project team composed of four BRs; $V_2^1$ to a project team composed of four URs; $V_3^1$ to a project team composed of four ERs; and $V_4^1$ to a project team composed of four BRs supported by two URs and two ERs.
  • $VE_2 = \{V_j^2\}_{j=1}^{1}$, where $V_1^2$ means that publications are first submitted to journals guaranteeing a higher number of points (here, 100); in the case of rejection, they are submitted, possibly after some corrections, to journals with a lower number of points. We assume a one-element set here for the sake of simplicity.
$SC$ is assumed to be equal to $\{C_k\}_{k=1}^{3}$, where:
  • $C_1$: the number of completed high-quality interviews (according to the information available within the project duration); the goal value $C_1^{min}$ is 60.
  • $C_2$: the number of papers accepted and published within the project duration; various goal values $C_2^{min}$ are considered.
  • $C_3$: the number of points for publications generated within the project duration; various goal values $C_3^{min}$ are considered.
We utilized the goal programming approach in its simplest form (disregarding, for simplicity, such problems as the lack of commensurability of the goals) for the evaluation of the multicriteria problem of project success with goal values of the maximum type. Thus, the following objective function should be minimized:

$$\sum_{k=1}^{3} \max(C_k^{min} - C_k^*, 0)$$
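For the reader who wishes to experiment, this configuration can be encoded directly. The snippet below is our illustrative encoding; the dictionary keys are ours, and the example goal values (60 interviews, 20 papers, 600 points) are those of success definition I discussed in Section 6.3.3.

```python
# Strategy elements and their variants (Section 6.2 notation)
VE1 = ["4xBR", "4xUR", "4xER", "4xBR+2xUR+2xER"]  # E1: project team composition
VE2 = ["high-points-first"]                        # E2: paper submission policy

# Goal values C_k_min for one example success definition
GOALS = {"interviews": 60, "papers": 20, "points": 600}

def objective(outcome: dict) -> float:
    """Goal-programming objective: the sum of downward deviations from the goals."""
    return sum(max(GOALS[k] - outcome[k], 0) for k in GOALS)

# Outcome of the strategy {V_4^1, V_1^2} analyzed in Section 6.3:
print(objective({"interviews": 60, "papers": 15, "points": 560}))  # 0 + 5 + 40 = 45
```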

6.3. Simulation Model and Experiments—Selecting Research Project Strategy

For the experiment presented in this section, the research project described in Section 6.2 was used. Of course, as mentioned previously, the project has already been terminated, and as it was conducted according to one scenario, other scenarios cannot possibly be tried out in reality. Thus, the project, in a simplified form, is used here merely as a basis for simplified experiments, whose goal is to show the potential usefulness of simulation for selecting research project strategy according to Definition 2.
First, we concentrate on the success criterion $C_1$ and the strategy element $E_1$.

6.3.1. Selection of Project Team Members and Its Influence on Success Criterion $C_1$

We used the simulation model presented in Figure 2.
In the model from Figure 2, we see the Initial Project Definition, which contains, among other things, information about the project duration, the goal number of interviews, and the composition of the team (strategy element $E_1$). The choice of the team implies information about Weighted Absence, i.e., the number of times team members are unexpectedly unavailable to conduct an interview at a scheduled time. This parameter determines the Rework Gen(eration) Rate and creates the Level of Rework, which stands for the non-conducted interviews that have to be scheduled for a later date. The absences are discovered by the project manager after a period of time called Time to Detect Absence, and if there is still time (i.e., if the project has not yet terminated), the respective interviews are rescheduled. Absences that have not been made up for (i.e., the respective interviews have not been conducted at a later date) become Remaining Rework.
The Level of Losses is the cumulated number of interviews that have not been performed at the required level of quality and have been identified as such. The cumulated number of low-quality interviews whose low quality has not been discovered becomes Remaining Losses. The sum of the Level of Losses and Remaining Losses is a consequence of the composition of the project team and is determined by Weighted Shortcoming and a connected parameter, the Rate of Losses. Additionally, the "bad" interviews are not identified immediately; this usually happens later, in the process of paper writing or reviewing. The time after which low-quality interviews are discovered is called here the Time to Detect Losses.
The Full Employment Performance Rate corresponds to the number of high-quality interviews that the project team is capable of conducting during the project duration. It is determined both by planned absences (a project team member cannot be scheduled for certain interviews because of a lack of free time in his or her calendar) and by productivity and experience.
Work To Do is the current number of interviews scheduled in the initial project plan, corrected by the Full Employment Performance Rate and then increased by the Level of Rework and the Level of Losses. Work Done is the current number of interviews completed (without confirmation of a sufficient quality level); at project termination, it is also the value of the success criterion $C_1$. Stream of Work represents the interviews directed to the process of publication preparation.
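A highly simplified, discrete-step rendering of this structure might look as follows. This is a sketch only: the weekly rate, the absence and low-quality fractions, and the detection delay are illustrative assumptions, not the fitted parameters of the Vensim model.

```python
WEEKS = 52          # project duration in weeks
RATE = 1.2          # Full Employment Performance Rate: interviews attempted per week
P_ABSENT = 0.10     # Weighted Absence: fraction of interviews not conducted as scheduled
P_BAD = 0.05        # Weighted Shortcoming: fraction of interviews of low quality
DETECT_DELAY = 20   # Time to Detect Losses, in weeks

work_done, rework, losses = 0.0, 0.0, 0.0
bad_queue = [0.0] * (WEEKS + DETECT_DELAY)  # low-quality interviews awaiting discovery

for week in range(WEEKS):
    rescheduled = min(rework, 0.5)          # make up earlier absences if time allows
    absent = RATE * P_ABSENT                # Rework Gen Rate
    conducted = RATE - absent + rescheduled
    rework += absent - rescheduled          # Level of Rework
    bad_queue[week + DETECT_DELAY] += conducted * P_BAD  # Rate of Losses, found later
    found = bad_queue[week]                 # low quality discovered this week
    losses += found                         # Level of Losses
    work_done += conducted - found          # Work Done drops when low quality is found

remaining = sum(bad_queue[WEEKS:])          # Remaining Losses: never discovered in time
print(f"Work Done: {work_done:.1f}, Level of Losses: {losses:.1f}, "
      f"Remaining Losses: {remaining:.1f}")
```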
The four possible choices of $E_1$ were simulated, and the following results were obtained.
If $V_1^1$ is selected (a project team composed of four BRs), Vensim generates the results presented in Figure 3a–c. The x axis of Figure 3a–c represents the whole project duration (one year, i.e., 52 weeks), and the y axis represents the number of interviews corresponding to the respective variables. The most important values are those attained at the end of the project duration.
In Figure 3a, we can see that, because of the lack of experience and a certain absence problem, the four BRs were only able to perform 50% of the interviews included in the project goal, i.e., 30 (the final value of Work Done). In Figure 3b, we can see the accumulated number of absences (Level of Rework) and the accumulated number of absences that have not been made up for (Remaining Rework). The Remaining Rework here is very low, close to zero, as we assumed in our model that Time to Detect Absence is zero. In Figure 3c, we can see the accumulated number of identified low-quality interviews (Level of Losses: two at project completion) and the accumulated number of low-quality interviews that have not been identified (Remaining Losses: one at project completion). It can be noticed that from week 40 onwards, the number of Remaining Losses starts to decrease, as the losses start to be discovered (after the Time to Detect Losses, assumed to be 20 weeks) and there are no new interviews.
To sum up, with this choice of project team members (four BRs), the value of the criterion $C_1$ would be 27 (30 interviews conducted minus three of low quality: two discovered and one undiscovered).
The second strategy variant for the choice of the project team, $V_2^1$ (four URs), provides the results presented in Figure 4a–c.
Because of their lower experience, a team composed of four URs performed only 24 interviews, of which four were of low quality and identified prior to project termination (the final value of the Level of Losses) and two remained undiscovered until the end of the project (Remaining Losses). Thus, the value of the criterion $C_1$ is 24 − 6 = 18. The low quality of the interviews and its gradual detection can be seen in the descending last segment of Work Done. This team has one advantage: a low absence rate, which is seen in the low values of the Level of Rework and Remaining Rework.
Let us now consider $V_3^1$, i.e., a project team composed of four ERs. The results are presented in Figure 5a–c.
Here, we see the case of the most experienced workers (four of them) forming the project team. Because of high planned absences, they were only able to perform about 38 interviews. However, in their case, the quality problem is negligible: at the end of the project (Figure 5c), there were only two low-quality interviews, of which "less than one" (this fractional result is due to the continuous character of the model) remained undiscovered until the project's termination. Thus, we can approximate the value of the criterion $C_1$ as 38.
The last project team scenario we considered was the mixture $V_4^1$: a project team of four BRs supported by two URs and two ERs (Figure 6a–c). Here, we obtained the highest value of the criterion in question.
This team was able to conduct almost 60 interviews. The relatively high absences were made up for. There were seven low-quality interviews, of which "more than two" remained undiscovered at project termination. The value of the success criterion $C_1$ is thus 53.
In the following subsection, we integrate the other success criteria, $C_2$ and $C_3$. To simplify the presentation, we assume here the best team scenario from the point of view of success criterion $C_1$: four BRs supported by two URs and two ERs. In this case, almost 60 interviews were conducted, and all of them were directed to paper preparation. For the sake of simplicity, we assume in the next section a rounded-up number of interviews: 60.

6.3.2. Selection of Article Submission Patterns and Their Influence on Success Criteria $C_2$ and $C_3$

The next step is to integrate the objectives concerning publications, i.e., $C_2$ and $C_3$. The process of publishing and reviewing is of a different nature than the continuous problem of scheduling and controlling interviews. The feedback loops we used in Figure 2 are of less importance here, as papers are submitted at discrete moments in time, and the outcomes of reviews resemble a lottery more than a network of causes and consequences. That is why, for this part, we chose a discrete, random model.
The connector shown in Figure 7 is fed by the Stream of Work, i.e., the interviews directed to the process of paper preparation, corrected by the Rate of Losses (determining the Level of Losses, i.e., the interviews whose low quality was discovered during paper preparation) and Activities Waiting (i.e., the interviews on the basis of which papers are to be prepared). A certain preparation time is selected for each paper (here, one week). The Coeff(icient) of Article stands for the number of interviews that form the basis of one paper (here, three). The Stream of Articles is composed of a number of articles equal to the Sum of Articles, which are sent to journals.
As mentioned before, we consider here only one scenario for paper submission, denoted $V_1^2$: publications are first submitted to journals guaranteeing a higher number of points (here, 100), and in the case of rejection, they are submitted, possibly after some corrections, to journals with a lower number of points. This is illustrated by the model in Figure 8.
In the first step, a paper is sent to a 100-point journal, where the waiting time for the review is set to a fixed value, and the outcome of the review is a random variable with a normal distribution whose parameters, given by the "Coeff(icients) of Accept(ed) Article for 100 Points," were constructed on the basis of experience with highly pointed journals. Streams of rejected papers (Rejected Articles for 100 Points) and accepted papers (Accepted Articles for 100 Points) arise. Depending on the moment of rejection, it may be possible to introduce the changes proposed by the reviewers and send the paper to a 20-point journal (Sum of Sent Articles for 20 Points); otherwise, the paper is not sent further because of the lack of time within the project duration (Sum of Lost 100-Point Articles After Late Rejection). The same procedure, if there is time, is applied to 20-point and then 5-point journals.
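This cascade can be sketched as a small Monte Carlo model. The acceptance probabilities, review times, and correction time below are illustrative assumptions only; the actual model draws review outcomes from the fitted distributions mentioned above.

```python
import random

WEEKS = 52  # project duration in weeks
# (points, acceptance probability, review time in weeks) per tier -- illustrative
TIERS = [(100, 0.20, 16), (20, 0.45, 12), (5, 0.90, 8)]

def submit(start_week: int) -> int:
    """Follow one paper through the 100 -> 20 -> 5 point cascade; return points gained."""
    t = start_week
    for points, p_accept, review_time in TIERS:
        t += review_time               # wait for the review outcome
        if t > WEEKS:
            return 0                   # the decision falls outside the project duration
        if random.random() < p_accept:
            return points              # accepted at this tier
        t += 2                         # corrections before resubmitting one tier lower
    return 0                           # rejected at every tier

random.seed(1)
# one paper per three interviews, ready roughly at weeks 3, 6, ..., 60
results = [submit(start_week=3 * (i + 1)) for i in range(20)]
accepted = [r for r in results if r > 0]
print(f"papers accepted: {len(accepted)}, total points: {sum(accepted)}")
```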
The outcomes of the simulation are shown in Figure 9.
In the first step, all 20 papers are sent to 100-point journals. Many of them are gradually rejected (here, 16 in total); only four are accepted. The Sum of Lost 100-Point Articles After Late Rejection is 0, which means that there was time to send the 16 rejected papers, after some corrections, to 20-point journals.
As far as the 20-point journals are concerned, the simulation led to the result that seven papers were accepted and nine rejected. Of the nine rejected papers, five were rejected too late to be sent to a 5-point journal (Sum of Lost 20-Point Articles After Rejection). Four papers were sent to 5-point journals, and all of them were accepted.
To sum up, 20 papers were submitted, and 15 were accepted within the project duration: four papers in 100-point journals, seven in 20-point journals, and four in 5-point journals. Altogether, 560 points were gained.

6.3.3. Analysis of Results

In the above calculations, we thoroughly analyzed one strategy, $\{V_4^1, V_1^2\}$, which consists of choosing a project team composed of four BRs supported by two URs and two ERs, together with the following paper submission pattern: publications are first submitted to journals guaranteeing a higher number of points (here, 100), and in the case of rejection, they are submitted, possibly after some corrections, to journals with a lower number of points. Let us consider three different success definitions for the research project in question:
  • The project will be fully successful if at least 60 high-quality interviews are conducted, at least 20 papers are published, and at least 600 points for publication are attained.
  • The project will be fully successful if at least 60 high-quality interviews are conducted, at least 30 papers are published, and at least 500 points for publication are attained.
  • The project will be fully successful if at least 60 high-quality interviews are conducted, at least 10 papers are published, and at least 480 points for publication are attained.
The three success definitions lead to three versions of the goal programming model, whose parameters and objective function values are presented in Table 1. Let us remind the reader that, for the strategy selected here, we have $C_1^* = 60$, $C_2^* = 15$, and $C_3^* = 560$.
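As a check, the objective values can be recomputed directly from the formula in Section 6.2 (unweighted deviations, as assumed throughout):

$$\text{I: } \max(60-60,0)+\max(20-15,0)+\max(600-560,0)=0+5+40=45$$
$$\text{II: } \max(60-60,0)+\max(30-15,0)+\max(500-560,0)=0+15+0=15$$
$$\text{III: } \max(60-60,0)+\max(10-15,0)+\max(480-560,0)=0+0+0=0$$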
In Table 1, we can see that the same strategy can be evaluated differently depending on the understanding of research project success. Success definition I is the most ambitious with respect to the number of points, and here the value of the objective function is the highest and thus the worst. Therefore, if the university attaches great importance to points, which in the discussed case (Poland) are unstable, the selected strategy would be evaluated as not very good. However, if the university in question, aware of the unstable nature of points, attaches more importance to the number of publications (success definition II), the evaluation of the strategy would be much better. On the other hand, if the policy of the university is to give more importance to long-term project outputs and to the well-being of the researchers, success definition III might be adopted. Under this definition, neither the points nor the number of "quick" papers, produced within the project duration on the basis of a short sequence of interviews, are very important. What counts is mainly the number of high-quality interviews, which potentially correspond to new knowledge and to high-quality summarizing papers that would be published in the future and might be the basis of further research.
Of course, here we analyzed only one strategy. Other feasible strategies should also be analyzed in order to make the final choice. It is clear that the selection of project strategy is strongly influenced by the understanding of project success, and other criteria should also be taken into account, for example, the employment cost and the status of project team members. For instance, if criterion $C_1$ had a lower goal value $C_1^{min}$, strategies $\{V_1^1, V_1^2\}$ or $\{V_2^1, V_1^2\}$ might be preferred because of the lower cost of research work, once, of course, the values of $C_2$ and $C_3$ have been subjected to simulation. In other situations, when neither the employment cost nor a high value of $C_1$ is very important, strategy $\{V_3^1, V_1^2\}$ might be valued more highly, because there the project team is composed of experienced researchers of high prestige. Of course, the second strategy element usually has more variants (e.g., submitting only to prestigious journals and, even in the case of rejection, trying to modify the paper and submit it to another equally highly pointed journal, or other submission patterns), and the strategy itself usually has more than two elements. However, in each case, simulation, based on a suitably extended model, will assist us in evaluating various strategies according to our quantitative success criteria and preferences and the policy of our organization.

7. Conclusions

In this paper, we proposed a general model supporting the decision-maker in selecting the best strategy (among the feasible ones) for the research project he or she is going to implement, taking into account the project environment as well as the decision-maker's preferred understanding of project success, the success criteria, and their aggregation method. The model is based on the simulation of various strategies. The success of the project is measured according to the preferences of the decision-maker, and the best strategy can thus be selected.
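In code terms, the whole procedure reduces to a simulate, score, and select loop, sketched below. The strategy catalogue, its criterion values, and the shortfall-based aggregation are hypothetical placeholders standing in for the simulation models and the goal programming formulation discussed above.

```python
# Skeleton of the proposed procedure: simulate every feasible strategy,
# evaluate it under the decision-maker's success definition, select the best.
# The strategy catalogue and criterion values below are hypothetical.

def evaluate(criteria, thresholds):
    """Aggregate success as the sum of shortfalls below the goal values
    (a basic goal-programming aggregation; lower is better)."""
    return sum(max(thresholds[c] - criteria[c], 0) for c in thresholds)

simulated_strategies = {            # stand-ins for simulation outputs
    "{V4^1, V1^2}": {"C1": 60, "C2": 15, "C3": 560},
    "{V3^1, V1^2}": {"C1": 58, "C2": 18, "C3": 610},
}
success_definition = {"C1": 60, "C2": 20, "C3": 600}

best = min(simulated_strategies,
           key=lambda s: evaluate(simulated_strategies[s], success_definition))
print("selected strategy:", best)
```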
The case study used here to illustrate the approach concerns a research project implemented at a public university in Poland and is interpreted in the context of Polish public universities, which are evaluated by means of a frequently changing, rather complicated algorithm based on a ministerial list of journals with assigned points. The case study shows that researchers, even though they should be largely independent, often have to adapt to their university's policy and define their project success accordingly, and this influences the strategy that should be selected for the project. Simulation can be used to choose a strategy that leads to the best compromise between the university policy and the objectives of the researchers. In Polish circumstances, where the algorithm of university evaluation changes constantly, simulation provides an excellent tool for researchers to adapt their research project strategies to the given situation. A single simulation model, constructed, for example, in a freely available system such as Vensim, can be modified each time the law governing public universities changes (as happens regularly in contemporary Poland).
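For readers who prefer code to a diagram, the stock-and-flow structure underlying the interview model (cf. Figure 2) can be approximated in a few lines of Python with Euler-style integration. This is a minimal sketch: the work rate and the rework and loss fractions are assumptions chosen only to illustrate the mechanism, not the calibrated values used in the case study.

```python
# Minimal stock-and-flow sketch of the interview model (cf. Figure 2).
# Stocks: work_to_do, work_done, losses; rework flows back into work_to_do.
# All rates are illustrative assumptions, not calibrated case-study values.

def simulate(work_to_do=80.0, rate=2.0, rework_frac=0.15, loss_frac=0.05,
             dt=0.25, horizon=52.0):
    work_done = losses = 0.0
    t = 0.0
    while t < horizon and work_to_do > 1e-9:
        done = min(rate * dt, work_to_do)   # interviews attempted this step
        work_to_do -= done
        work_done += done * (1 - rework_frac - loss_frac)
        losses += done * loss_frac          # interviews lost for good
        work_to_do += done * rework_frac    # repeats re-enter the queue
        t += dt
    return work_done, losses

done, lost = simulate()
print(f"well-conducted interviews: {done:.1f}, irrecoverable losses: {lost:.1f}")
```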
It has to be underlined that, although the case study discussed here is based on the Polish system, the case study itself and all of the corresponding conclusions are by no means limited to Poland. In many countries (see, e.g., Despotis et al. 2015), research activities are evaluated in a way which is not always seen as satisfactory by the researchers (Betta et al. 2017) and is often based on numerical characteristics of publications that are simply inherited from the journals in which they appear. Thus, in the case of other countries, the points assigned by the Polish government should be replaced by whatever relevant characteristics are used in the given context, such as the Impact Factor.
The model proposed in this paper is largely simplified. It does not take into account the following elements:
  • numerous other possible research project success criteria, especially those reaching beyond project termination (it should be underlined here that the simulation can be run over a horizon longer than the project duration);
  • other features of various project strategies (apart from success criteria), e.g., the cost or implementation difficulty;
  • numerous other research project strategy elements (Section 3);
  • approaches to the multicriteria project success evaluation problem other than the basic form of goal programming.
However, the simplified model and the case study show that:
  • the problem of determining the research project success definition is complex and equivocal;
  • the selection of research project strategy cannot be detached from the problem of defining project success;
  • in order to be as successful as possible (according to his or her understanding of success and to the policy of the parent organization), the project manager must carefully choose one of the many feasible project strategies; disregarding this step may mean putting considerable effort into pushing the project in an undesired direction;
  • simulation, applied to projects before they start, may be helpful, because it can indicate which strategy to choose in order to move in the desired direction. In addition, the model can be modified each time the policy of the government or of the parent organization changes.
Of course, the proposed model has important limitations, especially as far as its validation and verification are concerned. Further research and case studies are needed to elaborate an efficient system supporting the selection of research project strategy in a given project environment and for a given understanding of project success. In the future, the proposed model will be verified on more types of research projects from different areas, at different universities and research institutions of other types, in industry, and with different types of funding sources.
The model will also be extended with additional tools. Methods of linguistic dialogue with the decision-maker might be useful here. Moreover, the problem of selecting the model's parameters is not an easy one to solve. Here, we selected the parameters largely subjectively, on the basis of our experience in publishing papers; a database with the relevant information should be constructed so that the simulation models come as close as possible to reality.
To sum up, the use of simulation in selecting project strategy as a function of the adopted understanding of project success seems to be a very promising research direction, and not only for research projects. We believe this paper offers a contribution to the development of this research direction.

Author Contributions

Conceptualization, D.K. and S.S.; methodology, D.K. and S.S.; software, S.S.; formal analysis, D.K.; investigation, D.K. and S.S.; writing, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shenhar, Aaron, Dov Dvir, Thomas G. Lechler, and Peerasit Patanakul. 2011. Project Strategy: The Missing Link. Project Management Journal. [Google Scholar] [CrossRef]
  2. Artto, Karlos A., Juha-Matti Lehtonen, and Juha Saranen. 2001. Managing Projects Front-End: Incorporating a Strategic Early View to Project Management with Simulation. International Journal of Project Management 19: 255–64. [Google Scholar] [CrossRef]
  3. Artto, Karlos, Jaakko Kujala, Perttu Dietrich, and Miia Martinsuo. 2008. What Is Project Strategy? International Journal of Project Management. [Google Scholar] [CrossRef] [Green Version]
  4. Betta, Jan, Joanna Jastrzębska, Kazimierz Frączkowski, Barbara Gładysz, Dorota Kuchta, Ewa Prałat, Ewa D. Marchwicka, Paweł M. Rola, Katarzyna A. Walecka-Jankowska, Edyta Ropuszyńska-Surma, and et al. 2017. Success and Failure Factors of R&D Projects at Universities in Poland and France. In Business Risk in Changing Dynamics of Global Village: Proceedings of the 1st International Conference on Business Risk in Changing Dynamics of Global Village, Nysa, Poland, 26–27 April 2017. Nysa: University of Applied Sciences, pp. 265–78. [Google Scholar]
  5. Cardona-Meza, Luz, and Gerard Olivar-Tost. 2017. Modeling and Simulation of Project Management through the PMBOK® Standard Using Complex Networks. Complexity, 1–12. [Google Scholar] [CrossRef] [Green Version]
  6. Chou, Jui-Sheng. 2011. Cost Simulation in an Item-Based Project Involving Construction Engineering and Management. International Journal of Project Management 29: 706–17. [Google Scholar] [CrossRef]
  7. Clemente-Gallardo, Jesús, Alfredo Ferrer, David Íñiguez, Alejandro Rivero, Gonzalo Ruiz, and Alfonso Tarancón. 2019. Do Researchers Collaborate in a Similar Way to Publish and to Develop Projects? Journal of Informetrics 13: 64–77. [Google Scholar] [CrossRef]
  8. Coyle, Geoff. 1998. The Practice of System Dynamics: Milestones, Lessons and Ideas from 30 Years Experience. System Dynamics Review 14: 343–65. [Google Scholar] [CrossRef]
  9. Dale, Barrie G., M. B. F. Elkjaer, A. van der Wiele, and A. R. T. Williams. 2001. Fad, Fashion and Fit: An Examination of Quality Circles, Business Process Re-Engineering and Statistical Process Control. International Journal of Production Economics 73: 137–52. [Google Scholar] [CrossRef]
  10. Davis, Kate. 2014. Different Stakeholder Groups and Their Perceptions of Project Success. International Journal of Project Management 32: 189–201. [Google Scholar] [CrossRef]
  11. Despotis, Dimitris K., Gregory Koronakos, and Dimitris Sotiros. 2015. A Multi-Objective Programming Approach to Network DEA with an Application to the Assessment of the Academic Research Activity. Procedia Computer Science 55: 370–79. [Google Scholar] [CrossRef] [Green Version]
  12. Eilat, Harel, Boaz Golany, and Avraham Shtub. 2008. R&D Project Evaluation: An Integrated DEA and Balanced Scorecard Approach. Omega 36: 895–912. [Google Scholar] [CrossRef]
  13. Eskerod, Pernille, and Tina Larsen. 2018. Advancing Project Stakeholder Analysis by the Concept ‘Shadows of the Context’. International Journal of Project Management 36: 161–69. [Google Scholar] [CrossRef]
  14. Fang, Chao, and Franck Marle. 2012. A Simulation-Based Risk Network Model for Decision Support in Project Risk Management. Decision Support Systems 52: 635–44. [Google Scholar] [CrossRef]
  15. Forozandeh, Mohammad, Ebrahim Teimoury, and Ahmad Makui. 2018. A Model for Network Design of Supply Chain Management in Research Projects. Uncertain Supply Chain Management 6: 407–22. [Google Scholar] [CrossRef]
  16. Forrester, Jay W. 1968. Industrial Dynamics—After the First Decade. Management Science 14: 398–415. [Google Scholar] [CrossRef] [Green Version]
  17. Fu, Linglan. 2017. Project Management Based on Computer Simulation Technology. Paper presented at 2017 International Conference on Smart City and Systems Engineering (ICSCSE), Changsha, China, November 11–12; New York: The Institute of Electrical and Electronics Engineers, Inc., pp. 82–84. [Google Scholar] [CrossRef]
  18. Garcia, Clara, and Luis Sanz-Menéndez. 2004. Competition for Funding as an Indicator of Research Competitiveness: The Spanish R&D Government Funding. Scientometrics 64: 271–300. [Google Scholar]
  19. Ghazinejad, Masoumeh, Bassam Hussein, and Youcef J.-T. Zidane. 2018. Impact of Trust, Commitment, and Openness on Research Project Performance: Case Study in a Research Institute. Social Sciences 7: 22. [Google Scholar] [CrossRef] [Green Version]
  20. Ghomi, Seyyed M. T. Fatemi, and B. Ashjari. 2002. A Simulation Model for Multi-Project Resource Allocation. International Journal of Project Management 20: 127–30. [Google Scholar] [CrossRef]
  21. Golenko-Ginzburg, Dimitri, Aharon Gonik, and Zohar Laslo. 2003. Resource Constrained Scheduling Simulation Model for Alternative Stochastic Network Projects. Mathematics and Computers in Simulation 63: 105–17. [Google Scholar] [CrossRef]
  22. Higher Education Research and Development Survey (HERD). 2018. Available online: https://www.nsf.gov/statistics/srvyherd/ (accessed on 10 December 2019).
  23. Iluz, Michal, and Avraham Shtub. 2015. Simulation Based Planning of the Fuzzy Front End Stage of a Project. Procedia CIRP 36: 106–10. [Google Scholar] [CrossRef] [Green Version]
  24. Jordan, Gretchen, Jerald Hage, Jonathon Mote, and Bradford Hepler. 2005. Investigating Differences among Research Projects and Implications for Managers. R and D Management 35. [Google Scholar] [CrossRef]
  25. Kennedy, Deanna M., Sara A. McComb, and Ralitza R. Vozdolska. 2011. An Investigation of Project Complexity’s Influence on Team Communication Using Monte Carlo Simulation. Journal of Engineering and Technology Management 28: 109–27. [Google Scholar] [CrossRef]
  26. Kerzner, Harold. 2005. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. New York: John Wiley & Sons, Inc. [Google Scholar]
  27. Klaus-Rosińska, Agata. 2019. Success of Research and Research and Development Projects in the Science Sector. Wrocław: Wroclaw University of Science and Technology. [Google Scholar]
  28. Kouskouras, Konstantinos G., and Andreas C. Georgiou. 2007. A Discrete Event Simulation Model in the Case of Managing a Software Project. European Journal of Operational Research 181: 374–89. [Google Scholar] [CrossRef]
  29. Kozarkiewicz, Alina. 2016. Project Strategy and Strategic Project Management: The Understanding and the Perception of Relevance among Polish Project Management Practitioners. Paper presented at Conference Project Management Development—Practice and Perspectives, Fifth International Scientific Conference on Project Management in the Baltic Countries, Riga, Latvia, April 14–15; Riga: Professional Association of Project Managers, pp. 187–98. [Google Scholar]
  30. Kremljak, Zwonko, Izzy Palcic, and Ciril Kafol. 2014. Project Evaluation Using Cost-Time Investment Simulation. International Journal of Simulation Modelling 13: 447–57. [Google Scholar] [CrossRef]
  31. Kuchta, Dorota, Barbara Gładysz, Dorota Skowron, and Jan Betta. 2017. R&D Projects in the Science Sector. R and D Management 47: 88–110. [Google Scholar] [CrossRef] [Green Version]
  32. Kurihara, Kenzo, and Nobuyuki Nishiuchi. 2002. Efficient Monte Carlo Simulation Method of GERT-Type Network for Project Management. Computers & Industrial Engineering 42: 521–31. [Google Scholar] [CrossRef]
  33. Lai, Yu-Ting, Wei-Chih Wang, and Han-Hsiang Wang. 2008. AHP- and Simulation-Based Budget Determination Procedure for Public Building Construction Projects. Automation in Construction 17: 623–32. [Google Scholar] [CrossRef]
  34. Majtán, Miroslav, Martin Mizla, and Pavol Mizla. 2014. Utilization of Simulations in Project Management [Využitie Simulácií Pri Manažovaní Projektu]. Ekonomicky Casopis 62: 508–21. [Google Scholar]
  35. Martens, Cristina Dai Prá, Franklin Jean Machado, Mauro Luiz Martens, Filipe Quevedo Pires de Oliveira e Silva, and Henrique Mello Rodrigues de Freitas. 2018. Linking Entrepreneurial Orientation to Project Success. International Journal of Project Management 36: 255–66. [Google Scholar] [CrossRef]
  36. Martín García, Juan. 2019. Theory and Practical Exercises of System Dynamics. Barcelona: Juan Martín García. [Google Scholar]
  37. Meadows, Donella H. 2009. Thinking in Systems: A Primer. London and Sterling: Earthscan. [Google Scholar]
  38. Menipaz, Ehud, and Avner Ben-Yair. 2002. Three-Parametrical Harmonization Model in Project Management by Means of Simulation. Mathematics and Computers in Simulation 59: 431–36. [Google Scholar] [CrossRef]
  39. Moohebat, Mohammadreza, Asefeh Asemi, and Mohammad Davarpanah Jazi. 2010. A Comparative Study of Critical Success Factors (CSFs) in Implementation of ERP in Developed and Developing Countries. International Journal of Advancement in Computing Technology 2: 99–110. [Google Scholar] [CrossRef] [Green Version]
  40. Morales, Peter J., and Dennis Anderson. 2013. Process Simulation and Parametric Modeling for Strategic Project Management. Springer Briefs in Electrical and Computer Engineering. New York: Springer. [Google Scholar]
  41. Morecroft, John. 2015. Strategic Modelling and Business Dynamics: A Feedback Systems Approach, 2nd ed. Hoboken: John Wiley & Sons Ltd, pp. 1–466. [Google Scholar] [CrossRef]
  42. Ourdev, Ivan, Simaan Abourizk, and Mohammed Al-Bataineh. 2007. Simulation and Uncertainty Modeling of Project Schedules Estimates. Paper presented at the 2007 Winter Simulation Conference, Washington, DC, USA, December 9–12; New York: The Institute of Electrical and Electronics Engineers, Inc., pp. 2128–33. [Google Scholar] [CrossRef]
  43. Rasmussen, Thomas, Niels Hansen, and Sanja Lazarova-Molnar. 2017. A Discrete-Event Simulation Tool for Decision Support in Selecting Project Scheduling Strategies. Paper presented at the 7th International Conference on Modeling, Simulation, and Applied Optimization (ICMSAO), Sharjah, United Arab Emirates, April 4–6; New York: The Institute of Electrical and Electronics Engineers, Inc., pp. 1–5. [Google Scholar] [CrossRef]
  44. Revilla, Elena, Joseph Sarkis, and Aurelia Modrego Rico. 2003. Evaluating Performance of Public–private Research Collaborations: A DEA Analysis. Journal of the Operational Research Society 54: 165–74. [Google Scholar] [CrossRef]
  45. Sabeghi, Narjes, Hamed R. Tareghian, Erik Demeulemeester, and Hasan Taheri. 2015. Determining the Timing of Project Control Points Using a Facility Location Model and Simulation. Computers & Operations Research 61: 69–80. [Google Scholar] [CrossRef]
  46. Senge, Peter M. 1997. The Fifth Discipline. Measuring Business Excellence 1: 46–51. [Google Scholar] [CrossRef] [Green Version]
  47. Skorupka, Dariusz, Artur Duchaczek, and Magdalena Kowacka. 2016. Modified, Stakeholders Perspective Based DEA Approach in IT and R&D Project Ranking. Paper presented at the 18th International Conference on Enterprise Information Systems—Volume 1: ICEIS, Rome, Italy, April 25–28; Setúbal: SciTePress Digital Library, pp. 158–65. [Google Scholar] [CrossRef]
  48. Söderlund, Jonas. 2008. Reinventing Project Management: The Diamond Approach to Successful Growth and Innovation. By Aaron J. Shenhar and Dov Dvir. R & D Management 38: 355–56. [Google Scholar] [CrossRef]
  49. Song, Wen, Hui Xi, Donghun Kang, and Jie Zhang. 2018. An Agent-Based Simulation System for Multi-Project Scheduling under Uncertainty. Simulation Modelling Practice and Theory 86: 187–203. [Google Scholar] [CrossRef]
  50. Srinivasan, N. P., and S. Dhivya. 2019. An Empirical Study on Stakeholder Management in Construction Projects. Materials Today: Proceedings. [Google Scholar] [CrossRef]
  51. Sterman, John. 2018. System Dynamics at Sixty: The Path Forward. System Dynamics Review 34: 5–47. [Google Scholar] [CrossRef]
  52. Tarantola, D., R. Macklin, M. P. Kieny, S. Osmanov, M. Stobie, and Catherine Hankins. 2007. Ethical Considerations Related to the Provision of Care and Treatment in Vaccine Trials. Vaccine 25: 4863–74. [Google Scholar] [CrossRef] [PubMed]
  53. Uzzafer, Masood. 2013. A Simulation Model for Strategic Management Process of Software Projects. Journal of Systems and Software 86: 21–37. [Google Scholar] [CrossRef]
  54. Wang, Lin, Martin Kunc, and Si-jun Bai. 2017. Realizing Value from Project Implementation under Uncertainty: An Exploratory Study Using System Dynamics. International Journal of Project Management 35: 341–52. [Google Scholar] [CrossRef] [Green Version]
  55. Wang, R., X. Li, X. Song, L. Dai, and Y. Li. 2019. A Simulation Approach Based Project Schedule Assessment. Paper presented at the 9th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2019), Prague, Czech Republic, July 29–31; pp. 422–28. [Google Scholar]
  56. Wit, Anton de. 1988. Measurement of Project Success. International Journal of Project Management 6: 164–70. [Google Scholar] [CrossRef]
  57. Yuan, B., and J.-N. Huang. 2002. Applying Data Envelopment Analysis to Evaluate the Efficiency of R&D Projects—A Case Study of R&D in Energy Technology. In Technology Commercialization. Edited by S. A. Thore. Boston: Springer, pp. 111–34. [Google Scholar] [CrossRef]
Figure 1. Model for selecting the strategy of a research project in a given environment and for selected project success criteria and their selected aggregation.
Figure 2. General Vensim simulation model for success criterion $C_1$: the number of well-conducted interviews according to the information available within the project realization time.
Figure 3. (a) Simulation results for $V_1^1$: Work Done and Work to Do. (b) Simulation results for $V_1^1$: Level of Rework and Remaining Rework. (c) Simulation results for $V_1^1$: Level of Losses and Remaining Losses.
Figure 4. (a) Simulation results for $V_2^1$: Work Done and Work to Do. (b) Simulation results for $V_2^1$: Level of Rework and Remaining Rework. (c) Simulation results for $V_2^1$: Level of Losses and Remaining Losses.
Figure 5. (a) Simulation results for $V_3^1$: Work Done and Work to Do. (b) Simulation results for $V_3^1$: Level of Rework and Remaining Rework. (c) Simulation results for $V_3^1$: Level of Losses and Remaining Losses.
Figure 6. (a) Simulation results for $V_4^1$: Work Done and Work to Do. (b) Simulation results for $V_4^1$: Level of Rework and Remaining Rework. (c) Simulation results for $V_4^1$: Level of Losses and Remaining Losses.
Figure 7. The stochastic and discrete Vensim model of creating and submitting papers.
Figure 8. Vensim model of paper submission. The part referring to the first stage: submitting to 100-point journals and reacting to possible rejections.
Figure 9. Results of the simulation of paper submission. The part referring to the first stage: submitting to 100-point journals and reacting to possible rejections.
Table 1. Evaluation of one project strategy ($\{V_4^1, V_1^2\}$) as a function of project success interpretation.

Project Success Interpretation | $C_1^{min}$ | $C_2^{min}$ | $C_3^{min}$ | Obj. Function (2)
I | 60 | 20 | 600 | 0 + 5 + 40 = 45
II | 60 | 30 | 500 | 0 + 15 + 0 = 15
III | 60 | 10 | 480 | 0 + 0 + 10 = 10
