Review

A Systematic Review of Indicators for Evaluating the Effectiveness of Digital Public Services

by Glauco Vitor Pedrosa *, Ricardo A. D. Kosloski, Vitor G. de Menezes, Gabriela Y. Iwama, Wander C. M. P. da Silva and Rejane M. da C. Figueiredo
Information Technology Research and Application Center (ITRAC), University of Brasilia (UnB), Federal District, 72444-240 Brasilia, Brazil
* Author to whom correspondence should be addressed.
Information 2020, 11(10), 472; https://doi.org/10.3390/info11100472
Submission received: 27 July 2020 / Revised: 11 September 2020 / Accepted: 15 September 2020 / Published: 6 October 2020
(This article belongs to the Section Review)

Abstract

Effectiveness is a key feature of good governance, as the public sector must make the best use of its resources to comply with the needs of the population. Several indicators can be analyzed to evaluate the effectiveness of a service. This study analyzes theoretical references and presents a systematic survey of indicators to assess the effectiveness of digital public services from the perspective of the user. First, a literature review was carried out to identify the most common indicators employed to evaluate effectiveness in the public sector; then, the perception of academics and professionals in digital government was assessed to analyze the relevance of these indicators. As a result, two groups of indicators were found: technical factors based on service quality, and usefulness of the service. This work contributes to the discussion on how to build a model for evaluating the effectiveness of public services that guarantees quality standards and complies with the expectations of users.

1. Introduction

In an era of digital transformation, traditional bureaucratic organizational structures tend to be replaced by e-government projects. Information and Communications Technology (ICT) has enabled a reduction in administrative costs and leveraged the immense potential of technology for more efficient public services. Digital public services (also called e-government services) are services delivered to citizens by the government using ICTs. Examples include income tax declaration, notification and assessment; birth and marriage certificates; and the renewal of a driver's license, as well as other kinds of requests for permissions and licenses. Through digital services, the government can deliver information and services to citizens anytime, anywhere, and on any platform or device.
The adoption of digital services in the public sector is complex and challenging, particularly in developing countries [1]. This is due to the inefficiency of public organizations, the shortage of skilled human resources, poor ICT infrastructure, low standards of living and large rural populations [2]. Many developing countries have implemented specific e-government initiatives to make full use of the potential benefits of e-government [3]. In this sense, the demand for a systematic, continuous and effective assessment of the provision of digital public services contrasts with the lack of clarity regarding performance indicators, as service evaluation is a broad theme [4].
The development of e-government creates a need for its continuous evaluation around the world [5]. There are several approaches in the literature to evaluate the performance of e-government. The readiness assessment, for example, examines the maturity of the e-government environment by evaluating the awareness, willingness and preparedness of e-government stakeholders and identifying the enabling factors for the development of e-government [6]. The main advantage of this approach is the use of quantifiable indicators that provide an overview of the maturity of e-government [3]. However, the readiness perspective is often criticized for neglecting the demands of citizens and the impact of digital services on citizens and society.
There have been many other attempts to evaluate e-government across global regions. In the UK, the performance of digital public services is assessed considering three public value dimensions: quality of public service delivery, outcomes, and trust [7]. Quality of public service delivery is evaluated through the level of information provision, level of use, availability of choice, user satisfaction, user priorities, fairness, and cost savings. The European eGovernment Action Plan 2016–2020 [8] assesses e-government in terms of the reduction of the administrative burden on citizens, the improvement of citizens' satisfaction and the inclusiveness of public services. The Russian Federation [9] assesses effectiveness in terms of the quality of public services, trust, and outcomes. The Agency for the Development of Electronic Administration in France [10] proposes a framework for evaluating the public value of information technology in government, with a focus on the financial benefits of e-government projects for citizens. One of the critical issues is how to evaluate and assess the success of such projects. Traditional value assessment methods from the business field are not sufficient to cope with this issue, as business and government hold different value perspectives and have different concerns [11].
Some of the more traditional models for service evaluation are based on efficiency, efficacy and effectiveness [12]. Effectiveness can be defined as the perception of the changes made; efficacy refers to the extent to which the intended goals were achieved; and efficiency means doing more with fewer resources [13]. Among these three criteria, effectiveness plays a key role in producing the desired social outcomes. Indeed, it aims to guarantee practical results: it would be useless to achieve the most satisfactory result if the impact of the service on society could not be realized. In this sense, effectiveness has much to do with the pursuit of user satisfaction.
The great challenge for the measurement of effectiveness in the public sector is to obtain valid data to inform the Public Administration of the results of its services. There is no consensus in the literature on the best indicator to measure the effectiveness of a public service. There is a gap on this subject, mainly regarding the importance of the indicators in light of the needs of the user. In this sense, evaluating the effectiveness of public services from the perspective of the user is an opportunity to detect the factors that hinder or facilitate their impact on society.
This work is a systematic study carried out to identify the most common effectiveness-based indicators of compliance with the expectations of users of digital public services. For this purpose, a bibliographic review was performed to retrieve the most frequent indicators of effectiveness in the literature. Then, a perception survey was undertaken with academics and professionals in digital government to measure the relevance of the indicators and identify correlations between them. This perception survey with experts in digital government is an important contribution of this work to the literature, since it provides a quantitative analysis of the relevance of each indicator for measuring the effectiveness of public services. Based on the results, we present some discussion and suggestions for the uptake of the indicators in the evaluation of public services.
This paper is organized as follows: Section 2 presents the steps of our systematic literature mapping; Section 3 discusses some important results raised by our literature review; Section 4 presents the results of a perception survey carried out with experts in digital government; finally, Section 5 presents the final considerations of this paper.

2. Methodology: Mapping Study Planning

To support this research, we used methods from bibliometric studies [14] and systematic literature reviews [15]. Bibliometric studies are quantitative approaches to identify, evaluate and monitor published studies available in a given area or line of research. They aim to examine how disciplines, fields of research, specialties and individual publications relate. Unlike bibliometric studies, a Systematic Literature Review is performed to identify publications, analyze their contents and consolidate their results. Formal procedures for retrieving documents from digital publication bases, selection criteria, and well-established routines to extract and record data are followed. In this work, we adopted the Population, Intervention, Comparison, Outcome, Context (P.I.C.O.C.) criteria to frame the research questions [15,16]. The population of this study was defined as "e-gov", the intervention as "effectiveness", the outcome as "definition", and the context as "government". A comparison scheme was not set. This formal procedure was useful to postulate an initial search string.
In this study, the search was performed in the SCOPUS database. This database was chosen because of its advanced search engine, which allows the use of complex Boolean expressions, as well as several useful filters to refine the results retrieved. SCOPUS also indexes publications from different sources, including ACM, Springer Link, and IEEE. Moreover, a high percentage of the articles in those databases is expected to also be indexed in SCOPUS [17].
The research problem was: how does effectiveness support evaluations of digital public services from the perspective of the user? Based on this problem, the following research questions were posed:
  • How is effectiveness defined in the e-government context?
  • What indicators of effectiveness are measured?
Then, an initial query string was defined as presented in Table 1.
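As an illustration of how the P.I.C.O.C. facets translate into the Boolean query, the sketch below (in Python, purely illustrative and not part of the original study's tooling) composes a string equivalent to the one in Table 1.

```python
# Illustrative sketch: composing the initial SCOPUS query in Table 1
# from the P.I.C.O.C. facets described above.

picoc = {
    "population":   ["e-gov", "e-government", "e-governance",
                     '"digital government"', '"civil service"'],
    "intervention": ["effectiveness"],
    "outcome":      ["definition", "concept", "metric", "indicator",
                     "measurement", "scale", '"impact factor"',
                     "variable", "parameter", "evaluation"],
    "context":      ["government", "citizen", "user", "organization",
                     "enterprise", "business"],
}

def or_group(terms):
    """Join the synonyms of one facet with OR."""
    return "(" + " OR ".join(terms) + ")"

# Facets are combined with AND; no comparison facet was set.
query = "TITLE-ABS-KEY (" + " AND ".join(or_group(t) for t in picoc.values()) + ")"
print(query)
```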
After running the search string in Table 1, a total of 550 papers were found. Then, we used SCOPUS’ own resources to evaluate this set of papers bibliometrically. Figure 1 shows that most of the publications retrieved were published after 2009; before this year, only four documents were, in fact, related to our study. This supported the idea of restricting the search to publications from 2009 onwards. The ongoing year of 2020 was also included in the search, so papers still in press were considered.
Figure 2 shows that the subject is still little explored, as the top ten authors found have published between two and five papers each. Figure 3a,b show the results by type of document and by area. Journal articles and conference papers, particularly in the areas of computer science, social science, business and management, and engineering, comprised most of the publications retrieved.
Next, the metadata generated by SCOPUS for the papers retrieved were evaluated with the VOSviewer software (version 1.6.15), which generated the graphs presented in Figure 4 and Figure 5. Figure 4 refers to the co-occurrence of keywords used in the 550 papers retrieved by the search string in Table 1. Figure 5 refers to the co-citation of authors, a useful tool to identify the most cited studies and authors, i.e., those with the greatest influence on the field. In fact, the co-citation cloud overlaps with the frequency of authors described in Figure 2.
The search string was refined after the bibliometric evaluation of the preliminary results. When the new search string, presented in Table 2, was run, 289 records were retrieved. The titles and abstracts of these 289 documents were read, and the following criteria were applied to select those to be read in full:
  • the paper should contain the keywords in the title or abstract;
  • the paper should contain concepts related to digital government;
  • the paper should answer at least one research question;
  • the paper should approach the point of view of users of digital government services.
By applying these criteria, the set of 289 papers was reduced to 120 papers, which were read in full by our research team. The data collected from the papers were recorded on a digital spreadsheet in a collaborative work environment. The consolidated results are reported as follows.
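For reproducibility, the first criterion can be pre-checked automatically over the exported SCOPUS records. The sketch below is hypothetical (the file name and column names are assumptions) and only flags candidates; it does not replace the manual reading of titles, abstracts and full texts described above.

```python
# Hypothetical pre-filter over a SCOPUS CSV export ("scopus.csv" with
# "Title" and "Abstract" columns is an assumption). Flags records that
# satisfy the first selection criterion.
import pandas as pd

KEYWORDS = ("e-gov", "e-government", "e-governance",
            "digital government", "civil service")

records = pd.read_csv("scopus.csv")
text = (records["Title"].fillna("") + " " + records["Abstract"].fillna("")).str.lower()

# Criterion 1: the paper should contain the keywords in the title or abstract.
mask = text.apply(lambda t: any(k in t for k in KEYWORDS))
records[mask].to_csv("to_read_in_full.csv", index=False)
print(f"{mask.sum()} of {len(records)} records flagged for full reading")
```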

3. Mapping Study—Discussion of Results

After the bibliographic research, we performed a quantitative analysis to identify the indicators related to effectiveness found in the 120 selected papers. Table 3 shows the most common indicators used to evaluate the effectiveness of digital public services.
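A tally of this kind can be produced directly from the extraction spreadsheet. The sketch below is hypothetical (the file name and the "indicators" column layout are assumptions): it counts how often each indicator tag appears across the 120 papers, yielding frequencies like those in Table 3.

```python
# Hypothetical tally over the shared extraction spreadsheet: one row per
# selected paper, with comma-separated indicator tags recorded during reading.
from collections import Counter
import csv

counts = Counter()
with open("extraction_sheet.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts.update(tag.strip() for tag in row["indicators"].split(","))

for indicator, freq in counts.most_common():
    print(f"{indicator}: {freq}")  # cf. Table 3
```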
The most frequent indicators of effectiveness are related to the ease of use and the usefulness of the service. This is because many papers applied the Technology Acceptance Model (TAM) [18] to evaluate the effectiveness of digital services. In this model, effectiveness is analyzed considering the acceptance of new technologies by users of digital public services, i.e., the model seeks to predict the acceptance of systems based on the measurement of the intentions of the user, and to explain such intentions through attitudes, subjective norms, perceived usefulness, perceived ease of use and related variables. TAM consists of six dimensions, illustrated in Figure 6: (i) external variables, (ii) perceived usefulness, (iii) perceived ease of use, (iv) attitude towards using, (v) behavioral intention to use and (vi) actual system use.
One of the key issues of TAM is the influence of external variables on the beliefs, attitudes and behaviors of users. From these, the impact of perceived usefulness and perceived ease of use on the uptake of a system is observed. These indicators are also related: greater ease of use can improve performance, allowing the user to produce more with the same effort, and therefore impacts perceived usefulness. Experiences of applying TAM to the public sector in several countries, such as India [19,20], Saudi Arabia [21,22], Tanzania [23] and Jordan [24], have been studied. Other works, such as [22], integrated the DeLone and McLean IS success model [25] with TAM. Cultural issues have also been taken into consideration by using the theory of personal values.
The ‘useful’ indicator is a fundamental driver of usage intentions, affecting the perceived effectiveness of digital public services. This indicator has been used extensively in information systems and technology research [26] and refers to “the extent to which a person believes that using a particular technology will enhance her/his job performance” [18]. The service management literature has stated that the service delivery system has a direct impact on service value [27]: the perceived e-service delivery process is directly related to the usefulness of the service, yielding a component of value for public services. Some works (e.g., [28]) presented an extension of the TAM to define the determinants of perceived usefulness and intention of use related to social influence and cognitive processes. Such determinants are also related to the external variables of the original model.
The ‘simple’ indicator refers to the process of user interaction with the services. It reflects the usability of the services [29], as a service can be easy to use while its delivery process still demands redundant steps and/or too much bureaucracy. According to [30], e-service delivery has greater potential for success in public sector tasks that have low or limited levels of complexity. Simple processes yield a better user experience and improve quality and consistency, as simple e-services are easier to learn, easier to change and faster to execute [24].
Presently, security and privacy concerns are increasing with the rapid growth of online services, and users are becoming more and more reluctant to provide their personal information online [31]. The ‘trustworthy’ indicator can refer both to the perception of security of the system used to provide the service and to trust in the institution that provides the service. In fact, trust in the system is important especially for services that use the Internet as a transaction channel [32], and particularly for those that require online payments [33], as this issue affects quality and the intention to use. The concept of trust regarding the use of e-government services is also justified by its relevance in the context of a political system, specific institutions or organizations, and political staff [34]. According to [35], trust in the institution and in the government is an important factor for the adoption of and intention to use the service.
The use of communication technology has provided an opportunity to improve the quality of services through electronic interactions [36]. The ‘available’ indicator refers to a variable of technical quality related to readiness and the absence of interruptions in access to digital systems [37]. Methodologies have been proposed to describe the availability of e-services by modelling the evolutionary path of the digital interface between public agencies and users, such as the model proposed by the European Commission (2001) [38] in a report on the provision of public e-services to measure the level of online sophistication of services. The ‘available’ indicator also covers the reach of the public service: it should be available to everyone, regardless of where users live [39].
The ‘understandable’ indicator concerns the presentation of information, aiming at simplicity in the execution of different transactions and in the navigation of the service journey. It can be associated with web design and service complexity. According to [24], the adequacy, attractiveness and good organization of information on websites make the service more comprehensible to all citizens.
The ‘consistent’ indicator refers to the coherent maintenance and presentation of e-services in terms of design, organization and interactivity to optimize and meet the expectations of users [40]. According to [29], the consistency of a digital service is related to the perception of how easy it is for users to find what they are looking for. E-government services must be accessible and well designed, and should follow established standards [41]. Maintaining consistency in the layout of the service is essential so that the pattern of interaction is the same for every process: once learned, it can be replicated in other contexts. In addition, the experience of use becomes much more pleasant because there is no feeling of being lost; this feeling, caused by a lack of consistency and standardization, is often the reason why users do not interact with applications.
Finally, the ‘fast’ indicator refers to the ability to complete the service quickly. As organizations prepare for digitalization, so must their IT departments. This means they have to respond more quickly to requests from different groups of users, increase infrastructure flexibility, and improve the use of current resources [42]. An important advantage of ICT is to promote more efficient and cost-effective government, allow greater public access to information, and make government more transparent and accountable to citizens [43]. Such initiatives particularly benefit rural areas by connecting regional and local offices with central government ministries. They also allow national government agencies to communicate and interact with their local constituencies and improve public services. To answer these needs, traditional approaches and modes of IT management are often insufficient. Public sector IT departments should adjust their operations in response to digitalization efforts such as smart cities and digital transformation. In this sense, the IT development process within the organization should be improved, i.e., how the IT department can better respond to the needs of business units. For this purpose, adjustments are required both in management and in daily operations. Moreover, changes should not be made only internally within the IT department, as the whole organization should be involved.

4. Perceptual Evaluation with Experts

A survey was carried out with experts to identify the relevance of each indicator (listed in the previous section) for evaluating the effectiveness of digital public services. For the purposes of our study, it would not be appropriate to apply the survey directly to users, as this could impair the analysis: everyday users have different profiles and different levels of knowledge, and different services have different users. In this sense, several analyses could be performed depending on the profiles of the users, which is an interesting topic for future work. The aim of our study, however, was to arrive at a non-subjective, broad concept of effectiveness.
A questionnaire was developed and applied to a group of 46 people from the academic community and professionals in the area of digital government. This group included directors from the Secretariat of Digital Government within the Brazilian Ministry of Economy and professors from the University of Brasilia. Table 4 shows the questionnaire applied. For each statement, respondents selected one option on a five-point Likert scale:
  • Strongly Disagree
  • Partially Disagree
  • Neutral
  • Partially Agree
  • Strongly Agree
Figure 7 shows the average score obtained for each indicator. The highest achievable score is 5. All indicators had high averages, which means that the professionals interviewed considered all of them relevant to the measurement of effectiveness. The highest score was obtained by the ‘useful’ indicator, while the ‘consistent’ indicator obtained the lowest score. These statistical analyses help establish the importance of each indicator of effectiveness. There is no consensus in the literature on this subject, so the survey carried out with experts contributed to defining “weights” for the indicators, which is an innovative feature of our work.
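Computing these averages is straightforward once the Likert options are coded 1 to 5. A minimal sketch follows, assuming one row per respondent and one column per indicator (the file and column names are illustrative).

```python
# Minimal sketch: Likert answers coded 1 ("Strongly Disagree") to 5
# ("Strongly Agree"); file and column names are assumptions.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # 46 rows, 8 indicator columns
means = responses.mean().sort_values(ascending=False)
print(means.round(2))  # e.g., 'useful' highest, 'consistent' lowest
```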

4.1. Statistical Analysis

The data collected were submitted to statistical analysis to verify the convergence and reliability of the information. Three statistical tests were used: the Kaiser–Meyer–Olkin (KMO) test, Bartlett's sphericity test and Cronbach's Alpha. Table 5 shows the values obtained. As we were interested in the correlations between the indicators, these analyses are standard in the area to measure the confidence and statistical significance of a sample.
The value of Cronbach's Alpha suggests that all indicators measure the same characteristic; therefore, the indicators are reliable and demonstrate strong construct validity. The KMO indicates the proportion of data variance that can be attributed to a common factor: the closer to 1, the better the result, i.e., the more adequate the sample is for factor analysis. In this case, the KMO value obtained shows that there is a high correlation between the variables. Bartlett's sphericity test examines the hypothesis that the variables are uncorrelated in the population. It yielded a significance level better than 0.001, which leads to the rejection, at the 0.05 significance level, of the null hypothesis that the correlation matrix is an identity matrix. This shows that there are correlations among the variables. Taken together, the tests suggest that the data are adequate for factor analysis.
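For reference, the three checks can be reproduced as in the sketch below, assuming the respondent-by-indicator matrix above and the third-party factor_analyzer package for the KMO and Bartlett statistics; Cronbach's alpha is computed directly from its definition.

```python
# Sketch of the three reliability/adequacy checks on the survey data.
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_kmo,
                                             calculate_bartlett_sphericity)

responses = pd.read_csv("survey_responses.csv")

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

kmo_per_item, kmo_overall = calculate_kmo(responses)
chi2, p_value = calculate_bartlett_sphericity(responses)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")  # paper: 0.941
print(f"KMO: {kmo_overall:.3f}")                             # paper: 0.858
print(f"Bartlett: chi2 = {chi2:.3f}, p = {p_value:.3g}")     # paper: 399.914
```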
As the ‘useful’ indicator achieved the highest score, we performed t-tests to verify whether there were statistically and practically significant differences between the ‘useful’ indicator and each of the others. Table 6 shows the paired values for each indicator compared to ‘useful’. The results do not show a statistically significant difference (at the p < 0.05 level) between ‘useful’ and ‘ease of use’, ‘trustworthy’, ‘available’, or ‘simple’. For the other indicators, although the t-test is statistically significant, the size of the practical effect of the difference is not, considering the threshold of r > 0.5. Therefore, there is no statistical evidence that the ‘useful’ indicator is actually superior to the others.
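A sketch of these paired comparisons follows, assuming the same data layout (the column name 'useful' is an assumption). The practical-significance measure is obtained from the t statistic as r = sqrt(t^2 / (t^2 + df)), which reproduces the values in Table 6 for the 46 respondents (df = 45).

```python
# Sketch of the paired t-tests against 'useful' with effect size r.
import numpy as np
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")
df = len(responses) - 1  # degrees of freedom for the paired test

for col in (c for c in responses.columns if c != "useful"):
    t, p = stats.ttest_rel(responses["useful"], responses[col])
    r = np.sqrt(t**2 / (t**2 + df))  # practical significance
    print(f"useful vs {col}: t = {t:.3f}, p = {p:.3f}, r = {r:.3f}")
```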

4.2. Principal Component Analysis

The data collected were also subjected to a principal component analysis to identify groups of correlated indicators. Varimax (orthogonal) rotation was used. Given the exploratory nature of the study, the number of factors was determined by keeping all factors with eigenvalues greater than 1.0. This produced a two-factor solution that explained 82.92% of the variation. The factorial loads for this solution can be seen in Table 7 (only loads greater than 0.40 are shown).
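A sketch of this extraction, assuming the factor_analyzer package: factors are kept while their eigenvalues exceed 1.0, a Varimax rotation is applied, and loadings below 0.40 are suppressed, mirroring Table 7.

```python
# Sketch of the factor extraction with eigenvalue > 1 retention and
# Varimax rotation; file and column names as in the earlier sketches.
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("survey_responses.csv")

# First pass: eigenvalues of the correlation matrix to pick the factor count.
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())  # two factors in this study

# Second pass: rotated solution.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=[f"Factor {i + 1}" for i in range(n_factors)])
print(loadings.where(loadings.abs() > 0.40).round(3))  # cf. Table 7
```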
The factorial loads in Table 7 indicate that, except for ‘consistent’, all indicators within the same factor are correlated, since none of them has a complex structure, i.e., factorial loads above 0.40 on both components. Considering the factorial loads, however, the ‘consistent’ indicator is more correlated with Factor 1 than with Factor 2. The factorial analysis therefore allowed the identification of two components: technical indicators of quality, and usefulness.

5. Final Considerations

The evaluation of public services has become an important aspect of decision-making by managers and public institutions. Such evaluation increases the probability of obtaining better results and of finding unexpected ones. Monitoring and evaluation are always based on indicators that assist decision-making, allowing for better performance, more rational planning, and clearer, more objective accountability.
This study aimed to identify the indicators commonly adopted in the literature to measure the effectiveness of digital public services from the perception of the user. Based on a literature review and a subsequent perception survey with experts in digital government, two groups of semantically correlated indicators were found. While the first group refers to factors commonly related to the quality of information, usability and technical performance of the service, the second refers to the usefulness of the service. Based on these indicators and the statistical analyses carried out in our work, managers can create a more complete model for evaluating the effectiveness of their services, thus potentially increasing the satisfaction of their users.
The ‘useful’ indicator may also refer to the value that citizens attribute to their experience of public services, which can be understood as “public value”. In other words, it refers to the provision of services that are actually necessary and will be used [40]. It provides a new way of thinking about the evaluation of government activities and a new conceptualization of the public interest and the creation of social value. In a citizen-centered approach, developing services without considering the demands of users may lead to low rates of service use. In this sense, the Public Administration should minimize wasteful and unnecessary public services to save costs that may otherwise generate fiscal stress. Providing appropriate services narrows the state management apparatus to its core functions, whereby it can provide better services and respond to demands for transparency and accountability.
According to [28], due to the nature of public services, effectiveness can be considered based not only on the quality perceived individually by users, but also on its social interest. The social interest of a public service is related to the government's duty and power to guarantee the basic rights of citizens [44]. For essential services, for example those related to health, the usefulness and social need of the service are clear to citizens. On the other hand, the social importance of other services, for example the payment of fees and taxes, is sometimes not so clear, as the direct benefits are not noticeable to the citizen. This observation is important since the application and implications of service evaluation may be contingent on the perspective taken [45].
The literature is still unclear regarding how to measure the personal and social impact of public services. This is a gap for advances in research in this area, especially in developing countries [46]. Such impact could be analyzed, for example, considering the category of service delivery, i.e., the ways in which services are delivered to users. In this way, it is possible to capture a clear articulation of the nature, boundaries, components and elements of specific e-service experiences, and to further investigate the interaction between these factors and the dimensions of service quality [47]. Another possibility is to perform ethnographic studies of the digital transformation of services. Ethnography may be used as a method to collect qualitative information from the observation of people carrying out daily tasks in complex social environments [48]. An ethnographic analysis is pertinent when new technologies are studied, as it helps to find out and explain why many services are not welcomed or used.
Social indicators end up being assessed less frequently in the literature on the evaluation of digital services, but that does not make them less relevant. In recent years, the concept of public value has become popular in the United States, the European Union, Australia and even in developing countries due to its ability to investigate the performance of public services from the point of view of citizens [49].
Although all the studies reviewed identified indicators of effectiveness, very few made recommendations about how to turn these qualitative indicators into quantitative scores; this is also a limitation of this work. Several studies noted difficulties associated with the development of quantitative measures. This work contributes to the discussion on the evaluation of effectiveness as a tool to measure the quality of a public service in complying with the expectations of its users. The next stage of our study is to develop a practical model and apply it with real users. This model is being developed in partnership with the Digital Government Secretariat of the Brazilian Ministry of Economy.

Author Contributions

Data curation, G.V.P., R.A.D.K., V.G.d.M. and G.Y.I.; project administration, G.V.P., R.M.d.C.F. and W.C.M.P.d.S.; supervision, G.V.P., R.M.d.C.F. and W.C.M.P.d.S.; writing—original draft preparation, G.V.P., R.A.D.K., V.G.d.M. and G.Y.I.; writing—review and editing, G.V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This research is part of a cooperation agreement between the Information Technology Research and Application Center (ITRAC) and the Brazilian Ministry of Economy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ICT: Information and Communication Technology
IT: Information Technology
KMO: Kaiser–Meyer–Olkin
TAM: Technology Acceptance Model

References

  1. Sang, S.; Lee, J.; Lee, J. E-government adoption in ASEAN: The case of Cambodia. Internet Res. 2009, 19, 517–534.
  2. AlKalbani, A.; Deng, H.; Kam, B. Organisational security culture and information security compliance for e-government development: The moderating effect of social pressure. In Proceedings of the 19th Pacific Asia Conference on Information Systems, Singapore, 6–9 July 2015; pp. 1–11.
  3. Deng, H.; Karunasena, K.; Xu, W. Evaluating the performance of e-government in developing countries: A public value perspective. Internet Res. 2018, 28, 169–190.
  4. Mergel, I.; Edelmann, N.; Haug, N. Defining digital transformation: Results from expert interviews. Gov. Inf. Q. 2019, 36, 101385.
  5. Alcaide-Muñoz, L.; Bolívar, M.P.R. Understanding e-government research: A perspective from the information and library science field of knowledge. Internet Res. 2015, 4, 633–673.
  6. Kunstelj, M.; Vintar, M. Evaluating the progress of e-government development: A critical analysis. Inf. Polity 2004, 9, 131–148.
  7. Kearns, I. Public Value and E-Government; Institute for Public Policy Research: London, UK, 2004.
  8. European eGovernment Action Plan. 2019. Available online: https://ec.europa.eu/digital-single-market/en/policies/egovernment (accessed on 7 September 2020).
  9. Golubeva, A. Evaluation of regional government portal on the basis of public value concept: Case study from Russian Federation. Proc. ACM Int. Conf. 2007, 232, 394–397.
  10. Carrara, W. Value Creation Analysis for Government Transformation Projects; Technical Report; Ministry of Budget, Public Accounts and Civil Service: Paris, France, 2007.
  11. Liu, J.; Derzsi, Z.; Raus, M.; Kipp, A. E-government project evaluation: An integrated framework. Lect. Notes Comput. Sci. 2008, 5184, 85–97.
  12. Harmon, M.M.; Mayer, R.T. Organization Theory for Public Administration; Little, Brown and Company: Boston, MA, USA, 1986.
  13. Forsund, F.R. Measuring effectiveness of production in the public sector. Omega 2017, 73, 93–103.
  14. Zupic, I.; Čater, T. Bibliometric methods in management and organization. Organ. Res. Methods 2015, 18, 429–472.
  15. Kitchenham, B.; Brereton, P. A systematic review of systematic review process research in software engineering. Inf. Softw. Technol. 2013, 55, 2049–2075.
  16. Methley, A.M.; Campbell, S.; Chew-Graham, C.; McNally, R.; Cheraghi-Sohi, S. PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv. Res. 2014, 579, 14.
  17. Dieste, O.; Padua, A.G. Developing search strategies for detecting relevant experiments for systematic reviews. In Proceedings of the First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), Madrid, Spain, 20–21 September 2007; pp. 215–224.
  18. Davis, F. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results; Sloan School of Management, Massachusetts Institute of Technology: Cambridge, MA, USA, 1986.
  19. Ray, D.; Dash, S.; Gupta, M. A critical survey of selected government interoperability frameworks. Transform. Gov. People Process Policy 2011, 5, 114–142.
  20. Rana, N.P.; Dwivedi, Y.K.; Lal, B.; Williams, M.D.; Clement, M. Citizens’ adoption of an electronic government system: Towards a unified view. Inf. Syst. Front. 2017, 19, 1572–9419.
  21. Alotaibi, S.; Roussinov, D. User acceptance of m-government services in Saudi Arabia: An SEM approach. In Proceedings of the 17th European Conference on Digital Government (ECDG), Lisbon, Portugal, 12 June 2017.
  22. Almalki, O.; Duan, Y.; Frommholz, I. Developing a conceptual framework to evaluate e-government portals’ success. In Proceedings of the 13th European Conference on e-Government, Como, Italy, 13–14 June 2013.
  23. Sigwejo, A.; Pather, S. A citizen-centric framework for assessing e-government effectiveness. Electron. J. Inf. Syst. Dev. Ctries. 2016, 74, 1–27.
  24. Alomari, M.; Woods, P.; Sandhu, K. Predictors for e-government adoption in Jordan: Deployment of an empirical evaluation based on a citizen-centric approach. Inf. Technol. People 2012, 25, 207–234.
  25. DeLone, W.; McLean, E. Information systems success revisited. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 10 January 2002; Volume 8, p. 238.
  26. Kim, H.; Chan, H.; Gupta, S. Value-based adoption of mobile internet: An empirical investigation. Decis. Support Syst. 2007, 43, 111–126.
  27. Ba, S.; Johansson, W. An exploratory study of the impact of e-service process on online customer satisfaction. Prod. Oper. Manag. 2008, 17, 107–119.
  28. Uthaman, V.S.; Ramankutty, V. E-governance service quality of Common Service Centers: A review of existing models and key dimensions. In Proceedings of the 10th International Conference on Theory and Practice of Electronic Governance, New Delhi, India, 7–9 March 2017; pp. 540–541.
  29. Wong, M.S.; Jackson, S. User satisfaction evaluation of Malaysian e-government education services. In Proceedings of the 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), Funchal, Portugal, 27–29 June 2017; pp. 531–537.
  30. Buckley, J. E-service quality and the public sector. Manag. Serv. Qual. Int. J. 2003, 13, 453–462.
  31. Taherdoost, H. Understanding of e-service security dimensions and its effect on quality and intention to use. Inf. Comput. Secur. 2017, 25, 535–559.
  32. Shajari, M.; Ismail, Z. Key factors influencing the adoption of e-government in Iran. In Proceedings of the 2011 Fourth International Conference on Information and Computing, Phuket Island, Thailand, 25–27 April 2011; pp. 457–460.
  33. Roy, M.C.; Chartier, A.; Crête, J.; Poulin, D. Factors influencing e-government use in non-urban areas. Electron. Commer. Res. 2015, 15.
  34. Goldfinch, S.; Gauld, R.; Herbison, P. The participation divide? Political participation, trust in government, and e-government in Australia and New Zealand. Aust. J. Public Adm. 2009, 68, 333–350.
  35. Rufin, R.; Medina, C.; Figueroa, J. Moderating factors in adopting local e-government in Spain. Local Gov. Stud. 2011, 38, 1–19.
  36. Ancarani, A. Towards quality e-service in the public sector: The evolution of web sites in the local public service sector. Manag. Serv. Qual. Int. J. 2005, 15, 6–23.
  37. Loukis, E.; Pazalos, K.; Salagara, A. Transforming e-services evaluation data into business analytics using value models. Electron. Commer. Res. Appl. 2012, 11, 129–141.
  38. European Commission. Web-Based Survey on Electronic Public Services; Cap Gemini and Ernst and Young Report; European Commission: Brussels, Belgium, 2001.
  39. Asmuß, B. What do people expect from public services? Requests in public service encounters. HERMES-Lang. Commun. Bus. 2017, 20, 65.
  40. Bertot, J.C.; Jaeger, P.T.; Grimes, J.M. Using ICTs to create a culture of transparency: E-government and social media as openness and anti-corruption tools for societies. Gov. Inf. Q. 2010, 27, 264–271.
  41. Moreno, L.; Martínez, P.; Muguerza, J.; Abascal, J. Support resource based on standards for accessible e-Government transactional services. Comput. Stand. Interfaces 2018, 58, 146–157.
  42. Ylinen, M.; Pekkola, S. A process model for public sector IT management to answer the needs of digital transformation. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019.
  43. Saxena, S. Factors influencing perceptions on corruption in public service delivery via e-government platform. Foresight 2017, 19, 628–646.
  44. Birben, U.; Gençay, G. Public interest versus forests. CERNE 2018, 24, 360–368.
  45. Hodgkinson, I.R.; Hannibal, C.; Keating, B.W.; Buxton, R.C.; Bateman, N. Toward a public service management: Past, present, and future directions. J. Serv. Manag. 2017, 28, 998–1023.
  46. Twizeyimana, J.D.; Andersson, A. The public value of e-government: A literature review. Gov. Inf. Q. 2019, 36, 167–178.
  47. Rowley, J. An analysis of the e-service literature: Towards a research agenda. Internet Res. 2006, 16, 1066–2243.
  48. Nardi, B.A. The use of ethnographic methods in design and evaluation. In Handbook of Human-Computer Interaction; North-Holland: Amsterdam, The Netherlands, 1997; pp. 361–366.
  49. Karunasena, K.; Deng, H. Critical factors for evaluating the public value of e-government in Sri Lanka. Gov. Inf. Q. 2012, 29, 76–84.
Figure 1. Evolution of the number of papers retrieved from the SCOPUS database with the string in Table 1.
Figure 2. Top ten authors with the highest number of documents retrieved.
Figure 3. Classification of the documents retrieved by (a) type of publication and (b) area.
Figure 4. Co-occurrence of words in the documents retrieved.
Figure 5. Co-citation of publications in the documents retrieved.
Figure 6. Technology Acceptance Model (TAM).
Figure 7. Average scores obtained for each indicator.
Table 1. Initial search string.

TITLE-ABS-KEY ((e-gov OR e-government OR e-governance OR “digital government” OR “civil service”) AND (effectiveness) AND (definition OR concept OR metric OR indicator OR measurement OR scale OR “impact factor” OR variable OR parameter OR evaluation) AND (government OR citizen OR user OR organization OR enterprise OR business))
Table 2. Final search string.

TITLE-ABS-KEY ((e-gov OR e-government OR e-governance OR “digital government” OR “civil service”) AND (effectiveness OR utility OR usefulness) AND (definition OR concept OR metric OR indicator OR measurement OR scale OR “impact factor” OR variable OR parameter OR evaluation) AND (government OR citizen OR user OR organization OR enterprise OR business)) AND (LIMIT-TO (PUBYEAR,2020) OR LIMIT-TO (PUBYEAR,2019) OR LIMIT-TO (PUBYEAR,2018) OR LIMIT-TO (PUBYEAR,2017) OR LIMIT-TO (PUBYEAR,2016) OR LIMIT-TO (PUBYEAR,2015) OR LIMIT-TO (PUBYEAR,2014) OR LIMIT-TO (PUBYEAR,2013) OR LIMIT-TO (PUBYEAR,2012) OR LIMIT-TO (PUBYEAR,2011) OR LIMIT-TO (PUBYEAR,2010) OR LIMIT-TO (PUBYEAR,2009)) AND (LIMIT-TO (PUBSTAGE,“final”) OR LIMIT-TO (PUBSTAGE,“aip”)) AND (LIMIT-TO (DOCTYPE,“cp”) OR LIMIT-TO (DOCTYPE,“ar”)) AND (LIMIT-TO (SUBJAREA,“COMP”) OR LIMIT-TO (SUBJAREA,“SOCI”) OR LIMIT-TO (SUBJAREA,“BUSI”) OR LIMIT-TO (SUBJAREA,“ENGI”) OR LIMIT-TO (SUBJAREA,“DECI”)) AND (LIMIT-TO (LANGUAGE,“English”) OR LIMIT-TO (LANGUAGE,“Portuguese”)) AND (LIMIT-TO (SRCTYPE,“p”) OR LIMIT-TO (SRCTYPE,“j”))
Table 3. Most frequent indicators mapped by the literature review.

Indicator        Frequency
Ease of Use      49
Useful           24
Simple           16
Trustworthy      12
Available        11
Understandable   10
Consistent       10
Fast              9
Table 4. Questionnaire applied in the perception survey with experts.

The effectiveness of digital public services is related to the following indicators:
  • Ease of use
  • Simplicity of use
  • Usefulness of the service
  • Service reliability
  • Service availability
  • Clarity of the information provided
  • Consistency of the layout of the service
  • Agility in delivery
Table 5. Cronbach's Alpha, KMO and Bartlett tests.

Test                                             Value
Cronbach's Alpha                                 0.941
Kaiser–Meyer–Olkin                               0.858
Bartlett's sphericity test: approx. chi-square   399.914
Bartlett's sphericity test: df                   45
Bartlett's sphericity test: sig.                 0.000
Table 6. Paired t-test between ‘useful’ and the other indicators: p refers to the statistical significance and r to the practical significance.

Indicator        t      p      r
Ease of Use      1.914  0.062  0.274
Trustworthy      0.910  0.368  0.134
Available        1.529  0.133  0.222
Simple           1.946  0.058  0.278
Consistent       2.316  0.025  0.326
Fast             2.762  0.008  0.380
Understandable   2.458  0.018  0.344
Table 7. Principal component analysis (loadings below 0.40 omitted).

Indicator            Factor 1   Factor 2
Understandable       0.929      –
Ease of use          0.919      –
Fast                 0.878      –
Simple               0.871      –
Available            0.838      –
Trustworthy          0.783      –
Consistent           0.756      0.474
Useful               –          0.970
Variance explained   64.42%     18.50%
