Article

Decoding Decentralized Autonomous Organizations: A Content Analysis Approach to Understanding Scoring Platforms

by Christian Ziegler * and Syeda Rabab Zehra *
TUM School of Management, Technical University of Munich, 80333 Munich, Germany
* Authors to whom correspondence should be addressed.
J. Risk Financial Manag. 2023, 16(7), 330; https://doi.org/10.3390/jrfm16070330
Submission received: 24 May 2023 / Revised: 3 July 2023 / Accepted: 11 July 2023 / Published: 13 July 2023

Abstract: This paper evaluates scoring platforms for decentralized autonomous organizations (DAOs), examining their methodologies and highlighting their strengths and limitations. Using content analysis, we scrutinize the scoring methodologies of the Prime Rating, DAO Meter, and DeFi Safety platforms, covering code, documentation, security, team composition, governance, and regulatory compliance. We also analyze the underlying assumptions and data sources on which these platforms rely. Our investigation furnishes valuable information for stakeholders aiming to evaluate or enhance the DAO scoring methodologies used by scholars and practitioners in the finance and blockchain fields. By contributing to a more rigorous understanding of DAO performance assessment, this paper supports informed decision making and promotes the development of a dependable and efficient scoring system for the decentralized financial ecosystem.

1. Introduction

Assessment of decentralized autonomous organizations (DAOs) is relevant for a range of stakeholders. Investors want to know if buying a DAO token would be a profitable investment. People contributing to DAOs want to know if it is worth investing their time in the organization, or if the DAO is likely to fail. They also want to know what parts of the DAO can be improved by tweaking specific parameters or adding functionality to make it more mature or its treasury safer. Governors of DAOs might need guidelines regarding what constitutes a good decision if they are changing the inner workings of their organizations. Users of a DAO's product (e.g., a decentralized finance platform, stablecoin, or oracle) need to know how trustworthy and secure the product is and what the chances are that oversights in the design of the decentralized product might lead to a loss of funds. Regulators must know how to design rules for the minimal viable structures that a regulated DAO might require in the future.
DAOs are increasingly being studied from a scientific perspective. For example, Laturnus (2023) conducted a cross-sectional regression on transactional data from 2017 to 2022, examining 2377 proposals and analyzing voting, ownership, funds, and business activity to evaluate the performance of DAOs. Wang et al. (2019) addressed the security and privacy challenges of DAOs by proposing a reference model to identify future trends in DAOs. Rikken et al. (2021) reviewed and analyzed 1859 DAOs to produce a systematic definition of these organizations and to develop a governance framework for blockchains and DAOs. Liu et al. (2021) studied governance in cutting-edge DAOs, highlighting problems and their solutions.
DAOs enable participants to remain anonymous or pseudonymous while participating in transactions. Admission to a DAO does not require permission from any central body, which makes it easy for individuals to participate. DAOs operate through code-based smart contracts, which reduces the management and maintenance costs of control systems (Baninemeh et al. 2023).
Even though DAOs have come a long way, they are still in the early stages of development (Schneider et al. 2020). They have the potential to displace centralized intermediaries in various fields that call for complicated coordination, such as asset ownership monitoring, trade finance, the provision of digital identities, and supply chain traceability (Hsieh et al. 2018). DAOs could fundamentally alter how organizations, markets, industries, and businesses function because their decentralization offers transparency and does not require centralized intermediation between the parties for decision making (Bellavitis et al. 2023).
The primary objective of this paper is to analyze and compare the methodologies of three major scoring platforms—Prime Rating, DAO Meter, and DeFi Safety—that assess DAOs. By exploring the unique scoring mechanisms and weightings employed by each platform, this study seeks to understand the implicit priorities each service places on various aspects of DAOs.
The paper is structured as follows. Section 2 provides the theoretical background and studies conducted on DAOs, along with an overview of the research questions. Section 3 describes the applied research methods we used to conduct our research and analyze the platforms. Section 4 details how we applied our methods. Section 5 discusses our findings about the DAO scoring platforms that we analyzed. Section 6 presents the conclusions. The limitations of the research and future research opportunities are discussed in Section 7.

2. Related Work and Research Questions

Several studies have contributed useful research on DAOs. Faqir-Rhazoui et al. (2021) compared the three major platforms that create and manage DAOs, namely, Aragon, DAOstack, and DAOhaus. They compared growth over time, activity over time, voting systems, and funds by analyzing data from 72,320 platform users and 2353 DAO communities, extracted from the main public Ethereum network and xDAI, a layer 2 scaling solution for Ethereum. They found significant variance among the three platforms in all four quantitative metrics. Lommers et al. (2022a) presented an accounting framework for DAOs using double-entry accounting procedures and noted that there is currently no framework for reporting DAO transactions. Baninemeh et al. (2023) researched the DAO platform selection problem using a multi-criteria decision-making model to evaluate different alternatives and criteria for selecting the most suitable platform. They conducted three case studies of DAOs (dOrg, SecureSECO, and Aratoo) to evaluate the decision model's performance.
Fritsch et al. (2022) researched the voting power of the three most important DAO governance systems developed on the Ethereum blockchain: Compound, Uniswap, and ENS. They investigated who holds voting power and the driving factors behind governance decisions by analyzing governance token holders' data, reviewing proposals, and reaching out to delegates. They found that, despite the substantial number of delegates, votes in these DAOs mostly followed the majority, so delegates did not exercise their power.
Wang et al. (2022) conducted an empirical study of the DAOs generated and managed on Snapshot, using data collected from Snapshot and examining the basic concept of DAOs and their operating systems. They found that most of the protocols were in English, which restricted participation by non-English speakers and participants in non-English-speaking areas.
Park et al. (2023) conducted a content analysis of big data related to DAOs using text mining and topic modeling based on Latent Dirichlet Allocation (LDA). They analyzed 3,885,266 aggregated tweets from Twitter that used the hashtag #DAO, along with Reddit posts containing the term “DAO”. They identified the top 100 keywords and 20 specific theme-based keywords on NFTs, finance, gaming, and fundraising from Twitter and Reddit. Their analysis maps the landscape of DAOs and their effects on different industries.
Lommers et al. (2022b) presented a valuation framework for DAOs by developing preliminary DAO-native valuation concepts. They argued that DAO tokens can be valued using either of two approaches: fundamental valuation, which assesses a token on the basis of the DAO's fundamentals, or comparative analysis, which assesses it against metrics. Implementing their framework would help evaluate a DAO's performance in generating value for token stakeholders, as well as promote accountability among the development teams associated with the DAO.
Goldberg and Schär (2023) investigated the impact and nature of voters in DAOs using data from 1414 governance proposals. They found that the disproportionate distribution of voting power could lead to several governance and transparency challenges.
Practitioners and academics have developed various scoring methodologies for DAOs, such as Zizi (2021), DeepDAO (2023), Adjovu (2021), Axelsen (2022), Prime Rating (2023), Baserank (2023), DeFi Safety (2023), DAO Meter (2023), Regner (2022), and Mattila et al. (2022). However, no study has investigated the details of existing scoring methods for DAOs. Therefore, we formulate the following research questions:
  • RQ1: What methodologies are being used by DAO rating platforms?
  • RQ2: What are the similarities and differences among the DAO rating platforms?

3. Methodology

We take a hybrid approach to our research, following the systematic literature review methods explained by Kitchenham (2004) and the qualitative content analysis described by Krippendorff (2019). We conduct our systematic review in three stages: planning the review, conducting the review, and reporting the results. We include both scientific and gray literature. We focus on the methodology reports of the DAO scoring platforms identified in the review for our qualitative content analysis.
We follow the suggestions of Kitchenham (2004) to structure our literature review. In the first stage, the aim of the review is established by justifying the need for it. Next, a review protocol is developed that describes the method for performing the review and the key factors that should be considered. This involves conducting background research on the topic, developing the research questions, devising a search strategy with appropriate keywords, and finding authentic and reliable data sources. Next, we define the criteria for inclusion and exclusion of the gathered data resources and develop a quality assessment checklist to ensure that relevant literature is gathered for the topic. After assessing the gathered data and literature, we identify the methodology (qualitative or quantitative) for summarizing the data (Kitchenham 2004). The second step involves conducting and documenting the review based on the criteria established in the planning stage. The data must be presented in a suitable format that allows readers to understand and interpret them. The third step is to report the review, following the technical reporting structure (Kitchenham 2004). We omit this reporting step and instead use the collected articles and documents for the content analysis.
Following the content analysis methodology of Krippendorff (2019), we initially segment our data through a unitization process. The individual reports constitute our sampling units, while the scoring categories and subcategories within these reports are the recording units. The contextual units provide the necessary backdrop for these categories and subcategories.
Subsequently, we construct a coding scheme (report, category, subcategory, score). This provides a structured framework for translating the raw data into a format conducive to analysis. This coding process is paramount in facilitating the subsequent phase of data examination.
In the analysis stage, we scrutinize the coded data to identify patterns, similarities, and disparities in DAO scoring methodologies across the different reports. This comprehensive and systematic approach provides a solid foundation for our empirical investigation, allowing us to derive meaningful insights from the content.
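To illustrate the coding scheme, the sketch below (Python) shows one possible representation of coded units and how they can be grouped by category for cross-report comparison. The record structure and grouping step are illustrative only; the example entries are taken from Table 1, and this is not the tooling used for the study.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CodedUnit:
    report: str        # sampling unit: the scoring report
    category: str      # recording unit: top-level scoring category
    subcategory: str   # recording unit: question or criterion
    score: float       # points allotted to this unit in the report
    max_score: float   # denominator used in Table 1 (the report's maximum points)

# Example coded units, taken from Table 1
units = [
    CodedUnit("Prime Rating", "Value Proposition", "Novelty of the solution", 15, 250),
    CodedUnit("DAO Meter", "Security", "Security audit frequency", 23, 717),
    CodedUnit("DeFi Safety", "Smart Contract and Team",
              "Are the smart contracts easy to find?", 20, 315),
]

# Group coded units by category to surface patterns, similarities, and disparities
# across the reports (scores rescaled to percentages for comparability).
by_category = defaultdict(list)
for u in units:
    by_category[u.category].append(
        (u.report, u.subcategory, round(100 * u.score / u.max_score, 1))
    )

for category, entries in sorted(by_category.items()):
    print(category, entries)
```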

4. Application of the Methodology

As a first step, we identify the need for conducting this research and determine why a systematic review is necessary for researching DAO scoring platforms. Taking the perspective of investors and DAO members, we examine the need for measures to assess whether DAOs are secure and valuable, and we develop our research questions.
We then search the literature, using the primary resources available on the internet, looking for articles and research papers on IEEE, Google Scholar, Web of Science, Social Science Research Network, and ScienceDirect using keywords such as “DAO scoring platform”, “ranking of DAOs platform”, “framework for scoring DAO”, and “DAO analytics platform”. We also review gray literature, such as articles posted on Medium, related to research on DAO scoring platforms.
We identify 26 scientific articles and 10 articles submitted by practitioners for evaluation. From these, we identify three platforms suitable for our research. We review the methodologies of these platforms, drawing on their white papers, as well as on material posted on the platforms or public software repositories, such as GitHub.
At this stage, we implement the next step in our systematic literature review by determining the exclusion and inclusion criteria for application to the available data. To choose among the research papers, we opt for published papers with a date of publication later than 2017 and authored by well-known, reputed researchers who have conducted other research related to DAOs. To identify the platforms considered for this research, we use the following criteria:
  • Proprietary scores and rank must be available online.
  • Detailed methodology explaining how the score is calculated must be available.
  • The scores must be visible to the public on the dashboard or the websites.
  • The data sources for calculating the score must be mentioned.
  • The number of ranked DAOs on the platforms must be greater than 30.
We employ a relevance sampling technique to systematically lower the number of units required to be considered for analysis (Krippendorff 2019). The criteria are:
  • Platforms are transparent in their scoring approach.
  • Platforms ensure the availability and reliability of their data.
We decide on these criteria, as they promote trust and accountability in evaluating DAOs.
We evaluate nine platforms: Karma Score, The DAO Transparency Index, DeepDAO, DappRadar, LunarCrush, Baserank, Prime Rating, DeFi Safety, and DAO Meter. Only three of the platforms provide a detailed methodology, leaving DAO Meter, DeFi Safety, and Prime Rating as our sampling units. We collected 98 sampling units in total.
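The outcome of this screening step can be summarized as a simple filter. The sketch below (Python) merely encodes the exclusion reasons discussed in Section 5 and keeps the platforms that satisfy all criteria; it is an illustrative summary of the text, not part of the tooling used for this study.

```python
# Screening of the nine candidate platforms. Each entry records the reason a platform was
# excluded (paraphrased from Section 5), or None if it met all inclusion criteria.
exclusion_reason = {
    "Karma Score": "scores individual contributors rather than the DAO itself",
    "The DAO Transparency Index": "index still under development (questionnaire stage)",
    "DeepDAO": "aggregates statistics but does not publish a customized score",
    "DappRadar": "tracks usage and volume but calculates no unique score",
    "LunarCrush": "proprietary score without a detailed public methodology",
    "Baserank": "ratings partly behind a paywall; no standardized aggregation methodology",
    "Prime Rating": None,
    "DeFi Safety": None,
    "DAO Meter": None,
}

selected = [name for name, reason in exclusion_reason.items() if reason is None]
print(selected)  # ['Prime Rating', 'DeFi Safety', 'DAO Meter']
```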
As the last step in our systematic research, we use the data in our content analysis. We transform our data according to our coding scheme for the content analysis. Table 1 provides examples of the coding.

5. Results

Before providing details about the platforms shortlisted for this research, we briefly discuss the platforms we eliminated from our research.
Karma Score1 is a reputation system for DAO contributors, not a DAO platform. It aggregates the activity of each DAO contributor and generates a reputation score which is presented on a dashboard. It has a detailed methodology for calculating the Karma Score, but we removed this platform from our research, since it only calculates the score for contributors.
The DAO Transparency Index2 is currently building the DAO Index, an analytical tool based on a theoretical foundation, to assess how a DAO implements a set of core organizing principles. The DAO index consists of three parts, a self-assessment questionnaire, an open rating database, and a rating table. The work, currently at the questionnaire stage, is still in progress, so we did not research the methodology of the index further.
DeepDAO (2023) compiles a range of qualitative and quantitative statistics relating to DAOs by aggregating, listing, and analyzing financial and governance data. The data are presented on an interactive dashboard accessible to the public. However, we decided to discontinue further research on DeepDAO because it does not show a customized score.
DappRadar3 tracks different decentralized apps (Dapps) across 40+ blockchains in various categories, including DeFi, NFT, and Games. It tracks live user data, transaction volume, and other financial parameters, but does not calculate a unique score, so we omitted it from our research.
LunarCrush4 is a trading platform that offers twenty metrics, including the Galaxy Score and Alt Rank. Even though the Galaxy Score is a proprietary score, its calculation methodology is not detailed, and there is no indication of how the other metrics are weighted in calculating the two ranks. This led us to exclude LunarCrush from our research.
Baserank (2023) is a crowdsourced crypto asset research platform that gathers data by leveraging insights from independent analysts, rating agencies, and experienced investors. The Baserank Rating measures the risk level of a specific crypto asset on a scale of 0 to 100, with assets scored below 30 considered very risky, those scored above 70 considered the least risky, and those scored between 30 and 69 considered moderately risky. One of the main reasons for excluding Baserank from our research was that certain ratings of crypto assets are only accessible to premium members, who are charged a substantial fee to register. Additionally, multiple rating agencies are involved in reviewing and ranking the crypto assets to determine Baserank ratings, but the company’s website does not supply a standardized methodology for aggregating rankings from specific agencies.

5.1. Overview of the Selected Platforms

Prime Rating (2023) provides a permissionless framework for measuring the features and risks of open finance protocols. The rating, on a scale from A+ to D, is calculated as the average of a fundamental report and a technical report, each contributing 50%. The fundamental report measures the overall quality of a given open finance protocol by reviewing its value proposition, tokenomics, team, governance, and regulatory qualities (maximum score, 250 points). The technical review is created in collaboration with DeFi Safety, which evaluates the technical parameters of the protocol (maximum score, 185 points). The technical parameters include code, documentation, testing, security, and access controls. Prime Rating allows several raters to review the same protocol, thus increasing the authenticity of its ratings. The contributors who rated the protocols are identified on the website to increase transparency and trust. Prime Rating has reported on more than 70 decentralized finance protocols.
Figure 1 illustrates Prime Rating's process flow, showing the roles of the raters and the review council. The raters review and score the protocols. The review council ensures that the rating team is credible and supervises the documents evaluated by the raters.
DeFi Safety (2023) is an independent quality and ratings organization that evaluates decentralized finance (DeFi) protocols and scores them using a transparency-based framework. The framework rests on a process quality review (PQR) document, which details every step of the score calculation. The final score of the PQR document is a percentage, calculated by dividing the total achieved points by the total possible points. The maximum point value is 270. The framework contains six major categories: smart contract and team, code documentation, testing, security, admin control, and oracles. Each category has questions that can be answered with a yes or no, or with a percentage value. The questions are weighted so that each makes a specified contribution to the overall score. Benchmarks listed for the percentage-value questions serve as guidelines for rating the answers. The PQR document also shows how scores can be improved. At present, around 250 different DeFi protocols have been rated by DeFi Safety. Figure 2 shows DeFi Safety's process flow.
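To make the aggregation rule concrete, the following sketch (Python) scores a hypothetical protocol against a small PQR-style question list. The questions, weights, and answers are invented for illustration and do not reproduce DeFi Safety's actual questionnaire; only the aggregation rule (achieved points divided by possible points, expressed as a percentage) comes from the description above.

```python
# Illustrative PQR-style scoring: a yes/no question earns its full weight or zero, while a
# percentage question earns a proportional share of its weight. The questions, weights,
# and answers below are hypothetical, not DeFi Safety's actual questionnaire.
questions = [
    # (question, weight in points, answer: bool for yes/no, or float in [0, 1] for percentage)
    ("Are the smart contract addresses easy to find?", 10, True),
    ("Is there an active bug bounty program?", 10, False),
    ("How complete is the software architecture documentation?", 15, 0.6),
]

achieved = sum(
    (weight if answer else 0) if isinstance(answer, bool) else weight * answer
    for _, weight, answer in questions
)
possible = sum(weight for _, weight, _ in questions)

final_score = 100 * achieved / possible  # the PQR final score is a percentage
print(f"{final_score:.1f}%")  # 54.3% for this made-up example
```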
DAO Meter (2023) is a rating platform created by StableLab. It provides a framework that incorporates both qualitative and quantitative methods for scoring the maturity of DAOs and uses numerical data and statistical tools to analyze DeFi protocols. The DAO maturity scoring framework was developed through several iterations, following the taxonomy development method of Nickerson et al. (2017). The maximum score is 717 points, distributed among six categories: treasury, proposal, voting, community, security, and documentation. Each category contains questions that can be answered with yes or no, or with category-specific criteria explained in the description of the categories. A separate section on the platform explains in detail how DAOs can improve their scores. DAO Meter has reviewed and ranked 30 protocols. The contributors who rated the protocols are not identified. Figure 3 depicts DAO Meter's process flow for creating and validating a scoring model.

5.2. Score Overview

To determine the proportional representation of score metrics for three different platforms, with unique methodologies and different maximum scores, we convert the scoring metrics for each platform and their categories into percentages out of 100. Table 2 shows the scaled scores for the three platforms in percentages.
For example, the maximum score for Prime Rating is 435. One of its categories, Team, has a maximum score of 40. To convert the total score, we use this formula:
Score out of 100 = (Score of a specific category / Total score) × 100
Score for the Team category of Prime Rating = (40 / 435) × 100
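Expressed as code, the conversion is a one-line helper; the sketch below is illustrative, and the only inputs are the figures from the worked example above (Prime Rating's Team category, 40 of 435 points).

```python
def scaled_score(category_points: float, total_points: float) -> float:
    """Convert a category's points into a percentage of the platform's total score."""
    return 100 * category_points / total_points

# Worked example from the text: Prime Rating's Team category (40 out of 435 points)
print(round(scaled_score(40, 435), 1))  # 9.2, matching the Team entry in Table 2
```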

5.3. Comparative Analysis

5.3.1. Similarities—Common Subcategories in the Platforms

To assess the three platforms thoroughly, we evaluate the subcategories and questions in their scoring reports. To homogenize the scales for comparison, we convert the scoring metrics for common questions in the subcategories into percentages out of 100. Table 3 shows the scaled scores of subcategories for all three platforms in percentages.
We use the same conversion formula as in Section 5.2, applied to the common subcategory questions identified in the scoring reports:
Score out of 100 = (Score of a specific common subcategory / Total score) × 100
Score for the auditing subcategory of DeFi Safety = (70 / 315) × 100
All three platforms, Prime Rating, DeFi Safety, and DAO Meter, consider the anonymity of the core team to be vital, but weight this feature differently. While Prime Rating and DeFi Safety apportion approximately 3.4% and 3.2% of the rating to the core team’s public identity, DAO Meter allocates a more significant proportion of the rating, 5.7%, to this factor. DeFi Safety also assigns a significant portion of its total score to auditing (more than 20%), whereas DAO Meter allocates a mere 2.3%. In contrast, DAO Meter places a greater emphasis on the evaluation of public repositories (4.5% of the total score) than do Prime Rating and DeFi Safety, which allocate around 1% of their scores, with a minimal difference of 0.4%. DAO Meter and DeFi Safety allocate similar percentages of their scores (3.0% and 3.2%, respectively) to the explicit statement of ownership type; Prime Rating weights this slightly less, allocating 2.3% of its score.
Only DAO Meter and Prime Rating include the presence of active contributors or delegates in their scores, with DAO Meter according this parameter a higher weighting (2.6%) than Prime Rating (1.1%). These two platforms also include admin key possession among their criteria, allocating it 2.3% and 4.6% of their total scores, respectively. This highlights the emphasis placed on secure administration and control in the projects they rate.

5.3.2. Similarities—Common Platform Categories

Security, documentation, and team assessment are categories used by all three platforms, of which security is considered the most essential. DeFi Safety assigns it 28.6% of the total score, DAO Meter assigns it 12%, and Prime Rating assigns it 5%.
DeFi Safety and Prime Rating both emphasize the presence and significance of bug bounty programs in evaluating DAOs or DeFi projects, suggesting that they consider such programs important in the maintenance of smart contract security. They also quantify the adequacy of bug bounty programs, emphasizing the role of monetary incentives in attracting thorough code reviews from the community. DAO Meter highlights other security features of DAOs, such as security module mechanisms, security audit frequency, and whether the organization being rated has a history of catastrophic loss of funds. These shared focus areas underscore the universal importance placed on security measures and standards in evaluating and scoring decentralized financial structures and organizations. The methods for assessing and weighting these factors differ, however.
Documentation. All three platforms include a documentation category in their scoring systems, assessing whether documents are easily accessible to readers. Prime Rating assigns documentation 5% of the total score, DeFi Safety assigns it 13%, and DAO Meter assigns it 19%.
The evaluation of documentation includes the availability and accessibility of white papers, financial reporting, code repositories, documentation of a given protocol's software architecture, and other supporting documents. Prime Rating and DeFi Safety also evaluate documentation of the protocol architecture, while DAO Meter requires that governance and tokenomics be documented and that the financial reporting and the source code of the product and its governance be public. DeFi Safety goes a step further and requires the code of deployed contracts to be public and fully detailed in the documentation.
Team. All of the platforms include this category, but DAO Meter calls it “community”. This category accounts for 32.2%, 14%, and 9% of the overall scores of DAO Meter, DeFi Safety, and Prime Rating, respectively. One of the questions asked by all of the platforms is whether the organizations being rated have non-anonymous development teams. Team anonymity can harm trust between the users and the management team because anonymous developers can disappear quickly, whereas those whose names are made public can be held accountable (DeFi Safety 2023).
Testing. Both Prime Rating and DeFi Safety have a testing category. It accounts for approximately 10% of Prime Rating’s and 16% of DeFi Safety’s total scores. In this category, both platforms include questions related to the testing process for code. The presence of a testing suite which is easily accessible to the general public and the availability of smart contracts are considered the most important features. The availability of instructions and guidelines for testing ensures transparency and visibility and helps in the understanding of the protocol. Also evaluated in the testing category is whether test result reports are available because they enhance the accountability of a protocol.

5.3.3. Differences

DAOs such as MakerDAO, Shapeshift, Aave, and Uniswap are ranked on the different DAO scoring platforms. The scores used by the various platforms to rank the DAOs and the features they consider most important vary. Prime Rating considers the value proposition of the protocol—the value it adds by solving a specific problem in the industry—to be one of the most important categories. The value proposition category includes questions related to the distinctive features of the protocols, including how they compare with the features of other protocols and how the protocol serves the needs of a specific market. The second important feature that Prime Rating considers is the token’s capabilities. This evaluation includes questions related to the equal distribution of the token among markets, the purpose of the token, and whether it can serve the token holders’ purposes in terms of revenue, utility, or governance.
DAO Meter evaluates preliminary discussions of protocols to identify the content or background information that led to their development. This helps to identify what problem the protocol addresses in the market. DAO Meter also evaluates security modules in the infrastructure of protocols that can protect against breaches. This category is important as it involves trust and integrity in governance.
Only DeFi Safety evaluates possible flash loan attacks, reviewing any available information related to this issue. Although flash loans are an essential part of DeFi protocols, the safety of investors and users must be ensured when they are used. DeFi Safety gives points to protocols that include mitigation steps in their documentation.

5.3.4. General Observations

Each platform’s ranking report contains unique focus areas and applies a specific weighting system, reflecting the relative importance the platform assigns to each category. In the Prime Rating report, value proposition is given a maximum of 65 points, reflecting its importance in the evaluation. Tokenomics and governance are each assigned 60 points, also indicating their significant roles. Team and regulatory considerations account for 40 and 25 points, respectively, which emphasizes their roles, but to a lesser degree.
DAO Meter’s most heavily weighted category is community (231 points), which emphasizes community engagement and involvement in DAOs. Voting power and documentation also carry significant weight (assigned 142 and 133.5 points, respectively). Security and treasury are given 86.5 and 84 points, respectively, and proposals are assigned a modest 40 points, reflecting the relative importance of these areas.
DeFi Safety assigns the highest weight to security, allotting it 90 points out of 315, which highlights the primacy of security considerations in its assessment. Admin controls and testing carry 75 and 50 points, respectively, underscoring their significant roles. Smart contract and team and code documentation are assigned 45 and 40 points, respectively, and oracles are given the lowest weight, with 15 points.
Each report implicitly communicates its evaluative priorities by assigning weights to its exclusive focus areas. The scoring system and its weighting thus enhance the granularity and specificity of the evaluation in each report.
Figure 4 demonstrates the impact of varying scoring goals and methodologies on the evaluation of DAOs, presenting the scores of individual DAOs across the different platforms—Prime Rating, DAO Meter, and DeFi Safety. The figure thus exposes the potential range of scores a single DAO may receive under the differing evaluation criteria of each platform, while also providing a nuanced reflection of each platform’s unique areas of emphasis. Through the comparative illustration provided by Figure 4, we can discern how a DAO’s ranking can be distinctly affected by the unique evaluative approach of each scoring platform.

6. Conclusions

Our content analysis explored the scoring platforms that assess DAOs for stakeholders, investors, contributors, governors, and users. We reviewed the scoring methodologies, frameworks, and weightings of three platforms: Prime Rating, DAO Meter, and DeFi Safety. We performed a content analysis on the collected data, using our coding scheme to transform the data by grouping them into categories. We identified similarities and differences among the three platforms by comparing their scoring frameworks and weightings. Although the platforms use different methodological approaches and calculations, we found that all three asked some of the same questions. These questions related to team anonymity, auditing of the protocol, the availability of open-source code, the type of treasury ownership, the presence of governance contributors, and possession of the admin keys. Although some DAOs are ranked on all three platforms (e.g., Uniswap, Aave, Compound, and Balancer), the categories that the three platforms use to evaluate DAOs are distinct. Prime Rating focuses on the solution's novelty, its market fit, and the token's capabilities. DAO Meter evaluates the maturity of DAOs, whereas DeFi Safety emphasizes security and bug mitigation.

7. Future Work

Researchers should examine the connection between the way in which rating platforms score DAOs and the actual performance and security of those DAOs. Currently, the scores are based chiefly on observations and qualitative factors. More reliable methods are needed as new rating platforms emerge, and they must be thoroughly reviewed and understood. Researchers should develop a scoring framework that is based on hard evidence. The scores given to DAOs by existing scoring platforms can be used as starting points, and the performance or security of DAOs can be measured over time.

Author Contributions

Conceptualization, C.Z.; methodology, C.Z.; validation, C.Z. and S.R.Z.; formal analysis, S.R.Z.; investigation, S.R.Z.; resources, S.R.Z.; data curation, S.R.Z.; writing—original draft preparation, S.R.Z.; writing—review and editing, C.Z.; visualization, C.Z.; supervision, C.Z.; project administration, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

We exclusively used public data sources, which can be found in the References section of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. https://Karmahq.xyz, accessed on 19 May 2023.
2.
3. https://dappradar.com, accessed on 19 May 2023.
4. https://lunarcrush.com, accessed on 19 May 2023.

References

  1. Adjovu, Charles. 2021. DAO Index: Certification Marks, First Analysis, and Open Rating Systems. Available online: https://medium.com/@charles.adjovu/dao-index-certification-marks-first-analysis-and-open-rating-systems-886ba3acf559 (accessed on 19 May 2023).
  2. Axelsen, Henrik. 2022. Research Summary: When Is a DAO Decentralized? Available online: https://www.smartcontractresearch.org/t/research-summary-when-is-a-dao-decentralized/1903 (accessed on 19 May 2023).
  3. Baninemeh, E., S. Farshidi, and S. Jansen. 2023. A decision model for decentralized autonomous organization platform selection: Three industry case studies. Blockchain: Research and Applications 4: 100127. [Google Scholar] [CrossRef]
  4. Baserank. 2023. Crypto Assets Review. Available online: https://baserank.io/market-feed/reviews (accessed on 19 May 2023).
  5. Bellavitis, Cristiano, Christian Fisch, and Paul P. Momtaz. 2023. The rise of decentralized autonomous organizations (DAOs): A first empirical glimpse. Venture Capital 25: 187–203. [Google Scholar] [CrossRef]
  6. DAO Meter. 2023. A Rating System for your DAO. Available online: http://www.daometer.xyz/ (accessed on 19 May 2023).
  7. DeepDAO. 2023. DAO Participation Score. Available online: https://deepdao.gitbook.io/deepdao-products/governance-list-the-top-daoists/dao-participation-score (accessed on 19 May 2023).
  8. Faqir-Rhazoui, Youssef, Javier Arroyo, and Samer Hassan. 2021. A comparative analysis of the platforms for decentralized autonomous organizations in the Ethereum blockchain. Journal of Internet Services and Applications 12. [Google Scholar] [CrossRef]
  9. Fritsch, Robin, Marino Müller, and Roger Wattenhofer. 2022. Analyzing Voting Power in Decentralized Governance: Who controls DAOs? arXiv arXiv:2204.01176. [Google Scholar] [CrossRef]
  10. Goldberg, Mitchell, and Fabian Schär. 2023. Metaverse governance: An empirical analysis of voting within Decentralized Autonomous Organizations. Journal of Business Research 160: 113764. [Google Scholar] [CrossRef]
  11. Hsieh, Ying-Ying, Jean-Philippe Vergne, Philip Anderson, Karim Lakhani, and Markus Reitzig. 2018. Bitcoin and the rise of decentralized autonomous organizations. Journal of Organization Design 7: 14. [Google Scholar] [CrossRef] [Green Version]
  12. Kitchenham, Barbara. 2004. Procedures for Performing Systematic Reviews. Keele: Keele University. [Google Scholar]
  13. Krippendorff, Klaus. 2019. Content Analysis: An Introduction to Its Methodology. Thousand Oaks: SAGE Publications, Inc. [Google Scholar]
  14. Laturnus, Valerie. 2023. The Economics of Decentralized Autonomous Organizations. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  15. Liu, Lu, Sicong Zhou, Huawei Huang, and Zibin Zheng. 2021. From Technology to Society: An Overview of Blockchain-Based DAO. IEEE Open Journal of the Computer Society 2: 204–15. [Google Scholar] [CrossRef]
  16. Lommers, Kristof, Muzammil Ghanchi, Kevin Ngo, Qirong Song, and Jiahua Xu. 2022a. DAO Accounting. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  17. Lommers, Kristof, Jiahua Xu, and Teng Andrea Xu. 2022b. A Framework for DAO Token Valuation. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  18. Mattila, Vilma, Prateek Dwivedi, Pratik Gauri, and Md Ahbab. 2022. Mapping out the DAO Ecosystem and Assessing DAO Autonomy. International Journal of Computer Science and Information Technology Research 10: 30–34. [Google Scholar]
  19. Nickerson, Robert C., Upkar Varshney, and Jan Muntermann. 2017. A method for taxonomy development and its application in information systems. European Journal of Information Systems 22: 336–59. [Google Scholar] [CrossRef]
  20. Park, Hyejin, Ivan Ureta, and Boyoung Kim. 2023. Trend Analysis of Decentralized Autonomous Organization Using Big Data Analytics. Information 14: 326. [Google Scholar] [CrossRef]
  21. Prime Rating. 2023. Prime Rating. Available online: https://www.prime.xyz/rating-defi (accessed on 19 May 2023).
  22. Regner, Ferdinand. 2022. How to Assess a DAO? Available online: https://medium.com/smape-capital/how-to-assess-a-dao-11e79988b87e (accessed on 19 May 2023).
  23. Rikken, Olivier, Marijn Janssen, and Zenlin Kwee. 2021. The Ins and Outs of Decentralized Autonomous Organizations (Daos). SSRN Electronic Journal. [Google Scholar] [CrossRef]
  24. DeFi Safety. 2023. The Mark of Quality: Your #1 Source for DeFi Quality Ratings and Certifications. Available online: https://defisafety.com/ (accessed on 19 May 2023).
  25. Schneider, Bettina, Ruben Ballesteros, Pascal Moriggl, and Petra M. Asprion. 2020. Decentralized Autonomous Organizations-Evolution, Challenges, and Opportunities. Available online: https://ceur-ws.org/Vol-3298/paper_BES_2899.pdf (accessed on 19 May 2023).
  26. Wang, Shuai, Wenwen Ding, Juanjuan Li, Yong Yuan, Liwei Ouyang, and Fei-Yue Wang. 2019. Decentralized Autonomous Organizations: Concept, Model, and Applications. IEEE Transactions on Computational Social Systems 6: 870–78. [Google Scholar] [CrossRef]
  27. Wang, Qin, Guangsheng Yu, Yilin Sai, Caijun Sun, Lam Duc Nguyen, Sherry Xu, and Shiping Chen. 2022. An Empirical Study on Snapshot DAOs. arXiv arXiv:2211.15993. [Google Scholar]
  28. Zizi, Othmane. 2021. Ranking DAOs: We Computed Their “Net Community Score” to See How They Stack Up. Available online: https://www.businessofbusiness.com/articles/ranking-daos-we-computed-their-net-community-score-to-see-how-they-stack-up/ (accessed on 19 May 2023).
Figure 1. Prime Rating's process flow for scoring a protocol (Prime Rating 2023).
Figure 2. DeFi Safety's process flow for scoring a protocol (DeFi Safety 2023).
Figure 3. Scoring methodology of DAO Meter (DAO Meter 2023).
Figure 4. DAO scores on different platforms (DAO Meter 2023; DeFi Safety 2023; Prime Rating 2023).
Table 1. Examples of the data coding.

Report | Category | Subcategory | Score
Prime Rating | Value Proposition | Novelty of the solution | 15/250
Prime Rating | Tokenomics | Is the token sufficiently distributed? | 15/250
Prime Rating | Team | Does the team have relevant experience? | 10/250
DAO Meter | Treasury | Treasury type | 21.3/717
DAO Meter | Security | Security audit frequency | 23/717
DAO Meter | Community | Community stewards | 23/717
DeFi Safety | Smart Contract and Team | Are the smart contracts easy to find? | 20/315
DeFi Safety | Oracle | Is front running mitigated by this protocol? (Y/N) | 2.5/315
Table 2. Scaled platform scores.

Category on Platform | Prime Rating | DAO Meter | DeFi Safety
Team | 9.2 | 32.2 | 14.3
Documentation | 4.6 | 18.6 | 12.7
Testing | 10.3 | — | 15.9
Security | 4.6 | 12.1 | 28.6
Code | 11.5 | — | —
Access Control | 11.5 | — | —
Value Proposition | 14.9 | — | —
Tokenomics | 13.8 | — | —
Governance | 13.8 | — | —
Regulatory Compliance | 5.7 | — | —
Admin Controls | — | — | 22.8
Oracles | — | — | 4.8
Voting | — | 19.8 | —
Treasury | — | 11.7 | —
Proposal | — | 5.6 | —
All numbers are percentages of the total score.
Table 3. Scaled platform scores of subcategories.

Category on Platform | Prime Rating | DAO Meter | DeFi Safety
Team | 3.4 | 5.7 | 3.2
Security | 0.0 | 2.3 | 22.2
Code | 1.1 | 4.5 | 1.6
Access Control | 2.3 | 3.0 | 3.2
Voting | 1.1 | 2.5 | 0.0
Governance | 4.6 | 2.3 | 0.0
All numbers are percentages of the total score.
