DAOmeter: Our Research-Based Approach
Updated: Jun 6
Navigating the world of Web3 can be overwhelming, especially when it comes to managing governance. While transparency is a fundamental principle of Web3, it's not always upheld in practice. Many DAOs function as isolated communities, with limited insight into the inner workings of governance.
At StableLab, we strongly believe that transparency is key to promoting stakeholder alignment and enhancing the integrity of the space. This is why we created DAOmeter, a powerful new tool that gives users valuable insight into how different DAOs operate and how their organizations are structured.
By analyzing key categories and providing detailed scoring, DAOmeter can help shed light on the strengths and weaknesses of different projects. This, in turn, can lead to more informed decision-making and can also help protocols stay updated on where improvements can be made (let’s get those documents in check, frens!).
Our research was conducted using both quantitative and qualitative methods and consisted of multiple iterative stages and feedback loops to establish thirty categories for measuring DAO Governance Maturity.
Our definition of DAO Governance Maturity is adapted to meet the needs of DAOs. As such, we define it as:
A quantifiable indicator of a DAO's ability to operate effectively and sustainably over the long term. A crucial element of this metric is the application of decentralized governance mechanisms that enable organizational resilience and the effective administration of resources, procedures, and human capital, for the DAO to meet its objectives.
When it comes to developing a maturity model that truly works, it's crucial to remain flexible about what it means to be “mature” in the constantly changing environment of DAOs. In fact, we explored this very topic in a recent series on our blog, where we delved into the problems of measuring DAO maturity. By incorporating insights and discoveries from those on the front lines, our model can stay flexible and adapt as needed, keeping pace with the ever-evolving landscape.
Now, let’s take a quick dive into the research process that helped develop DAOmeter.
We used different sources of documentation to gather our data. Most of the sources we used were publicly available from key platforms such as Dune Analytics, Messari, DeFi Pulse, Snapshot, Boardroom, DeepDAO, DAOmasters, and DeFiLlama. To ensure that our insights were well-rounded, we also engaged in conversations with credible contributors from protocols such as MakerDAO, Aave, and Solace, among others.
Our research process consisted of the following eight steps:
Step 1. Preliminary Research
To make sure we were on the right track, we kicked things off by conducting a series of interviews with 10 prominent governance experts. By tapping into their collective wisdom, we wanted to see if any overlapping patterns emerged in terms of how experts understood and defined effective governance practices. This preliminary research provided us with invaluable insights and feedback that helped us validate our research direction.
Aside from our expert interviews, we dove into a deep analysis of 19 major DeFi DAOs. By combing through their public documentation and gathering insights from their contributors, we uncovered the key governance drivers that led to their operational success. We looked into various aspects including treasury management, community engagement, voting processes, and delegation practices to paint a comprehensive picture of how these DAOs are organized. We then categorized this information into broader governance categories.
Step 2. Taxonomy Building
Our research approach relies on a common taxonomy building framework developed by Nickerson et al. (2013), which features a seven-step approach for constructing taxonomies in information systems. Following this, we laid out all the possible characteristics of DAO governance.
Step 3. Rating and Pilot Testing
We took an intuitive approach to assess each dimension on our initial list. Using a simple 1-10 scale, each member scored the dimensions independently. But we didn't stop there. We strongly believe that collaboration is key to achieving excellence. So, we got together and reviewed the scores, discussing and debating the relative importance of each dimension. We sparred over the details, challenging each other's assumptions and perspectives. Through this collaborative process, we arrived at a comprehensive evaluation framework that reflects our team's diverse expertise and collective wisdom.
We then ran a number of pilot tests with internal StableLab team members to help validate this initial list and to brainstorm additional governance categories. Our list originally contained 87 dimensions; after the interviews and ongoing discussions between team members, we reduced it to 53.
Step 4. Surveys
We then decided to proceed with surveys. After completing our initial evaluation framework and identifying a total of 53 governance characteristics, we recognized the need to streamline the process for survey takers. To prevent overburdening them by rating every single characteristic, we condensed the characteristics into 22 abstractions.
Next, we reached out to experts from 40 different DAO communities, including both DeFi and non-DeFi projects. Through a comprehensive survey, we asked them to rate the 22 abstractions on a scale of 1-10. We received a total of 60 responses from these knowledgeable experts, which served as a valuable source of information for our second iteration of scoring.
We also wanted to reward the survey participants, so we created a bounty program on the platform known as “DeWork”.
Step 5. Data Gathering
In combination with the surveys, we also collected complete data sets for 15 prominent DeFi DAOs. This data would help us further refine and inform the categories for our model.
To gather this information, we first scoured publicly available information and documents of projects. If this information was not publicly available, we reached out to members of these communities directly through public community channels.
Step 6. Computation
At this stage, our model was nearly complete, with the dimensions, categories, and abstractions in place and a rough importance score assigned to each dimension. However, we still needed to determine how each characteristic would be scored in terms of its maturity in the grand scheme of things. We performed another iteration of data refinement, assigning a score of 1-10 to each characteristic. We then combined this with the data and metrics we gathered from the 15 DAOs to establish the final score.
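The computation step above can be sketched as a weighted average: each characteristic carries an importance weight (1-10) and a maturity rating (1-10) derived from the gathered DAO data. The characteristic names, weights, and ratings below are purely illustrative assumptions, not actual DAOmeter values.

```python
# Hypothetical illustration of the Step 6 computation.
# name: (importance weight 1-10, observed maturity rating 1-10)
characteristics = {
    "Public list of delegates": (8, 10),
    "Treasury diversification": (9, 7),
    "Security audit frequency": (10, 9),
}

def raw_maturity(chars):
    """Importance-weighted average of maturity ratings, on a 1-10 scale."""
    weighted = sum(weight * rating for weight, rating in chars.values())
    total_weight = sum(weight for weight, _ in chars.values())
    return weighted / total_weight

print(round(raw_maturity(characteristics), 2))
```

A weighted average like this keeps a highly important characteristic (weight 10) from being drowned out by several minor ones, which matches the intent of scoring importance per dimension.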
Step 7. Case Studies
To further enrich our research approach, we supplemented our quantitative analysis with qualitative insights. Through a series of in-depth case studies, we extracted valuable findings from existing studies and reports in the Web3 space. By examining real-world examples and best practices in these areas, we were able to validate the importance of key governance constructs that we had identified as crucial for maturity in our model.
While the case studies did not directly influence our scoring system, they played a key role in enhancing our understanding of the complex world of Web3 governance.
Step 8. Data Processing
We continued to refine and iterate on the model by removing redundant categories or categories that were too ambiguous to work with in potential future iterations of DAOmeter.
In the end, we established thirty categories, which were then grouped under six overarching categories: Community, Voting, Documentation, Security, Treasury, and Proposal.
Now that we've given you a basic overview of our research process, let's take a closer look at how our model works when it comes to actual project scoring. We'll provide a brief walk-through of our protocol review process, using MakerDAO as an example.
How We Score DAOs
Let's explore the breakdown of how we assign points for each of the overarching categories in DAOmeter. We give the highest weighting to the “Community” category, followed by “Voting”, “Documentation”, “Security”, “Treasury”, and lastly, “Proposal”. The final score is calculated out of a maximum of 100 points.
The “Community” category looks at features related to how the DAO’s community is organized. MakerDAO has community stewards, known as the “GovAlpha Core Unit”, and the DAO reports on its activities weekly. Core contributors are paid on a schedule, with each core unit making its own payment arrangements, and overall the protocol has regular contributors whom it rewards.
Although the protocol supports anonymity, the team members of the protocol can be viewed publicly here. According to our standards, this is a good thing, since it instills greater trust in the protocol.
Community Stewards Present - Yes (10 points)
Regular Community Updates - Semi-weekly (10 points)
Contributors Payroll Type - On a schedule (10 points)
Contributors Rewarded - Yes (10 points)
Regular Contributors - Yes (10 points)
Formal Onboarding process - Multiple documentations per role (10 points)
Offboarding Process - Yes (10 points)
Working Groups Present - Yes (10 points)
Anon Core Team - No (10 points)
The total score for “Community” is 100%
The “Voting” category examines aspects like how voting power is accessed in the DAO. For example, MakerDAO voters must own tokens to be able to vote. The protocol also uses a custom voting tool, known as the Governance Voting Portal, and it has delegates who are compensated and who can be viewed on this dashboard.
Voting Power - Tokens owned (5 points)
Access to Voting Power - Tokens owned (5 points)
Voting Tool Type - Custom (10 points)
Delegates - Yes (10 points)
Delegates Compensated - Yes (10 points)
Public List of Delegates - Yes (10 points)
The total score for “Voting” is 83%
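A category percentage is simply points earned over the category's maximum. As a sketch of how the 83% above falls out: the earned points come from the list, while the per-dimension maximum of 10 points is an assumption inferred from the other categories.

```python
def category_score(dimensions):
    """Percentage score: points earned over the category's maximum points."""
    earned = sum(pts for _, pts, _ in dimensions)
    maximum = sum(mx for _, _, mx in dimensions)
    return round(earned / maximum * 100)

# "Voting" dimensions as (name, points earned, assumed maximum points).
voting = [
    ("Voting Power", 5, 10),
    ("Access to Voting Power", 5, 10),
    ("Voting Tool Type", 10, 10),
    ("Delegates", 10, 10),
    ("Delegates Compensated", 10, 10),
    ("Public List of Delegates", 10, 10),
]

print(category_score(voting))  # 50 of 60 points -> 83
```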
The next category we review is “Documentation”. Here, we check whether the protocol’s financial reports are made public, whether the code of the tooling and mechanisms used for the governance process is publicly available, and whether the governance process is documented. We also check whether the DAO’s tokenomics is documented and whether the protocol’s code is open source.
Public Financial Reporting - Yes (10 points)
Public Governance Code Repository - Yes (10 points)
Documentation of Governance - Yes (10 points)
Documentation of Tokenomics - Yes (10 points)
Open Source Code - Yes (10 points)
The total score for “Documentation” is 100%
In the “Security” category, we examine whether the protocol’s security module is centralized or decentralized, how frequently the protocol is audited, whether the protocol has experienced a major loss of funds from an exploit, and whether the DAO has an admin key.
MakerDAO has a decentralized security module, known as “Emergency Shutdown” mode. Its code auditing appears to be continuous. Based on our definition of an “exploit,” MakerDAO has so far not experienced a major loss of funds, and since the protocol went through a major phase of decentralization, it has no active admin keys.
Security Module - Decentralized (10 points)
Frequency of Security Audits - Continuous (10 points)
Catastrophic Loss of Funds Has Occurred - No (10 points)
Admin Key - No (10 points)
The total score for “Security” is 100%
The “Treasury” category looks at how the protocol manages its treasury. MakerDAO relies on a “governance” process and we deem “governance” to be the best practice when it comes to treasury management. By “governance,” here we refer to a treasury that is governed entirely by the community (usually token holders).
MakerDAO also uses custom treasury software, and MakerDAO’s Strategic Finance Core Unit handles the treasury. Furthermore, the native token’s share of the treasury is below 90%, so we reward the protocol with the maximum points here based on our model.
Treasury Type - Governance (10 points)
Treasury Software - Custom (10 points)
Dedicated Persons to Treasury - Yes (10 points)
Treasury Diversification - Native below 90% (10 points)
The total score for “Treasury” is 100%
The “Proposal” category looks at whether preliminary discussions occur on the forum prior to voting and whether proposal data is stored on-chain. Our model allows for both Yes and No answers in these categories. In the case of MakerDAO, preliminary discussions happen here and proposal data is stored on-chain, so here again, we reward the protocol with maximum points.
Preliminary Discussion - Yes (10 points)
Proposal Data On-chain - Yes (10 points)
The total score for “Proposal” is 100%
Finally, when these scores are added up and scaled, MakerDAO gets a total score of 97 out of 100.
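The aggregation can be sketched as a weighted average of the six category percentages. The weights below are hypothetical: the post only gives their ordering (Community highest, Proposal lowest) and that they sum to 100; the actual values are in the DAOmeter methodology report.

```python
# Hypothetical category weights respecting the stated ordering, summing to 100.
WEIGHTS = {
    "Community": 25, "Voting": 20, "Documentation": 18,
    "Security": 15, "Treasury": 12, "Proposal": 10,
}

# Category percentages from the MakerDAO walkthrough above.
category_pct = {
    "Community": 100.0, "Voting": 50 / 60 * 100, "Documentation": 100.0,
    "Security": 100.0, "Treasury": 100.0, "Proposal": 100.0,
}

def total_score(pcts, weights):
    """Weighted average of category percentages, scaled to 100 points."""
    return sum(weights[c] * pcts[c] / 100 for c in weights)

print(round(total_score(category_pct, WEIGHTS)))  # 97 with these example weights
```

With these example weights, the only shortfall is Voting's 10 missed points out of 60, which trims roughly three points from a perfect score and rounds to 97.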
And that’s it!
If you’re curious to learn more about the nitty-gritty details behind the scoring, be sure to check out the detailed DAOmeter methodology report.
You can also view the scores for other protocols and compare them against each other by visiting the DAOmeter website.
We're confident that DAOmeter represents a major step forward in the establishment of good governance standards and practices, and we're excited to see the impact it will have on the Web3 ecosystem. Your support means a lot to us, and we would be thrilled if you could share DAOmeter with others on social media.
Stay tuned for more updates as we continue to develop this powerful new tool!