
SecureDAO: Securing Web3 Decentralized Apps with a Decentralized Verification Platform

We are building a decentralized vulnerability and bug detection platform for smart contracts, DApps, and NFTs, where users can submit security problems (e.g., a smart contract) and clues (e.g., a suspicious address or transaction), and researchers and security providers can upload automated checkers and annotated datasets in exchange for incentives.

  • Crypto / Web3
  • Ethereum

Like traditional computer programs, smart contracts may contain vulnerabilities that allow attackers to steal funds from contract owners. Developers therefore need to locate vulnerabilities before contract deployment.

To the best of our knowledge, existing automated analysis tools detect only a small percentage of vulnerable contracts, and they output large numbers of false-positive reports [1]. Moreover, existing tools require complex configuration, which deters developers from using them.

To address these issues, we are building a decentralized vulnerability detection platform that rewards researchers and security providers for uploading code checkers. Note that well-written checkers can discover many vulnerabilities, while poorly written checkers may mislead developers.

Therefore, we design a checker evaluation protocol that incorporates multiple validators to examine checker quality. Specifically, validators test checkers against public annotated datasets and then upload the results to the Ethereum blockchain. After collecting the detection results, the platform rates each checker's quality and grants corresponding incentives (e.g., tokens) to checker writers.
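The evaluation step above can be sketched off-chain as follows. This is a minimal illustration, not the on-chain protocol: the majority-vote aggregation, the F1 scoring rule, and the pro-rata reward split are all assumptions chosen for concreteness.

```python
# Hypothetical checker-evaluation sketch: validators report per-sample
# verdicts for a checker run against an annotated dataset; the platform
# aggregates verdicts by majority vote, scores the checker with F1 against
# the dataset labels, and splits a reward pool in proportion to scores.

from collections import Counter

def majority_verdicts(validator_reports):
    """Combine per-validator verdict lists into one list by majority vote."""
    combined = []
    for verdicts in zip(*validator_reports):
        combined.append(Counter(verdicts).most_common(1)[0][0])
    return combined

def f1_score(predicted, labels):
    """F1 of the aggregated verdicts against the dataset's ground-truth labels."""
    tp = sum(p and l for p, l in zip(predicted, labels))
    fp = sum(p and not l for p, l in zip(predicted, labels))
    fn = sum(l and not p for p, l in zip(predicted, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def reward(checker_score, pool_tokens, total_score):
    """Split a token reward pool among checkers pro rata to their scores."""
    return pool_tokens * checker_score / total_score if total_score else 0.0
```

In practice the validators' reports and the resulting scores would be recorded on-chain, so that the rating and payout are auditable.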

In addition to checkers, we also allow users to share annotated datasets on our platform. In the above evaluation protocol, annotated datasets are the crucial resource for confirming checker quality, and they also help checker writers improve their checkers. To the best of our knowledge, there are no large-scale annotated datasets for smart contracts, so encouraging users to upload annotated datasets is crucial.

Different datasets naturally have different values. A valuable dataset tends to differ from existing datasets, offering extra information to validate or improve checkers, while a redundant dataset closely resembles existing ones and offers no such information. Therefore, we also design a dataset evaluation protocol. Its key insight is to analyze detection controversy among checkers. Suppose some top checkers can detect most vulnerabilities in existing datasets; if these checkers output conflicting results on an uploaded dataset, the dataset may cover a corner case and thus be a potentially valuable resource for validating or improving checkers.

Because a controversial dataset could also simply be mislabeled, we need a subsequent manual check, or we can observe the rewriting history of checkers to confirm its quality: if many checkers are rewritten and change their detection results on the controversial dataset, we gain confidence in the dataset's quality. We combine these approaches to rate dataset quality and grant corresponding incentives (e.g., tokens) to dataset uploaders.
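The controversy analysis can be sketched as a simple disagreement measure over the top checkers' verdicts. The fraction-based score and the flagging threshold below are illustrative assumptions, not part of the protocol specification.

```python
# Hypothetical controversy score for an uploaded dataset: run the current
# top checkers on each sample and measure how split their verdicts are.
# A near-even split suggests a corner case worth manual review or tracking
# through subsequent checker rewrites.

def controversy(verdicts):
    """Disagreement in [0, 1]: 0 = unanimous verdicts, 1 = even split."""
    if not verdicts:
        return 0.0
    positive = sum(verdicts) / len(verdicts)
    return 1.0 - abs(2 * positive - 1.0)

def flag_for_review(samples_verdicts, threshold=0.5):
    """Return indices of samples whose top-checker verdicts are controversial."""
    return [i for i, v in enumerate(samples_verdicts)
            if controversy(v) >= threshold]
```

Flagged samples would then go to manual review, or be held until enough checker rewrites flip their verdicts to confirm the labels.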

Furthermore, we integrate multiple detectors to provide detectors-as-a-service, allowing developers to readily find vulnerabilities and bugs in their submitted contracts or addresses.
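The aggregation behind detectors-as-a-service might look like the sketch below, which runs every registered detector on a submission and merges their findings. The detector interface (a function from source code to a set of findings) is an assumption for illustration.

```python
# Hypothetical detectors-as-a-service front end: run each registered
# detector on a submitted contract and merge their findings, deduplicating
# reports that flag the same vulnerability class at the same location.

def run_detectors(detectors, contract_source):
    """detectors: name -> function(source) returning (vuln_type, location) pairs.
    Returns (finding, [detector names]) pairs, most-confirmed findings first."""
    merged = {}
    for name, detect in detectors.items():
        for finding in detect(contract_source):
            merged.setdefault(finding, []).append(name)
    # Findings confirmed by more detectors are listed first.
    return sorted(merged.items(), key=lambda kv: -len(kv[1]))
```

Ranking findings by how many detectors agree gives developers a rough confidence signal and pushes likely false positives from any single tool toward the bottom of the report.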

[1] T. Durieux, J. F. Ferreira, R. Abreu, et al., "Empirical review of automated analysis tools on 47,587 Ethereum smart contracts," in Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (ICSE), 2020, pp. 530-541.