Infoscience
Conference paper

Magma: A Ground-Truth Fuzzing Benchmark

Hazimeh, Ahmad • Herrera, Adrian • Payer, Mathias

May 31, 2021

SIGMETRICS 2021 - Abstract Proceedings of the 2021 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems

High scalability and low running costs have made fuzz testing the de facto standard for discovering software bugs. Fuzzing techniques are constantly being improved in a race to build the ultimate bug-finding tool. However, while fuzzing excels at finding bugs in the wild, evaluating and comparing fuzzer performance is challenging due to the lack of metrics and benchmarks. For example, crash count (perhaps the most commonly used performance metric) is inaccurate due to imperfections in deduplication techniques. Additionally, the lack of a unified set of targets results in ad hoc evaluations that hinder fair comparison. We tackle these problems by developing Magma, a ground-truth fuzzing benchmark that enables uniform fuzzer evaluation and comparison. By introducing real bugs into real software, Magma allows for the realistic evaluation of fuzzers against a broad set of targets. By instrumenting these bugs, Magma also enables the collection of bug-centric performance metrics independent of the fuzzer. Magma is an open benchmark consisting of seven targets that perform a variety of input manipulations and complex computations, presenting a challenge to state-of-the-art fuzzers. We evaluate seven widely-used mutation-based fuzzers (AFL, AFLFast, AFL++, FairFuzz, MOpt-AFL, honggfuzz, and SymCC-AFL) against Magma over 200,000 CPU-hours. Based on the number of bugs reached, triggered, and detected, we draw conclusions about the fuzzers' exploration and detection capabilities. This provides insight into fuzzer performance evaluation, highlighting the importance of ground truth in performing more accurate and meaningful evaluations.
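The bug-centric metrics named in the abstract (bugs *reached* vs. *triggered*) can be sketched as follows. This is an illustrative toy, not Magma's actual instrumentation: the `bug_canary` oracle and the toy `parse` target are hypothetical names introduced here to show the idea of reporting when an injected bug site is reached, and when the input actually satisfies the bug condition, independently of whether the program crashes.

```python
# Hypothetical sketch of ground-truth bug instrumentation in the style the
# abstract describes (names are illustrative, not Magma's API).
reached = {}    # bug id -> times control flow arrived at the bug site
triggered = {}  # bug id -> times the bug condition was actually satisfied


def bug_canary(bug_id, condition):
    """Oracle injected next to a planted bug: record that the bug site was
    reached, and whether the faulty condition holds, without crashing."""
    reached[bug_id] = reached.get(bug_id, 0) + 1
    if condition:
        triggered[bug_id] = triggered.get(bug_id, 0) + 1


def parse(data: bytes):
    """Toy fuzz target with an injected off-by-one bound check (bug #1)."""
    BUF_SIZE = 8
    # Canary placed ahead of the faulty code: the bug fires exactly when
    # len(data) == BUF_SIZE, the one length the broken check lets through.
    bug_canary(1, len(data) == BUF_SIZE)
    if len(data) <= BUF_SIZE:      # injected bug: should be "<", not "<="
        buf = data[:BUF_SIZE]      # a real C target would now write one
        _ = buf                    # byte past the end of an 8-byte buffer
```

A fuzzer driving `parse` would count bug #1 as *reached* by any input of length 8 or less, but *triggered* only by inputs of exactly 8 bytes, which is what lets the benchmark score exploration and detection separately.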

Type: conference paper
DOI: 10.1145/3410220.3456276
Scopus ID: 2-s2.0-85108546893
Author(s):
  • Hazimeh, Ahmad (École Polytechnique Fédérale de Lausanne)
  • Herrera, Adrian (The Australian National University)
  • Payer, Mathias (École Polytechnique Fédérale de Lausanne)

Date Issued: 2021-05-31
Publisher: Association for Computing Machinery, Inc
Published in: SIGMETRICS 2021 - Abstract Proceedings of the 2021 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems
ISBN of the book: 9781450380720
Start page: 81
End page: 82

Subjects: benchmark • fuzzing • performance evaluation • software security
Editorial or Peer reviewed: REVIEWED

Written at: EPFL
EPFL units: HEXHIVE
Event name: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems
Event place: Virtual. Online, China
Event date: 2021-06-14 - 2021-06-18

Available on Infoscience: April 4, 2025
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/248660