Infoscience
 
research article

Magma: A Ground-Truth Fuzzing Benchmark

Hazimeh, Ahmad • Herrera, Adrian • Payer, Mathias
December 1, 2020
Proceedings Of The ACM On Measurement And Analysis Of Computing Systems

High scalability and low running costs have made fuzz testing the de facto standard for discovering software bugs. Fuzzing techniques are constantly being improved in a race to build the ultimate bug-finding tool. However, while fuzzing excels at finding bugs in the wild, evaluating and comparing fuzzer performance is challenging due to the lack of metrics and benchmarks. For example, crash count (perhaps the most commonly used performance metric) is inaccurate due to imperfections in deduplication techniques. Additionally, the lack of a unified set of targets results in ad hoc evaluations that hinder fair comparison. We tackle these problems by developing Magma, a ground-truth fuzzing benchmark that enables uniform fuzzer evaluation and comparison. By introducing real bugs into real software, Magma allows for the realistic evaluation of fuzzers against a broad set of targets. By instrumenting these bugs, Magma also enables the collection of bug-centric performance metrics independent of the fuzzer. Magma is an open benchmark consisting of seven targets that perform a variety of input manipulations and complex computations, presenting a challenge to state-of-the-art fuzzers. We evaluate seven widely used mutation-based fuzzers (AFL, AFLFast, AFL++, FairFuzz, MOpt-AFL, honggfuzz, and SymCC-AFL) against Magma over 200,000 CPU-hours. Based on the number of bugs reached, triggered, and detected, we draw conclusions about the fuzzers' exploration and detection capabilities. This provides insight into fuzzer performance evaluation, highlighting the importance of ground truth in performing more accurate and meaningful evaluations.

Type
research article
DOI
10.1145/3428334
Web of Science ID

WOS:000834020900009

Author(s)
Hazimeh, Ahmad  

École Polytechnique Fédérale de Lausanne

Herrera, Adrian

ANU & DST

Payer, Mathias  

École Polytechnique Fédérale de Lausanne

Date Issued

2020-12-01

Publisher

Association for Computing Machinery (ACM)

Published in
Proceedings Of The ACM On Measurement And Analysis Of Computing Systems
Volume

4

Issue

3

Article Number

49

Subjects

fuzzing • benchmark • software security • performance evaluation

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
HEXHIVE  
Funder: European Research Council (ERC)
Grant Number: 850868

Available on Infoscience
April 4, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/248562

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.