Infoscience

conference paper

Stealing Machine Learning Models via Prediction APIs

Tramer, Florian; Zhang, Fan; Juels, Ari; Reiter, Michael K.; Ristenpart, Thomas
2016
Proceedings of the 25th USENIX Security Symposium
25th USENIX Security Symposium

Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
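The attack class the abstract describes is simplest for logistic regression: because logit(p) = w·x + b is linear in the parameters, an adversary who can query the API and read back confidence values needs only d + 1 independent queries to recover the d weights and the bias exactly. A minimal sketch in NumPy, assuming an idealized API that returns exact confidence values (the hidden model and `query_api` here are stand-ins for illustration, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Secret target model (unknown to the attacker) ---
rng = np.random.default_rng(0)
d = 5                                   # number of features
w_true = rng.normal(size=d)             # hidden weights
b_true = rng.normal()                   # hidden bias

def query_api(x):
    """Black-box prediction API: returns only a confidence value."""
    return sigmoid(w_true @ x + b_true)

# --- Equation-solving extraction attack ---
# logit(p) = w.x + b is linear in (w, b), so d+1 linearly
# independent probe inputs determine the model exactly.
X = rng.normal(size=(d + 1, d))          # d+1 probe inputs
logits = np.array([np.log(p / (1 - p)) for p in (query_api(x) for x in X)])
A = np.hstack([X, np.ones((d + 1, 1))])  # augment with a bias column
theta = np.linalg.solve(A, logits)       # recover [w; b] in one solve
w_hat, b_hat = theta[:d], theta[d]

print(np.allclose(w_hat, w_true), np.isclose(b_hat, b_true))
```

With exact confidence values the recovery is perfect up to floating-point error, which is why the paper studies omitting or rounding confidence values as a (still imperfect) countermeasure.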

Type
conference paper
Web of Science ID
WOS:000385263000036
Author(s)
Tramer, Florian
Zhang, Fan
Juels, Ari
Reiter, Michael K.
Ristenpart, Thomas
Date Issued
2016
Publisher
USENIX Association
Publisher place
Berkeley
Published in
Proceedings of the 25th USENIX Security Symposium
ISBN of the book
978-1-931971-32-4
Number of pages
18
Start page
601
End page
618
Editorial or Peer reviewed
REVIEWED

Written at
EPFL
EPFL units
LDS
Event name
25th USENIX Security Symposium
Event place
Austin, TX
Event date
AUG 10-12, 2016
Available on Infoscience
November 21, 2016
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/131310
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.