SafeAMC: Adversarial training for robust modulation classification models
Deep Neural Networks (DNNs) have achieved promising performance on many tasks in communication systems, such as modulation classification. However, these models are susceptible to adversarial perturbations, i.e., imperceptible additive noise crafted to induce misclassification. This raises questions not only about their security but also about the general trust in their predictions. We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation classification (AMC) models. We show that current state-of-the-art models can effectively benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We use adversarial perturbations to visualize the learned features and find that, when adversarial training is enabled, signal symbols are shifted towards the nearest classes in constellation space, as in maximum-likelihood methods. This confirms that robust models are not only more secure but also more interpretable, building their decisions on signal statistics that are actually relevant to modulation classification.
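To illustrate the adversarial fine-tuning described above, the following is a minimal sketch in PyTorch. The network architecture, the I/Q input shape (2 x 128 samples), the perturbation budget, and the single-step FGSM attack are assumptions made for illustration, not the authors' exact setup.

```python
# Minimal sketch of adversarial fine-tuning for an AMC-style classifier.
# The model, signal shape, and epsilon are illustrative assumptions only.
import torch
import torch.nn as nn


class TinyAMC(nn.Module):
    """Hypothetical 1-D CNN over I/Q samples (2 channels x 128 samples)."""

    def __init__(self, num_classes: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))


def fgsm_perturb(model, x, y, eps, loss_fn):
    """Craft an adversarial example with one gradient-sign step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


def adversarial_finetune_step(model, optimizer, x, y, eps=0.01):
    """One adversarial-training step: fit the model on perturbed signals."""
    loss_fn = nn.CrossEntropyLoss()
    model.eval()  # craft perturbations against the current model state
    x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)  # fine-tune on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyAMC()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 2, 128)          # dummy batch of I/Q signals
    y = torch.randint(0, 11, (8,))      # dummy modulation labels
    print(adversarial_finetune_step(model, opt, x, y))
```

In practice the same step could use a stronger multi-step attack (e.g., PGD) and be mixed with clean batches; the sketch only shows the core loop of perturbing inputs and training on them.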
WOS: 000918827600321
Publication date: 2022-01-01
ISBN: 978-90-827970-9-1
Place of publication: New York
Published in: European Signal Processing Conference
Pages: 1636-1640
Review status: REVIEWED
| Event name | Event place | Event date |
| --- | --- | --- |
| European Signal Processing Conference | Belgrade, SERBIA | Aug 29-Sep 02, 2022 |