Abstract

In many cases, we can only gain access to a service by proving we are sufficiently close to a particular location (e.g. in automobile or building access control). In these cases, proximity can be guaranteed through signal attenuation. However, by using additional transmitters, an attacker can relay signals between the prover and the verifier. Distance-bounding protocols are the main countermeasure against such relay attacks; however, these protocols may leak information regarding the location of the prover and/or the verifier running them. In this paper, we consider a formal model for location privacy in the context of distance bounding. In particular, our contributions are threefold: first, we define a security game for location privacy in distance bounding; second, we define an adversarial model for this game, with two adversary classes; finally, we assess the feasibility of attaining location privacy for distance-bounding protocols. Concretely, we prove that for protocols with a beginning or a termination, location privacy is theoretically impossible to achieve against either of the two adversary classes, in the sense that there always exists a polynomially-bounded adversary winning the security game. However, for so-called limited adversaries, who cannot observe the location of arbitrary provers, carefully chosen parameters do, in practice, enable computational location privacy.
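To make the setting concrete, the sketch below illustrates the timing check at the heart of a typical distance-bounding fast phase: the verifier times rapid challenge-response rounds and rejects the prover if any round-trip time implies a distance beyond the bound (distance ≤ c · rtt / 2). This is a minimal, hypothetical illustration only; the function names, the toy response function, and all parameters are assumptions and are not taken from the protocol or model analysed in the paper.

```python
import secrets
import time

# Illustrative sketch, not the paper's protocol: a verifier times `rounds`
# challenge-response exchanges and accepts only if every round trip is
# consistent with the prover being within `max_distance_m` metres.
# All identifiers and parameters here are assumptions for illustration.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0


def implied_distance_m(rtt_s: float) -> float:
    """Upper bound on the prover's distance implied by one round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * rtt_s / 2


def verify_proximity(exchange, expected_response, max_distance_m: float,
                     rounds: int = 32) -> bool:
    """Toy fast phase of a distance-bounding check.

    `exchange(challenge_bit)` sends a challenge and returns the prover's
    response bit; `expected_response(challenge_bit)` is the verifier's
    locally computed correct answer (in a real protocol it would be
    derived from a shared secret established earlier).
    """
    for _ in range(rounds):
        challenge = secrets.randbits(1)
        start = time.perf_counter()
        response = exchange(challenge)
        rtt = time.perf_counter() - start
        if response != expected_response(challenge):
            return False  # wrong response bit
        if implied_distance_m(rtt) > max_distance_m:
            return False  # round trip too slow: prover too far away, or relayed
    return True


if __name__ == "__main__":
    # Toy in-process "prover" that simply flips the challenge bit. Local call
    # overhead dominates the measured round trip, so a generous bound is used
    # purely to make the demo pass.
    honest_prover = lambda c: c ^ 1
    print(verify_proximity(honest_prover, lambda c: c ^ 1, max_distance_m=10_000.0))
```

Note that the relay attacks mentioned in the abstract target exactly this check: the attacker forwards challenges and responses between a distant prover and the verifier, and the timing bound is what limits how far such forwarding can stretch the apparent distance.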
