Transactional memory (TM) is a promising paradigm for concurrent programming, in which the threads of an application communicate, and synchronize their actions, via in-memory transactions. Each transaction can perform any number of operations on shared data, and then either commit or abort. When a transaction commits, the effects of all its operations become immediately visible to other transactions; when it aborts, those effects are entirely discarded. Transactions are atomic: programmers get the illusion that every transaction executes all of its operations instantaneously, at some single and unique point in time.

The TM paradigm has raised a lot of hope for mastering the complexity of concurrent programming. The aim is to provide programmers with an abstraction, i.e., the transaction, that makes handling concurrency as easy as with coarse-grained locking, while exploiting the parallelism of the underlying multi-core or multi-processor hardware as effectively as hand-crafted fine-grained locking (which is typically an engineering challenge). It is thus not surprising to see a large body of work devoted to implementing this paradigm efficiently and to integrating it with common programming languages. Very little work, however, has been devoted to the underlying theory and principles.

The aim of this thesis is to provide theoretical foundations for transactional memory. This includes defining a model of a TM, as well as answering precisely when a TM implementation is correct, what kinds of properties it can ensure, what the power and limitations of a TM are, and what inherent trade-offs are involved in designing a TM algorithm. In particular, this manuscript contains precise definitions of properties that capture the safety and progress semantics of TMs, as well as several fundamental results related to the computability and complexity of TM implementations.
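The commit/abort semantics described above can be illustrated with a toy sketch (not an implementation from this thesis, and deliberately simplistic): each transaction buffers its writes in a local log, invisible to other transactions; on commit, it validates that nothing it read has since been overwritten by a concurrent commit, and only then publishes all its writes at once. If validation fails, the transaction aborts and its buffered effects are discarded. All names here (`TM`, `Txn`, the per-key version counters) are illustrative assumptions, not terminology from the text.

```python
import threading

class TM:
    """Shared transactional state: values plus a version counter per key."""
    def __init__(self):
        self.values = {}    # committed values
        self.versions = {}  # per-key version, bumped on every committed write
        self.lock = threading.Lock()  # protects commit-time validation

class Txn:
    """One transaction: reads record versions, writes are buffered locally."""
    def __init__(self, tm):
        self.tm = tm
        self.reads = {}   # key -> version observed at read time
        self.writes = {}  # buffered writes, invisible until commit

    def read(self, key):
        if key in self.writes:            # read-your-own-writes
            return self.writes[key]
        with self.tm.lock:
            self.reads[key] = self.tm.versions.get(key, 0)
            return self.tm.values.get(key)

    def write(self, key, value):
        self.writes[key] = value          # buffered; not yet visible

    def commit(self):
        with self.tm.lock:
            # Validate: abort if any key we read was committed to since.
            for key, ver in self.reads.items():
                if self.tm.versions.get(key, 0) != ver:
                    return False          # abort: buffered effects discarded
            # Publish: all writes become visible at a single point in time.
            for key, value in self.writes.items():
                self.tm.values[key] = value
                self.tm.versions[key] = self.tm.versions.get(key, 0) + 1
            return True
```

A conflicting interleaving then behaves as the text describes: if transaction `t1` reads `x`, a second transaction commits a write to `x`, and `t1` then tries to commit its own write to `x`, `t1`'s commit fails and none of its effects are applied.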
While the focus of the thesis is on theory, its goal is to capture the common intuition behind the semantics of TMs and the properties of existing TM implementations.