Modern large-scale data platforms manage colossal amounts of data generated by an ever-increasing number of concurrent users. Geo-replicated and sharded key-value data stores play a central role in building such platforms. As the strongest consistency model proven not to compromise availability, causal consistency (CC) is well positioned to address the needs of such large-scale systems while preventing a number of ordering anomalies. Transactional Causal Consistency (TCC) augments CC with richer transactional semantics, simplifying the development of distributed applications. Efficiently implementing CC/TCC, however, is challenging. In this thesis we introduce several protocols and designs for high-performance causally consistent geo-replicated key-value data stores in different settings.

First, we present a new approach to implementing CC in geo-replicated data stores, called Optimistic Causal Consistency (OCC). By introducing a technique that we call client-assisted lazy dependency resolution, OCC allows updates replicated to a remote data center to become visible immediately, without first checking that their causal dependencies have been received. We further propose a recovery mechanism that lets an OCC system fall back on a pessimistic protocol and continue operating during network partitions. We show that OCC improves data freshness while offering performance comparable to or better than that of its pessimistic counterpart.

Next, we address the problem of providing low-latency TCC reads under full replication. We present Wren, the first TCC system that simultaneously achieves low latency, by implementing nonblocking read operations, and scales out efficiently by sharding. Wren introduces new protocols for transaction execution, dependency tracking, and stabilization.
The transaction protocol supports nonblocking reads by building a transaction snapshot as the union of a fresh causal snapshot installed by every partition in the local data center and a client-side cache holding the client's writes that are not yet included in that snapshot. The dependency tracking and stabilization protocols require only two scalar timestamps, yielding efficient resource utilization and scalability in the number of shards and replication sites under full replication. In return for these benefits, Wren slightly increases the visibility latency of updates.

Finally, we present PaRiS, the first TCC system that supports partial replication and provides low latency by implementing nonblocking parallel read operations. PaRiS relies on a novel dependency-tracking protocol called Universal Stable Time (UST). By means of a lightweight background gossip process, UST identifies a snapshot of the data that has been installed by every data center in the system. PaRiS equips clients with a private cache in which they store their own updates that are not yet reflected in that snapshot. The combination of the UST-defined snapshot and the client-side cache enables interactive transactions that can consistently read from any replication site without blocking. Moreover, PaRiS requires only one timestamp to track dependencies and define transactional snapshots, achieving resource efficiency and scalability in the number of shards and replication sites in a partial replication setting.
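The snapshot-plus-cache read path shared by Wren and PaRiS can be illustrated with a minimal sketch. All names here are hypothetical and the code is a simplification under stated assumptions, not the thesis protocols: a single timestamp per version, a stable snapshot taken as the minimum of the stable times gossiped by the replication sites (as in UST), and a per-client cache that overlays the client's own writes not yet covered by that snapshot, so reads never block.

```python
# Sketch of nonblocking snapshot reads with a client-side write cache.
# Hypothetical names; single scalar timestamps; no sharding or networking.

from collections import defaultdict


class Server:
    """One partition holding multiversioned key-value data."""

    def __init__(self):
        # key -> sorted list of (timestamp, value) versions
        self.versions = defaultdict(list)

    def write(self, key, value, ts):
        self.versions[key].append((ts, value))
        self.versions[key].sort()

    def read(self, key, snapshot_ts):
        # Newest version within the snapshot; never blocks, because the
        # snapshot is already installed by every site.
        visible = [(ts, v) for ts, v in self.versions[key] if ts <= snapshot_ts]
        return visible[-1][1] if visible else None


def universal_stable_time(stable_times):
    # UST-style bound: the snapshot installed by every data center is the
    # minimum of the stable times exchanged via background gossip.
    return min(stable_times)


class Client:
    """Caches its own writes that the stable snapshot does not yet cover."""

    def __init__(self, server):
        self.server = server
        self.cache = {}  # key -> (ts, value) for this client's recent writes

    def write(self, key, value, ts):
        self.server.write(key, value, ts)
        self.cache[key] = (ts, value)

    def read(self, key, snapshot_ts):
        if key in self.cache:
            ts, value = self.cache[key]
            if ts > snapshot_ts:
                # Read-your-writes: prefer the private cache for writes
                # newer than the stable snapshot.
                return value
            # The snapshot now covers this write; the cache entry can go.
            del self.cache[key]
        return self.server.read(key, snapshot_ts)
```

For example, with site stable times 5, 3, and 7 the snapshot is 3; a client that wrote a version at timestamp 10 still reads its own value from the cache, while other clients read the latest stable version without blocking.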