State machine replication is becoming an increasingly popular technique for online services to ensure fault tolerance using commodity hardware. This has led to renewed interest in its throughput, as these services typically have a large number of users. Recent work has shown how to improve the throughput of the replication protocol using techniques such as ring topologies, IP multicast, and rotating leaders. When deployed in modern fast networks, the resulting systems achieve unprecedented levels of throughput, but they are increasingly limited by the CPU of the replicas, especially with small client requests. The problem is not a lack of CPU performance, but rather the inability of typical implementations to effectively use the multiple cores of modern multi-core CPUs. In this work, we show how to architect a replicated state machine whose performance scales with the number of cores in the nodes. We do so by applying several good practices of concurrent programming to the specific case of state machine replication, including staged execution, workload partitioning, actors, and non-blocking data structures. We describe and evaluate a Java prototype of our architecture, based on the Paxos protocol. With a workload consisting of small requests, we achieve a sixfold improvement in throughput using 8 cores. More generally, in all our experiments we consistently reached the limits of the network subsystem using up to 12 cores, and observed no degradation with up to 24 cores. Furthermore, profiling our implementation shows that even at peak throughput, contention between threads is minimal, suggesting that throughput would continue to scale given a faster network.
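To make the staged-execution idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation; all class and stage names here are hypothetical). Two stages of a replica, ordering and execution, run on separate threads and communicate through a queue, so each stage can occupy its own core:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical two-stage pipeline: an "ordering" stage hands requests to
// an "execution" stage over a queue, decoupling the two onto separate cores.
public class StagedPipeline {
    private static final String POISON = "__DONE__"; // sentinel to stop the consumer

    public static List<String> process(List<String> requests) throws InterruptedException {
        BlockingQueue<String> ordered = new LinkedBlockingQueue<>();
        List<String> executed = Collections.synchronizedList(new ArrayList<>());

        // Stage 1: the ordering thread enqueues requests in delivery order.
        Thread orderer = new Thread(() -> {
            for (String r : requests) ordered.add(r);
            ordered.add(POISON);
        });

        // Stage 2: the execution thread applies each request to the state machine.
        Thread executor = new Thread(() -> {
            try {
                for (String r; !(r = ordered.take()).equals(POISON); ) {
                    executed.add("applied:" + r);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        orderer.start();
        executor.start();
        orderer.join();
        executor.join();
        return executed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(process(List.of("a", "b", "c")));
    }
}
```

Because a single queue feeds a single execution thread, requests are applied in the order they were delivered, which is the invariant state machine replication requires; the paper's architecture additionally uses workload partitioning, actors, and non-blocking data structures to spread work across more cores.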