Abstract

Shared L1 memories are of interest for tightly coupled processor clusters in programmable accelerators, as they provide a convenient shared-memory abstraction while avoiding cache-coherence overheads. The performance of a shared L1 memory critically depends on the architecture of the low-latency interconnect between processors and memory banks, which must provide ultra-fast access to the largest possible L1 working set. The advent of 3D technology offers new opportunities to improve interconnect delay and form factor. In this paper we propose a network architecture, 3D-LIN, based on 3D integration technology. The network can be configured according to user specifications and technology constraints to provide fast access to L1 memories on multiple stacked dies. Results extracted from the physical synthesis of 3D-LIN allow us to explore trade-offs between memory size and network latency, from a planar design to multiple memory layers stacked on top of logic. When the system's memory requirements lead to a memory area occupying 60% of the chip, stacking two memory layers on the logic reduces the form factor by more than 60%. Latency reduction is also promising: the network itself, configured to connect 16 processing elements to 128 memory banks on two memory layers, is 24% faster than the planar system.
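
As a rough illustration of the form-factor arithmetic (a minimal sketch with assumed area fractions, not figures taken from the paper's physical synthesis): if memory banks occupy 60% of a planar die and logic the remaining 40%, splitting the memory across two stacked layers leaves a footprint bounded by the larger of the logic die and one memory layer, which in this simple model is roughly a 60% reduction, consistent in magnitude with the reported result. The function name and numbers below are illustrative assumptions.

```python
# Rough footprint estimate when memory is moved onto stacked layers above the logic die.
# Assumed inputs (memory = 60% of planar area, 2 memory layers) are illustrative only;
# the paper's numbers come from physical synthesis of 3D-LIN, not from this model.

def stacked_footprint(planar_area: float, mem_fraction: float, mem_layers: int) -> float:
    """Footprint when memory is split evenly across `mem_layers` dies stacked on logic."""
    logic_area = planar_area * (1.0 - mem_fraction)
    mem_area_per_layer = planar_area * mem_fraction / mem_layers
    # The stack's footprint is set by its largest die.
    return max(logic_area, mem_area_per_layer)

planar = 1.0  # normalized planar chip area
footprint = stacked_footprint(planar, mem_fraction=0.6, mem_layers=2)
reduction = 1.0 - footprint / planar
print(f"footprint: {footprint:.2f}, reduction: {reduction:.0%}")  # ~60% smaller footprint
```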
