Title: Bit-Line Computing for CNN Accelerators Co-Design in Edge AI Inference
Authors: Rios, Marco; Ponzina, Flavio; Levisse, Alexandre Sébastien Julien; Ansaloni, Giovanni; Atienza Alonso, David
Date: 2023-01-13
DOI: 10.1109/TETC.2023.3237914
URL: https://infoscience.epfl.ch/handle/20.500.14299/193688

Abstract: By supporting the access of multiple memory words at the same time, Bit-line Computing (BC) architectures allow the parallel execution of bit-wise operations in-memory. At the array periphery, arithmetic operations are then derived with little additional overhead. Such a paradigm opens novel opportunities for Artificial Intelligence (AI) at the edge, thanks to the massive parallelism inherent in memory arrays and the extreme energy efficiency of computing in-situ, hence avoiding data transfers. Previous works have shown that BC brings disruptive efficiency gains when targeting AI workloads, a key metric in the context of emerging edge AI scenarios. This manuscript builds on these findings by proposing an end-to-end framework that leverages BC-specific optimizations to enable high parallelism and aggressive compression of AI models. Our approach is supported by a novel hardware module performing real-time decoding, as well as new algorithms to enable BC-friendly model compression. Our hardware/software approach results in 91% energy savings (under a 1% accuracy degradation constraint) with respect to state-of-the-art BC approaches.

Keywords: Edge Artificial Intelligence; In-Memory Computing; Hardware/Software Co-Design; Convolutional Neural Networks; Low-Power Software Optimization
Type: text::journal::journal article::research article
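As a minimal illustration of the bit-line computing principle summarized in the abstract (not the paper's actual circuit design), the Python sketch below models how simultaneously activating two memory rows yields bit-wise AND on the bit-line and NOR on the complementary bit-line, from which XOR and a full addition can be derived at the array periphery. The word width, function names, and the ripple-carry periphery logic are illustrative assumptions.

WORD_BITS = 8  # illustrative word width (assumption)

def bitline_read(row_a: int, row_b: int) -> tuple[int, int]:
    """Model a simultaneous two-row activation: the bit-line (BL)
    stays high only where both cells hold 1 -> bit-wise AND; the
    complementary bit-line (BLB) stays high only where both cells
    hold 0 -> bit-wise NOR."""
    mask = (1 << WORD_BITS) - 1
    bl = row_a & row_b             # AND sensed on BL
    blb = ~(row_a | row_b) & mask  # NOR sensed on BLB
    return bl, blb

def periphery_add(row_a: int, row_b: int) -> int:
    """Derive an addition at the array periphery from the in-memory
    AND/NOR results (illustrative logic): XOR = NOT(AND OR NOR) gives
    the partial sum, AND gives the carries, then ripple until done."""
    mask = (1 << WORD_BITS) - 1
    bl_and, bl_nor = bitline_read(row_a, row_b)
    result = ~(bl_and | bl_nor) & mask  # XOR of the two words
    carry = (bl_and << 1) & mask
    while carry:
        result, carry = result ^ carry, ((result & carry) << 1) & mask
    return result

# Usage: addition of two stored words, modulo the word width.
assert periphery_add(0b00101101, 0b01010011) == (45 + 83) & 0xFF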