A learning-based lossless event data compression for computer vision applications
Event-based computer vision is gaining widespread adoption. As event sensors have advanced, the volume of data they produce has grown manyfold, creating a need for compression. This paper introduces a novel deep-learning-based lossless codec for event data. The events are represented as a point cloud whose coordinates are the spatial dimensions x and y and the temporal dimension t. An adaptive octree is then built over this point cloud, compacting it without introducing any loss by coding the occupancy map. The binary representation of the octree structure, which is a denser representation of the event data, is then entropy-coded with a learning-based model: a deep neural network supplies the probability model of a hyperprior-based arithmetic coder. The proposed hyperprior architecture comprises two neural networks in an auto-encoder structure, which allows the source statistics to be captured effectively.
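The octree step described above can be illustrated with a minimal sketch: events (x, y, t) are normalized into a unit cube and recursively split into eight octants, emitting one occupancy byte per internal node (bit i set when child octant i holds at least one event). This is a generic octree occupancy serializer written for illustration, not the paper's actual adaptive-octree implementation; the function name and depth parameter are assumptions.

```python
import numpy as np

def build_octree_occupancy(points, depth):
    """Illustrative sketch: partition a 3-D event point cloud (x, y, t)
    into an octree and emit one occupancy byte per internal node.
    Bit i of a byte is set when child octant i contains >= 1 event."""
    pts = points.astype(np.float64)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    # Normalize coordinates into the unit cube so splitting is uniform.
    pts = (pts - lo) / np.maximum(hi - lo, 1e-12)

    occupancy = []  # breadth of the tree serialized in depth-first order

    def recurse(pts, level):
        if level == depth or len(pts) == 0:
            return
        # Octant index: one bit per axis at this level (x -> bit 2,
        # y -> bit 1, t -> bit 0).
        idx = (pts[:, 0] >= 0.5).astype(int) * 4 \
            + (pts[:, 1] >= 0.5).astype(int) * 2 \
            + (pts[:, 2] >= 0.5).astype(int)
        byte = 0
        children = []
        for o in range(8):
            mask = idx == o
            if mask.any():
                byte |= 1 << o
                # Rescale the occupied octant back to the unit cube.
                child = pts[mask] * 2.0
                child[:, 0] -= (o >> 2) & 1
                child[:, 1] -= (o >> 1) & 1
                child[:, 2] -= o & 1
                children.append(child)
        occupancy.append(byte)
        for child in children:
            recurse(child, level + 1)

    recurse(pts, 0)
    return occupancy
```

The resulting byte stream is the denser binary representation that the learned hyperprior entropy model would then compress; in the paper this stream feeds a neural-network-driven arithmetic coder rather than being stored raw.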
Instituto de Telecomunicações
EPFL
2025-09-17
Proceedings of SPIE; 13605
33
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| | | San Diego, United States | 2025-08-03 - 2025-08-08 |