conference paper

Learning an event sequence embedding for event-based deep stereo

Tulyakov, Stepan • Fleuret, Francois • Kiefel, Martin • Gehler, Peter • Hirsch, Michael
2019
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)
IEEE/CVF International Conference on Computer Vision (ICCV)

Today, the frame-based camera is the sensor of choice for machine vision applications. However, these cameras, originally developed for acquiring static images rather than for sensing dynamic, uncontrolled visual environments, suffer from high power consumption, high data rates, high latency, and low dynamic range. An event-based image sensor addresses these drawbacks by mimicking a biological retina: instead of measuring the intensity of every pixel at a fixed time interval, it reports events of significant pixel intensity change. Each event is represented by its position, the sign of the change, and a timestamp accurate to the microsecond. Asynchronous event sequences require special handling, since traditional algorithms work only with synchronous, spatially gridded data. To address this problem, we introduce a new module for event sequence embedding, for use in different applications. The module builds a representation of an event sequence by first aggregating information locally across time, using a novel fully-connected layer for an irregularly sampled continuous domain, and then across the discrete spatial domain. Based on this module, we design a deep learning-based stereo method for event-based cameras. The proposed method is the first learning-based stereo method for an event-based camera and the only method that produces dense results. We show large performance increases on the Multi Vehicle Stereo Event Camera Dataset (MVSEC), which has become the standard benchmark for event-based stereo methods.
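As a rough illustration of the two-stage embedding described in the abstract (aggregate events per pixel across continuous time, then across the discrete spatial grid), the sketch below implements a toy version in PyTorch. It is not the authors' code: the class names (ContinuousFC, EventEmbedding), the choice of a small MLP over (timestamp, polarity) as the "fully-connected layer for an irregularly sampled continuous domain", and the 3x3 convolution used for spatial aggregation are all assumptions made for illustration.

# Minimal, illustrative sketch (not the authors' implementation) of the
# two-stage idea: per-pixel temporal aggregation of irregular events,
# followed by aggregation across the discrete spatial domain.
import torch
import torch.nn as nn


class ContinuousFC(nn.Module):
    """Weights are a learned function of the continuous timestamp, so the
    layer can handle irregularly sampled inputs of varying length."""

    def __init__(self, out_features: int, hidden: int = 32):
        super().__init__()
        # Maps a scalar timestamp plus polarity to a weight vector.
        self.weight_net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, out_features)
        )

    def forward(self, t: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # t, p: (num_events,) timestamps in [0, 1] and polarities in {-1, +1}.
        w = self.weight_net(torch.stack([t, p], dim=-1))  # (num_events, out_features)
        # Sum over the irregular temporal axis -> one feature vector per pixel.
        return w.sum(dim=0)


class EventEmbedding(nn.Module):
    """Per-pixel temporal aggregation followed by spatial aggregation."""

    def __init__(self, n_channels: int = 16):
        super().__init__()
        self.temporal = ContinuousFC(n_channels)
        self.spatial = nn.Conv2d(n_channels, n_channels, kernel_size=3, padding=1)

    def forward(self, events: torch.Tensor, height: int, width: int) -> torch.Tensor:
        # events: (num_events, 4) rows of (x, y, timestamp, polarity).
        feat = torch.zeros(self.spatial.in_channels, height, width)
        for x, y in {(int(e[0]), int(e[1])) for e in events}:  # pixels with events
            mask = (events[:, 0] == x) & (events[:, 1] == y)
            feat[:, y, x] = self.temporal(events[mask, 2], events[mask, 3])
        # Aggregate across the discrete spatial domain; the result is a dense,
        # grid-shaped embedding that a conventional stereo network could consume.
        return self.spatial(feat.unsqueeze(0))

The sketch only shows the data flow from an irregular event list to a dense, grid-shaped embedding; the exact form of the temporal layer and how it is trained within the stereo pipeline are described in the paper itself.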

Type
conference paper
DOI
10.1109/ICCV.2019.00161
Author(s)
Tulyakov, Stepan
Fleuret, Francois
Kiefel, Martin
Gehler, Peter
Hirsch, Michael
Date Issued

2019

Publisher

IEEE

Published in
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)
ISBN of the book

978-1-7281-4803-8

Start page

1527

End page

1537

URL
http://openaccess.thecvf.com/content_ICCV_2019/html/Tulyakov_Learning_an_Event_Sequence_Embedding_for_Dense_Event-Based_Deep_Stereo_ICCV_2019_paper.html
Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
LIDIAP  
Event name
IEEE/CVF International Conference on Computer Vision (ICCV)
Event place
Seoul, South Korea
Event date
Oct 27 - Nov 02, 2019

Available on Infoscience
February 18, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/166359