A Neuro-Inspired Computational Model for a Visually Guided Robotic Lamprey Using Frame and Event Based Cameras
The computational load associated with computer vision is often prohibitive and limits the capacity for on-board image analysis in compact mobile robots. Replicating the feature detection and neural processing that animals excel at remains a challenge in most biomimetic aquatic robots. Event-driven sensors use a biologically inspired sensing strategy that eliminates the need for complete frame capture. Systems employing event-driven cameras benefit from reduced latency, power consumption, and bandwidth, as well as a large dynamic range. However, to the best of our knowledge, no work has evaluated the performance of these devices in underwater robotics. This work proposes a robotic lamprey design capable of supporting computer vision and uses this system to validate a computational neuron model for driving anguilliform swimming. The robot is equipped with two types of cameras, frame-based and event-based, which were used to stimulate the neural network, yielding goal-oriented swimming. Finally, a study is conducted comparing the performance of the computational model when driven by each type of camera. It was observed that event-based cameras improved the accuracy of swimming trajectories and significantly increased the rate at which visual inputs were processed by the network.
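The abstract does not give the network equations, so the sketch below is only a rough, hypothetical illustration of the pipeline it describes: a visual bearing estimate (from either an event stream or a frame) biasing a steering command into a segmental pattern generator. A generic phase-oscillator chain stands in for the paper's computational neuron model, and every function name and constant here (`target_offset_from_events`, `cpg_step`, the coupling and gain values) is an assumption, not taken from the paper.

```python
import numpy as np

def target_offset_from_events(events, width):
    """Estimate horizontal target offset in [-1, 1] from a batch of
    event-camera events, each given as (x, y, timestamp, polarity).
    Events cluster on moving, high-contrast targets, so their mean
    x-coordinate is a cheap proxy for the target's bearing.
    (Illustrative only; not the paper's actual visual pathway.)"""
    if len(events) == 0:
        return 0.0
    xs = np.array([e[0] for e in events], dtype=float)
    return 2.0 * xs.mean() / width - 1.0

def target_offset_from_frame(frame):
    """Same estimate from a grayscale frame: centroid of the
    brightest pixels, normalized to [-1, 1]."""
    ys, xs = np.nonzero(frame > 0.9 * frame.max())
    if xs.size == 0:
        return 0.0
    return 2.0 * xs.mean() / frame.shape[1] - 1.0

def cpg_step(phases, amps, offset, dt=0.01, freq=1.0,
             coupling=4.0, gain=0.5):
    """Advance a chain of phase oscillators (one per body segment),
    a stand-in for the lamprey CPG. The visual offset adds a
    left/right bias to the segment setpoints, steering the traveling
    wave and hence the swimming direction."""
    n = phases.size
    dphi = 2.0 * np.pi * freq * np.ones(n)
    # nearest-neighbor coupling maintains a head-to-tail phase lag,
    # producing the anguilliform traveling wave
    dphi[1:] += coupling * np.sin(phases[:-1] - phases[1:] - np.pi / 4)
    phases = phases + dt * dphi
    # steering: bias each segment's angle toward the visual target
    setpoints = amps * np.sin(phases) + gain * offset
    return phases, setpoints
```

In a loop, the offset from whichever sensor is active would feed `cpg_step`, and the returned setpoints would drive the joint actuators; with an event camera the offset can be refreshed per event batch rather than per frame, which is consistent with the higher visual-input processing rate the abstract reports.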