Introduction

For computer vision algorithms deployed in the real world, such as on a robot moving through its environment, high processing speed is essential for safe and efficient operation. As sequential processing has reached its limits, the world has moved to multi-core systems, with GPUs that run thousands of operations in parallel at the extreme end.
While computer vision problems are often naturally parallelizable, such as applying the same operation to every pixel, implementing this efficiently across different computing architectures poses a challenge.
Abstract

With the OAK-D Lite, an affordable stereo camera is available that can serve as a starting point for robotics projects. While the accompanying software stack DepthAI is powerful, the onboard resources of the camera are limited and the 3D reconstruction results are often not satisfying. On the other hand, powerful depth estimation algorithms such as deep stereo networks are readily available [1]. As the camera also ships with ROS packages and interfaces, nothing stops us from extending the camera's hardware with a more powerful GPU and deploying a method like the RAFT-Stereo network [2].
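To sketch how such an offboard setup could look, the following is a minimal ROS2 node that synchronizes the camera's left and right streams and runs a stereo matcher on each pair. The topic names are assumptions, and OpenCV's SGBM matcher serves only as a stand-in at the spot where a network like RAFT-Stereo would be plugged in; this is an illustration, not the actual pipeline.

```python
import cv2
import message_filters
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class StereoDepthNode(Node):
    def __init__(self):
        super().__init__('stereo_depth')
        self.bridge = CvBridge()
        # Stand-in matcher; this is where a GPU network such as
        # RAFT-Stereo would be invoked instead.
        self.matcher = cv2.StereoSGBM_create(numDisparities=64, blockSize=5)
        # Topic names are assumptions; check `ros2 topic list` for your driver.
        left = message_filters.Subscriber(self, Image, '/left/image_rect')
        right = message_filters.Subscriber(self, Image, '/right/image_rect')
        # Pair frames whose timestamps differ by at most 10 ms.
        self.sync = message_filters.ApproximateTimeSynchronizer(
            [left, right], queue_size=10, slop=0.01)
        self.sync.registerCallback(self.on_pair)
        self.pub = self.create_publisher(Image, '/disparity', 10)

    def on_pair(self, left_msg, right_msg):
        left = self.bridge.imgmsg_to_cv2(left_msg, 'mono8')
        right = self.bridge.imgmsg_to_cv2(right_msg, 'mono8')
        # SGBM returns fixed-point disparities scaled by 16.
        disp = self.matcher.compute(left, right).astype('float32') / 16.0
        out = self.bridge.cv2_to_imgmsg(disp, '32FC1')
        out.header = left_msg.header  # keep the source timestamp
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(StereoDepthNode())


if __name__ == '__main__':
    main()
```

The approximate time synchronizer is the key piece here: the left and right frames arrive as separate messages, so they have to be matched by timestamp before any stereo method can consume them.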
Debugging and improving such a pipeline, however, requires reproducible input data. This post describes a lightweight approach to implementing a controlled replayer in ROS2. All source code is available here.
The problem: Unsynchronized replay

Anyone who has ever seriously developed in ROS2 will know the issue: we would like to debug our pipeline or improve small parts of the algorithm, but every time we rerun the pipeline we get different results. If we are in an early phase of development and our pipeline is not yet fast enough to process all data in real time, we end up losing messages.
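To make the idea concrete before going into details, here is a minimal sketch of a controlled replayer built on rosbag2_py: it publishes one message from the bag at a time and blocks until the pipeline acknowledges that it has finished processing. The bag path, the `/processed` acknowledgement topic, and the sqlite3 storage format are assumptions for illustration; the linked repository may structure this differently.

```python
import threading

import rclpy
import rosbag2_py
from rclpy.node import Node
from rclpy.serialization import deserialize_message
from rosidl_runtime_py.utilities import get_message
from std_msgs.msg import Empty


class ControlledReplayer(Node):
    def __init__(self, bag_path):
        super().__init__('controlled_replayer')
        self.reader = rosbag2_py.SequentialReader()
        self.reader.open(
            rosbag2_py.StorageOptions(uri=bag_path, storage_id='sqlite3'),
            rosbag2_py.ConverterOptions(
                input_serialization_format='cdr',
                output_serialization_format='cdr'))
        # One publisher per topic in the bag, keyed by topic name.
        self.type_map = {t.name: t.type
                         for t in self.reader.get_all_topics_and_types()}
        self.pubs = {name: self.create_publisher(get_message(type_), name, 10)
                     for name, type_ in self.type_map.items()}
        # Hypothetical ack topic: the pipeline signals "done" on it.
        self.done = threading.Event()
        self.create_subscription(Empty, '/processed',
                                 lambda _msg: self.done.set(), 10)

    def replay(self):
        while self.reader.has_next() and rclpy.ok():
            topic, raw, _stamp = self.reader.read_next()
            msg = deserialize_message(raw, get_message(self.type_map[topic]))
            self.done.clear()
            self.pubs[topic].publish(msg)
            self.done.wait()  # block until the pipeline acknowledges


def main():
    rclpy.init()
    node = ControlledReplayer('my_bag')  # hypothetical bag directory
    # Spin in the background so the ack subscription is serviced
    # while replay() blocks waiting for acknowledgements.
    threading.Thread(target=rclpy.spin, args=(node,), daemon=True).start()
    node.replay()


if __name__ == '__main__':
    main()
```

Compared to `ros2 bag play`, which streams messages at recording rate regardless of whether anyone keeps up, this hands control of the pace to the consumer, so every run processes exactly the same sequence of messages. (Waiting for subscribers to connect before the first publish is omitted here for brevity.)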