COLLABORATIVE ROBOTICS - STANFORD UNIVERSITY - 2024 - TEAM 4
We successfully implemented the navigation and perception components on the robot and structured the demonstration as a six-state sequence:

State 1: The robot navigates to the target position where the blocks are located, using its navigation and perception capabilities.
State 2: Having reached the designated area, the robot tilts its camera downward to obtain a clear view of the blocks below.
State 3: The robot uses its perception pipeline to determine the position of the block that matches the target color.
State 4: The robot runs the manipulation algorithm to grasp the identified block. (In simulation we were unable to complete the grasp due to Gazebo issues.)
State 5: The robot transports the grasped block to the predetermined final position (-1, -1) using its navigation system.
State 6: The robot drops the block at the same relative position where it was originally found, then tilts the camera back to its original position, concluding the manipulation process.
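The control flow above amounts to a simple finite-state machine. The following is a minimal Python sketch of that flow under stated assumptions: the helper functions (navigate_to, tilt_camera, detect_block, grasp_block, release_block) and the block-area coordinates are hypothetical stand-ins for our ROS navigation, perception, and manipulation interfaces, not the actual implementation.

from enum import Enum, auto

# Hypothetical stubs standing in for the robot's ROS interfaces.
def navigate_to(x, y):
    print(f"navigating to ({x}, {y})")

def tilt_camera(down):
    print("tilting camera", "down" if down else "back up")

def detect_block(color):
    print(f"locating {color} block")
    return (0.5, 0.2)  # placeholder block position

def grasp_block(position):
    print(f"grasping block at {position}")

def release_block():
    print("releasing block")

class State(Enum):
    NAVIGATE = auto()   # State 1: drive to the block area
    TILT_DOWN = auto()  # State 2: point the camera at the blocks
    DETECT = auto()     # State 3: locate the target-color block
    GRASP = auto()      # State 4: pick up the block
    TRANSPORT = auto()  # State 5: carry it to the drop-off point
    DROP = auto()       # State 6: release block, reset the camera
    DONE = auto()

def run_demo(target_color, block_area=(1.0, 1.0), drop_off=(-1.0, -1.0)):
    # block_area is a placeholder; only the drop-off (-1, -1) is fixed.
    state = State.NAVIGATE
    block_pos = None
    while state != State.DONE:
        if state == State.NAVIGATE:
            navigate_to(*block_area)
            state = State.TILT_DOWN
        elif state == State.TILT_DOWN:
            tilt_camera(down=True)
            state = State.DETECT
        elif state == State.DETECT:
            block_pos = detect_block(target_color)
            state = State.GRASP
        elif state == State.GRASP:
            grasp_block(block_pos)
            state = State.TRANSPORT
        elif state == State.TRANSPORT:
            navigate_to(*drop_off)
            state = State.DROP
        elif state == State.DROP:
            release_block()
            tilt_camera(down=False)
            state = State.DONE

run_demo("red")  # "red" is an example target color

Structuring the demo this way keeps each capability (navigation, perception, manipulation) behind a single call site, so a failure such as the Gazebo grasping issue in State 4 is isolated to one transition.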
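For the perception step in State 3, a common approach to localizing a colored block from the downward-tilted camera is HSV thresholding on the camera image. The sketch below uses OpenCV and is an assumption about the general technique, not our exact perception node; the HSV bounds are placeholder values.

import cv2
import numpy as np

def find_block_center(bgr_image, lower_hsv, upper_hsv):
    # Threshold the image in HSV space and return the pixel centroid
    # of the largest matching region, or None if nothing matches.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Example bounds for a red-ish block (placeholder values).
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])

In the full pipeline, the pixel centroid would still need to be projected into the robot frame using the camera intrinsics and the known camera tilt; that step is omitted here.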
Here are some video demonstrations: