
Methods

The methods can be divided into three parts: perception (detecting the blocks), navigation (driving the base to the blocks), and manipulation (picking up blocks of specified colors). All of our code can be found on our GitHub.

Perception

In the perception part, we implement a ROS2 node, 'Matching_Pix_to_Ptcld', that performs two principal functions: cube detection and spatial localization of the detected cubes. Detection begins by applying HSV color thresholding to the camera image, which segments the blocks of each target color. The segmentation is refined with OpenCV's findContours() method, which delineates the contour of each detected cube and lets us compute each block's centroid. After detection, the node localizes the cubes in three-dimensional space by combining the detected centroids with depth information from the same camera, yielding each cube's 3D coordinates in the camera frame. The final step transforms these coordinates into the robot's base frame so that the navigation component can drive the robot to the precise location of each block.
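The sketch below illustrates the two steps described above, detection by HSV thresholding and back-projection with depth, under some assumptions: the HSV bounds and function names are illustrative placeholders rather than the exact values in our node, and the camera intrinsics (fx, fy, cx, cy) would come from the camera's CameraInfo topic.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for one block color; the thresholds in our node
# are tuned per color and are not reproduced here.
LOWER_HSV = np.array([0, 120, 70])
UPPER_HSV = np.array([10, 255, 255])

def find_block_centroids(bgr_image):
    """Segment blocks of one color by HSV thresholding and return
    their pixel centroids."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours
            centroids.append((int(m["m10"] / m["m00"]),
                              int(m["m01"] / m["m00"])))
    return centroids

def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel and its depth reading to 3D camera-frame
    coordinates using the standard pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```

The resulting camera-frame point would then be transformed into the base frame (e.g. with a TF2 lookup) before being handed to the navigation node.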

Navigation

In the navigation part, a ROS2 node named 'LocobotExample' orchestrates the full sequence, combining navigation and object interaction in a single state machine. The robot first drives to a designated starting point, using odometry feedback and proportional-integral (PI) control to accurately correct its position. It then switches to block-detection mode: the camera is tilted to locate blocks, and the node transitions through the 'TILTING_CAMERA' and 'WAITING_FOR_BLOCK_POSITION' states based on the detected block positions. Once a block is identified and the robot is properly aligned, the node proceeds through the 'GRASPING_BLOCK', 'MOVING_TO_FINAL_POSITION', and 'DROPPING_BLOCK' states to grasp the block and relocate it to a predetermined final position. These state transitions, delineated within the node, enable the Locobot to autonomously navigate, identify, grasp, and relocate objects in an organized manner. A minimal sketch of both mechanisms follows.
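The sketch below shows the two mechanisms in simplified form. The state names match those in the node, but the transition conditions, gains, thresholds, and helper arguments are illustrative assumptions, not the tuned values from our code.

```python
import math
from enum import Enum, auto

class State(Enum):
    MOVING_TO_START = auto()
    TILTING_CAMERA = auto()
    WAITING_FOR_BLOCK_POSITION = auto()
    GRASPING_BLOCK = auto()
    MOVING_TO_FINAL_POSITION = auto()
    DROPPING_BLOCK = auto()

class PIController:
    """PI law used to correct the base position from odometry feedback.
    Gains here are placeholders, not our tuned values."""
    def __init__(self, kp=1.0, ki=0.1):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def next_state(state, dist_to_goal, block_seen, holding_block):
    """Simplified transition logic; the real node also checks camera
    tilt completion and alignment with the block."""
    if state is State.MOVING_TO_START and dist_to_goal < 0.05:
        return State.TILTING_CAMERA
    if state is State.TILTING_CAMERA:
        return State.WAITING_FOR_BLOCK_POSITION
    if state is State.WAITING_FOR_BLOCK_POSITION and block_seen:
        return State.GRASPING_BLOCK
    if state is State.GRASPING_BLOCK and holding_block:
        return State.MOVING_TO_FINAL_POSITION
    if state is State.MOVING_TO_FINAL_POSITION and dist_to_goal < 0.05:
        return State.DROPPING_BLOCK
    return state
```

In the node itself, the PI output would be published as a velocity command (a geometry_msgs Twist on the base's command topic), and next_state would run in the odometry callback.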

Manipulation

In the manipulation part, we use the MoveIt framework in a ROS2 setting. The main focus is a node that controls the 'interbotix_arm' and 'interbotix_gripper'. The program first waits for a block's position, which arrives on a ROS topic from the perception node. Once received, it calculates the pre-grasp, grasp, and post-grasp poses, all aimed downward for effective grabbing. The operation starts with the gripper opening in preparation for the task. The arm then moves to the pre-grasp pose above the target, avoiding contact, descends to the grasp pose, and the gripper closes to secure the block. Finally, the arm moves to the post-grasp pose, lifting the block away. This sequence, managed by the 'pick' function, integrates planning and action: the MoveIt framework plans and executes each step, illustrating effective robotic handling and manipulation.
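As an illustration of how the three poses might be derived from the detected block position, here is a short sketch. The vertical offsets and the downward-facing quaternion are assumptions for illustration, and the exact orientation depends on the arm's tool-frame convention; our node's tuned values are not reproduced here.

```python
from geometry_msgs.msg import Pose

# Assumed clearances above the block (meters); the real values are tuned.
PRE_GRASP_OFFSET = 0.10
POST_GRASP_OFFSET = 0.15

def grasp_poses(block_x, block_y, block_z):
    """Build pre-grasp, grasp, and post-grasp poses over the block,
    each with the gripper aimed straight down."""
    def pose_at(z):
        p = Pose()
        p.position.x, p.position.y, p.position.z = block_x, block_y, z
        # Quaternion for a 180-degree rotation about x, a common way to
        # point the end effector straight down (assumption, see above).
        p.orientation.x, p.orientation.w = 1.0, 0.0
        return p

    pre = pose_at(block_z + PRE_GRASP_OFFSET)
    grasp = pose_at(block_z)
    post = pose_at(block_z + POST_GRASP_OFFSET)
    return pre, grasp, post
```

Each pose would then be handed to MoveIt as a planning target for the 'interbotix_arm' group, with the gripper commands interleaved between the arm motions as described above.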
