Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB
This example shows how to design and validate complex application workflows for a robotic manipulator, such as the KINOVA® Gen3 7-DoF Ultralightweight Robot, using the Gazebo® physics simulator, OpenCV, Robotics System Toolbox™, and ROS Toolbox™.
Overview
This example uses functionality offered by OpenCV to detect the location and orientation of an object in a video stream. This information is then passed to the motion planning algorithm over the ROS network. The motion planning algorithm uses functionality offered by Robotics System Toolbox to calculate the location of the object in the world frame and commands the simulated robot to move to and pick up the object.
In this example, the Gazebo physics simulator is used with a simulated robot and an object. All the necessary files required for the ROS setup are included in the virtual machine (VM) available for download.
Clear all output in the live script by right-clicking anywhere in the live script and then selecting ‘Clear All Output’, or by navigating to VIEW > Clear all Output.
Note: Execute the script section by section, as described in this example.
Required Products
Robotics System Toolbox
Robotics System Toolbox Support Package for KINOVA® Gen3 Manipulators
ROS Toolbox
This example builds on key concepts from the following related examples:
Get Started with Gazebo and a Simulated TurtleBot (ROS Toolbox)
Pick-and-Place Workflow in Gazebo Using ROS (Robotics System Toolbox)
Robot Simulation and Control in Gazebo
Start a ROS-based simulator for a KINOVA Gen3 robot and configure the MATLAB® connection with the robot simulator.
This example uses a virtual machine (VM) that is available for download. To download and install it, use the link in the Download Virtual Machine section of Get Started with Gazebo and a Simulated TurtleBot (ROS Toolbox). Complete only the first step of that example (Download and install the ROS Virtual Machine), ignore its remaining steps, and then follow the steps below.
After installing the ROS Virtual Machine, launch the Ubuntu® virtual machine desktop.
On the Ubuntu desktop, click the Gazebo Recycling World - Object Detection icon to start the Gazebo world built for this example.
Specify the IP address and port number of the ROS master in Gazebo so that MATLAB can communicate with the robot simulator. For this example, the ROS master in Gazebo uses the IP address 172.19.98.83 displayed on the desktop. Adjust the rosIP variable based on your VM. Start the ROS 1 network using rosinit.
rosIP = '172.19.98.83';   % IP address of ROS-enabled machine
rosinit(rosIP,11311);     % Initialize ROS connection
Initializing global node /matlab_global_node_65505 with NodeURI http://172.19.98.140:62441/
The simulated world consists of a Kinova Gen3 robot mounted on a table and a rectangular object. The robot also has a simulated camera sensor attached, which transmits a video stream over a ROS network.
To simulate and control the robot arm in Gazebo, the VM contains the ros_kortex ROS package, which is provided by KINOVA. This package uses ros_control to drive the joints to the desired joint positions.
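Before commanding the robot, you can optionally confirm from MATLAB that the ros_kortex controllers are running by listing the topics advertised under the robot's namespace. This is a minimal sketch, assuming the /my_gen3 namespace used by this example's VM.

% List the ROS topics under the robot's namespace to confirm that the
% ros_kortex controllers are up (assumes the /my_gen3 namespace).
topics = rostopic('list');
robotTopics = topics(contains(topics,'/my_gen3'))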
Object Detection ROS Node
First, unpause the Gazebo simulation engine. This will start the video stream from the simulated camera sensor mounted on the robot.
physicsClient = rossvcclient('gazebo/unpause_physics');
physicsResp = call(physicsClient,'Timeout',3);
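If you later need to pause the simulation again, for example while repositioning the object, the matching Gazebo service can be called the same way. A minimal sketch using the standard gazebo/pause_physics service:

% Pause the Gazebo physics engine (counterpart of gazebo/unpause_physics).
pauseClient = rossvcclient('gazebo/pause_physics');
pauseResp = call(pauseClient,'Timeout',3);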
The ROS node with the object detection logic is located at /home/user/catkin_ws/src/opencv_node in the virtual machine. The detect_object.cpp file contains the algorithm to subscribe to the video feed and use OpenCV to detect the rectangular object. The position and orientation of the detected object are then published to the existing ROS network on separate ROS topics.
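You can also inspect the raw camera stream directly in MATLAB. This optional sketch assumes the image is published on /my_gen3/camera/image_raw, a topic name inferred from the camera_info topic used later in this example; it may differ on your VM.

% Receive one frame from the simulated camera and display it in MATLAB.
% Assumption: the image is published on /my_gen3/camera/image_raw.
imgSub = rossubscriber('/my_gen3/camera/image_raw');
imgMsg = receive(imgSub,10);
img = readImage(imgMsg);
imshow(img);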
The overall algorithm for detecting an object's position and orientation follows a standard OpenCV edge- and contour-detection approach, with the modifications necessary to enable the interface with ROS.
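The node itself is implemented in C++ with OpenCV, but the same idea can be prototyped on a captured frame in MATLAB. The following is an illustrative sketch only, not the node's actual code; it assumes the Image Processing Toolbox and the img variable from the previous snippet.

% Illustrative MATLAB analogue of the detection pipeline (not the node's code):
% binarize the frame, find connected regions, and estimate the largest
% region's centroid and orientation.
bw = imbinarize(rgb2gray(img));
stats = regionprops(bw,'Centroid','Orientation','Area');
[~,idx] = max([stats.Area]);            % Keep the largest detected region
objectCenter = stats(idx).Centroid      % Pixel coordinates of the region center
objectAngle = stats(idx).Orientation    % Rotation w.r.t. the horizontal axis, in degrees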
Move the Robot to Initial Position
In the initial robot pose, the object is not entirely visible in the field of view of the camera sensor. The following commands move the simulated robot to a predefined position from which a large area of the table is captured.
%% Move to scan position
% Open the gripper
CommandActivateGripperROSGazebo('off');
Gripper open...
[trajPub,trajCmd] = rospublisher('/my_gen3/gen3_joint_trajectory_controller/command');
jointWaypoints = [0 0 180 266 0 272 90]*pi/180; % Scan position
jointWaypointTimes = 5;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes);
Moving to Position
Position Reached
Once the robot reaches the scan position, the detected object can be seen in the two image windows launched by the object detection node. The algorithm superimposes a red rectangle on the detected object so that you can verify that the correct object has been identified. You can move the slider in the Binary Image window and observe its effect. This slider controls the threshold value of the Canny edge detection algorithm.
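To build intuition for what the slider does, you can run the Canny detector on a freshly captured frame in MATLAB at different thresholds. A small sketch, assuming the imgSub subscriber from the earlier snippet and the Image Processing Toolbox:

% Compare Canny edge maps at two threshold values; a higher threshold
% suppresses weak edges, which is what the node's slider controls.
gray = rgb2gray(readImage(receive(imgSub,10)));
edgesLow  = edge(gray,'canny',0.1);
edgesHigh = edge(gray,'canny',0.4);
imshowpair(edgesLow,edgesHigh,'montage');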
Interact with Object Using Gazebo
To change the position of the object on the table, first select the object and then press t. This enables Translation Mode in Gazebo and displays an axes marker on the object. Interact with this marker to change the position of the object.
Similarly, to change the orientation of the object, press r. This enables Rotation Mode in Gazebo and displays an axes marker on the object. Interact with this marker to change the orientation of the object.
Use the Reset Model Poses option to reset the robot and object positions. Do not use the Reset World option, as it results in a loss of communication between the ROS master and MATLAB.
Calculate the Location of Object in World Frame
The object detection node provides the location of the object with reference to the captured image. The position coordinates and the rotation of the object with respect to the horizontal axis are published on three separate ROS topics. The following section of code uses the object position information, the properties of the simulated camera, and the robot's current pose to localize the object in the global world frame.
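The localization relies on the standard pinhole camera model: a pixel offset dp from the image center, observed at depth z by a camera with focal length f (in pixels), corresponds to a metric offset of (z/f)*dp. A minimal sketch of this relation, assuming square pixels and negligible lens distortion:

% Pinhole model: metric offset = (depth / focal length in pixels) * pixel offset.
% Assumes square pixels and negligible lens distortion.
pixelsToMeters = @(dpPixels,zDepth,focalLengthPx) (zDepth/focalLengthPx)*dpPixels;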
%% Load Rigid Body Tree Model of Kinova Gen3 with Gripper
load('exampleHelperKINOVAGen3GripperROSGazeboManip.mat');
gen3 = robot;

%% Retrieve Camera properties
jSub = rossubscriber('/my_gen3/camera/camera_info');
jMsg = receive(jSub,10);
imageWidth = single(jMsg.Width);
imageHeight = single(jMsg.Height);
focalLength = single(jMsg.K(1));

%% End Effector
endEffectorFrame = "gripper";

%% Read current Joint Angles
jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub,10);
CurrentRobotJConfig = wrapToPi(jMsg.Position(2:8)');

%% Get transformation to end effector
cameraTransf = getTransform(gen3,CurrentRobotJConfig,'EndEffector_Link');
cameraZ = cameraTransf(3,4);
zDistance = cameraZ - 0.05590 + 0.10372; % - width of the object + width of the stool
Read the object's position and orientation data.
%% Get center and rotation of the object
tmpsub = rossubscriber('/opencv/centerx');
tmp = receive(tmpsub,10);
result(1) = tmp.Data;

tmpsub = rossubscriber('/opencv/centery');
tmp = receive(tmpsub,10);
result(2) = tmp.Data;

tmpsub = rossubscriber('/opencv/angle');
tmp = receive(tmpsub,10);
result(3) = tmp.Data;
Calculate the absolute position of the object in the world frame.
centerPixel = [imageHeight/2, imageWidth/2];
centerBox = [imageHeight-result(1) imageWidth-result(2)];
centerBoxwrtCenterPixel = centerBox - centerPixel; % in pixels
worldCenterBoxwrtCenterPixel = (zDistance/focalLength)*centerBoxwrtCenterPixel; % in meters
actualCameraTransf = cameraTransf * trvec2tform([0, 0.041, 0.0]);
actualpartXY = actualCameraTransf(1:2,4)' + worldCenterBoxwrtCenterPixel;
part_centerPoint = [actualpartXY(1),actualpartXY(2),zDistance];
Calculate the Desired Grasping Pose
The following section of code calculates the desired grasping pose to reach the object. The joint configuration for this pose is then computed using inverse kinematics. The desired grasping pose also accounts for the rotation of the object, so that the gripper aligns with the object.
ik = inverseKinematics('RigidBodyTree',gen3);
ik.SolverParameters.AllowRandomRestart = 0;

% Calculate the final pose to grasp the object
GraspPose = trvec2tform(part_centerPoint + [-0.008 0 -zDistance-0.055])*eul2tform([deg2rad(result(3)) pi 0]);
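Before commanding the robot, you can optionally sanity-check the computed grasp pose by visualizing it against the rigid body tree model. A sketch using Robotics System Toolbox plotting functions:

% Optional: visualize the robot at its current configuration together with
% the computed grasp pose frame to verify that the pose is plausible.
show(gen3,CurrentRobotJConfig);
hold on
plotTransforms(tform2trvec(GraspPose),tform2quat(GraspPose),'FrameSize',0.1);
hold off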
Move to Position 1
The robot is first commanded to move to a point 15 cm directly above the object. During this motion, the gripper is also aligned with the object.
% Calculate the first position: GraspPose offset by -0.15 m along the Z axis
taskFinal = double(GraspPose*trvec2tform([0,0,-0.15]));
jointWaypoints1 = ik(endEffectorFrame,taskFinal,[1 1 1 1 1 1],CurrentRobotJConfig);
jointWaypointTimes = 5;

%% Go to Position 1
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);
Moving to Position
Position Reached
Move to Position 2
Now the robot is commanded to reach the grasping pose. At the end of this motion, the object is within the grasping range of the gripper.
% Read current Joint Angles
jSub = rossubscriber('/my_gen3/joint_states');
jMsg = receive(jSub,10);
CurrentRobotJConfig = wrapToPi(jMsg.Position(2:8)');

jointWaypoints2 = ik(endEffectorFrame,double(GraspPose),[1 1 1 1 1 1],CurrentRobotJConfig);
jointWaypointTimes = 3;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints2,jointWaypointTimes);
Moving to Position
Position Reached
pause(1);
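You can optionally verify that the grasp pose was reached by computing the forward kinematics at the measured joint configuration and comparing the result with GraspPose. A sketch using the same joint_states topic as above:

% Optional check: compare the end-effector position from forward kinematics
% at the measured joint angles against the commanded grasp pose.
jMsg = receive(rossubscriber('/my_gen3/joint_states'),10);
reachedConfig = wrapToPi(jMsg.Position(2:8)');
reachedPose = getTransform(gen3,reachedConfig,endEffectorFrame);
positionError = norm(reachedPose(1:3,4) - GraspPose(1:3,4)) % in meters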
Grab the Object Using Gripper
%% Close the gripper
CommandActivateGripperROSGazebo('on');
Gripper closed...
pause(3);
Move Back to Position 1
jointWaypointTimes = 5;
reachJointConfiguration(trajPub,trajCmd,jointWaypoints1,jointWaypointTimes);
Moving to Position
Position Reached
Shut Down the MATLAB Global Node and Clear the Workspace
rosshutdown;
Shutting down global node /matlab_global_node_65505 with NodeURI http://172.19.98.140:62441/
clear;
Helper Function to Control the Joint Angles of the Robot
The following function uses the ros_kortex ROS package from KINOVA to control the robot.
function reachJointConfiguration(trajPub,trajCmd,jointWaypoints,jointWaypointTimes)

trajCmd.JointNames = {'joint_1','joint_2','joint_3','joint_4', ...
    'joint_5','joint_6','joint_7'};
jointWaypoints = wrapToPi(jointWaypoints);

trajCmd.Points = rosmessage('trajectory_msgs/JointTrajectoryPoint');
trajCmd.Points.TimeFromStart = rosduration(jointWaypointTimes);
trajCmd.Points.Positions = jointWaypoints;
trajCmd.Points.Velocities = zeros(size(jointWaypoints));
trajCmd.Points.Accelerations = zeros(size(jointWaypoints));
trajCmd.Points.Effort = zeros(size(jointWaypoints));

send(trajPub,trajCmd);
pause(1);
disp('Moving to Position');

% Wait until all joint velocities drop below the threshold
jSub = rossubscriber('/my_gen3/joint_states');
err = 1;
while (err > 1e-2)
    jMsg = receive(jSub);
    err = max(abs(jMsg.Velocity(2:8)));
end
disp('Position Reached');
end
Helper Function to Activate the Gripper
The following function uses the ros_kortex ROS package from KINOVA to activate the gripper by sending a ROS action command.
function CommandActivateGripperROSGazebo(state)
if strcmp(state,'on') == 1
    % Activate gripper
    [gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
    gripperCommand = rosmessage('control_msgs/GripperCommand');
    gripperCommand.Position = 0.1;   % 0.1 fully closed, 0 fully open
    gripperCommand.MaxEffort = 500;
    gripGoal.Command = gripperCommand;
    pause(1);
    % Send command
    sendGoal(gripAct,gripGoal);
    disp('Gripper closed...');
else
    % Deactivate gripper
    [gripAct,gripGoal] = rosactionclient('/my_gen3/custom_gripper_controller/gripper_cmd');
    gripperCommand = rosmessage('control_msgs/GripperCommand');
    gripperCommand.Position = 0.0;   % 0.1 fully closed, 0 fully open
    gripperCommand.MaxEffort = 500;
    gripGoal.Command = gripperCommand;
    pause(1);
    % Send command
    sendGoal(gripAct,gripGoal);
    disp('Gripper open...');
end
pause(2);
end