
Object Detection and Motion Planning Application with Onboard Deployment

With Robotics System Toolbox™ Support Package for Manipulators, you can also eliminate the need for a continuous communication link between the robot and a generic host computer (with MATLAB®). This functionality is helpful for applications where the manipulator is mounted on a mobile platform or mobile robot. Additionally, generic host computers can be less efficient than custom compute boards for certain tasks such as perception, path planning, and artificial intelligence (AI).

Based on the control interface provided by the robot manufacturer, you can choose between a MATLAB only workflow and a MATLAB and ROS workflow for design, simulation, development, and verification.

Code generation and deployment require ROS Toolbox to be installed along with the Robotics System Toolbox Support Package for Manipulators.

Object Detection Using OpenCV

You can create the core logic for object detection using OpenCV. Once the algorithm achieves satisfactory performance, you can reuse this core logic in your preferred workflow, either MATLAB only or MATLAB and ROS.

If you prefer the MATLAB only workflow, you can use the Computer Vision Toolbox Interface for OpenCV in MATLAB to convert the OpenCV logic into a MATLAB MEX function, which you can then call for object detection directly from your MATLAB script.
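
For instance, the Computer Vision Toolbox Interface for OpenCV in MATLAB provides the mexOpenCV function to build such a MEX function. In this minimal sketch, detectObject.cpp is a hypothetical C++ source file that wraps your OpenCV logic, and the output arguments are assumptions about how that wrapper is written.

    % Build the MEX function from the OpenCV C++ source file (hypothetical name).
    mexOpenCV detectObject.cpp

    % Call the generated MEX function on an image of the workspace. The outputs
    % (object centroid in pixels and orientation in degrees) are assumed here
    % and depend on how the C++ wrapper is written.
    img = imread("workspaceScene.png");
    [centroid,orientation] = detectObject(img);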

If you prefer the MATLAB and ROS workflow, you can use the cv_bridge ROS package to wrap the OpenCV logic in a standalone ROS node. The node subscribes to the video feed from the camera sensor and publishes the position and orientation of the detected object. This enables you to focus on the motion planning and localization algorithms while subscribing directly to the ROS topics published by the object detection node.
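
For example, once such a node is running, you can read its output from MATLAB using ROS Toolbox. In this sketch, the master address, the topic name /object_detection/pose, and the geometry_msgs/Pose2D message type are assumptions about how the detection node is configured.

    % Connect to the ROS master where the object detection node is running
    % (placeholder address).
    rosinit("192.168.1.10")

    % Subscribe to the pose topic assumed to be published by the detection node.
    poseSub = rossubscriber("/object_detection/pose","geometry_msgs/Pose2D", ...
        "DataFormat","struct");

    % Wait for the latest detection result.
    poseMsg = receive(poseSub,10);
    objectPosition = [poseMsg.X poseMsg.Y];
    objectOrientation = poseMsg.Theta;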

In both workflows described in this section, the tight integration with OpenCV stands in for integration with any external functionality or tool. Many applications require combining functionality offered by MATLAB with external software packages, and these workflows demonstrate a design process for such use cases.

A typical algorithm that you can use to detect the position and orientation of a rectangular object is explained in Detect Position and Orientation of a Rectangular Object Using OpenCV.

MATLAB Only Workflow

In the MATLAB only workflow, you start the development using the functionality offered by Robotics System Toolbox. You can use a rigid body tree robot model to simulate the robot at a particular joint configuration, and then use motion planning features to create a trajectory to pick the object.

As mentioned in the earlier section, you can reuse the object detection algorithm developed in OpenCV by converting it to a MEX function using the Computer Vision Toolbox Interface for OpenCV in MATLAB. Finally, you can use the robot visualization features to observe and validate the robot motion. For more details on this workflow, see the example Simulate a Detect and Pick Algorithm Using OpenCV Interface and Rigid Body Tree Robot Model.
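
The following sketch outlines these simulation steps with a Gen3 rigid body tree model; the end-effector body name, the target pose, and the number of trajectory samples are assumptions chosen for illustration.

    % Load a rigid body tree model of the manipulator (Gen3 used for illustration).
    robot = loadrobot("kinovaGen3","DataFormat","row");

    % Solve inverse kinematics for an assumed pick pose above the detected object.
    ik = inverseKinematics("RigidBodyTree",robot);
    weights = [0.25 0.25 0.25 1 1 1];
    initialGuess = homeConfiguration(robot);
    targetPose = trvec2tform([0.4 0.1 0.2])*eul2tform([0 pi 0]);
    pickConfig = ik("EndEffector_Link",targetPose,weights,initialGuess);

    % Interpolate a joint-space trajectory from the home configuration to the
    % pick configuration, and visualize the motion to validate it.
    q = trapveltraj([initialGuess' pickConfig'],50);
    for i = 1:size(q,2)
        show(robot,q(:,i)',"PreservePlot",false);
        drawnow
    end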

After you achieve and validate the desired performance in simulation, you can leverage the control interfaces provided by robot manufacturers to control the robot. KINOVA Robotics provides MATLAB APIs to control its Gen3 robot. For more information on how to proceed from simulation to controlling the actual hardware, see Track Pre-Computed Trajectory of Kinova Gen3 Robot End-Effector Using Inverse Kinematics and KINOVA KORTEX MATLAB API.

MATLAB and ROS Workflow

In the MATLAB and ROS workflow, you start the development using the functionality offered by Robotics System Toolbox, ROS Toolbox, and the Gazebo physics simulator. You can create a simulated world in Gazebo that consists of the object and the robot, mount a simulated vision sensor on the robot, and publish the video feed from the camera sensor over a ROS network.
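
For example, you can inspect the simulated camera feed from MATLAB as shown in this sketch; the master address and the image topic name are assumptions that depend on your Gazebo setup.

    % Connect to the ROS master running alongside Gazebo (placeholder address).
    rosinit("192.168.203.128")

    % Subscribe to the simulated camera feed published from Gazebo (assumed topic).
    imgSub = rossubscriber("/camera/color/image_raw","sensor_msgs/Image", ...
        "DataFormat","struct");

    % Receive one image message, convert it to a MATLAB image, and display it.
    imgMsg = receive(imgSub,10);
    img = rosReadImage(imgMsg);
    imshow(img)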

As stated in the previous section, you can use the cv_bridge ROS package to wrap the OpenCV logic in a standalone ROS node. You develop the motion planning algorithm using Robotics System Toolbox features and use ROS Toolbox functionality to communicate with the ROS master. For more details, see the example Design and Validate Object Detection and Motion Planning Algorithms Using Gazebo, OpenCV, and MATLAB.
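
As an illustration, you can send a planned joint configuration to the simulated robot by publishing a trajectory_msgs/JointTrajectory message. The controller topic name, joint names, and joint angle values in this sketch are assumptions based on a typical ros_control joint trajectory controller setup.

    % Create a publisher for the joint trajectory controller running in Gazebo
    % (assumed topic name).
    [trajPub,trajMsg] = rospublisher("/gen3_joint_trajectory_controller/command", ...
        "trajectory_msgs/JointTrajectory","DataFormat","struct");

    % Fill in the joint names (assumed) and a target configuration computed by
    % your motion planning algorithm (example values shown here, in radians).
    trajMsg.JointNames = {'joint_1','joint_2','joint_3','joint_4', ...
        'joint_5','joint_6','joint_7'};
    targetConfig = [0 15 180 -130 0 55 90]*pi/180;

    point = rosmessage("trajectory_msgs/JointTrajectoryPoint","DataFormat","struct");
    point.Positions = targetConfig;
    point.TimeFromStart = rosduration(3,"DataFormat","struct");
    trajMsg.Points = point;

    % Send the command to the simulated robot.
    send(trajPub,trajMsg)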

After you achieve and validate the desired performance in simulation, you can leverage the ROS driver interfaces provided by robot manufacturers to control the robot. KINOVA Robotics provides ROS driver packages to control its Gen3 robot over ROS.

ROS Toolbox enables you to generate a standalone ROS node from a Simulink® model and deploy it directly to various boards. Read Current Joint Angle of KINOVA Gen3 and Publish to ROS Network provides a basic example of such a workflow. For more information on how to proceed from simulation to controlling the actual hardware and deploying the node, see the Detect and Pick Object Using KINOVA Gen3 Robot Arm with Joint Angle Control and Trajectory Control example.
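
After deployment, you can manage the generated node from MATLAB through a rosdevice connection, as in this sketch; the device address, credentials, and node name are placeholders for your own setup.

    % Connect to the target board where the generated ROS node is deployed
    % (placeholder address and credentials).
    gendevice = rosdevice("192.168.1.20","user","password");

    % List the ROS nodes available on the device.
    nodeList = gendevice.AvailableNodes

    % Run, check, and stop the node generated from the Simulink model
    % (assumed node name); a ROS master must already be running on the network.
    runNode(gendevice,"jointAnglePublisher")
    isNodeRunning(gendevice,"jointAnglePublisher")
    stopNode(gendevice,"jointAnglePublisher")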

Related Topics