NODE aruco_detection.cpp

Note

see also the second step of the marker detection, NODE aruco_decode.py

ArUco marker detection on 4 cameras and image display

This node implements two important functionalities: first, it displays what the cameras are seeing in a window, switching the video stream every 8 seconds; second, and most importantly, it runs ArUco marker detection on the streams coming from the robot's four cameras.

Authors

Francesco Ganci

Version

v1.0

Todo:

it could be a good idea to expose the image update time as a parameter on the parameter server

Todo:

the program should decide (via the parameter server) whether or not to visualize the echo from the cameras.

Todo:

the user could also decide how many windows to open; think, for instance, of a scenario in which the user wants to see two outputs at the same time.

Todo:

open a service allowing other nodes to read the entire set of IDs detected.

Todo:

for each detected marker, find a way to also save the first frames in which it was detected.

Todo:

enable support for the develop mode (missing so far)

Defines

NODE_NAME
SS(chr)
SSS(el)
UPDATE_TIME

the video stream changes every UPDATE_TIME seconds

MAX_SUB_RATE

the maximum rate for the ArUco detection

ID_CAMERA_ARM

arm camera identifier

TOPIC_CAMERA_ARM

arm camera source topic

ID_CAMERA_FRONT

front camera identifier

TOPIC_CAMERA_FRONT

front camera source topic

ID_CAMERA_LEFT

left camera identifier

TOPIC_CAMERA_LEFT

left camera source topic

ID_CAMERA_RIGHT

right camera identifier

TOPIC_CAMERA_RIGHT

right camera source topic

TOPIC_DETECTED_IDS

topic used for communicating the IDs when detected
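As a reference, the block below sketches how these macros could look; all values and topic names are illustrative assumptions (only the 8-second stream switch is documented above), and the string helpers SS and SSS are omitted because their definitions are not documented here.

// illustrative sketch only: every value and topic name is an assumption,
// except the 8-second stream switch documented above
#define NODE_NAME "aruco_detection"

#define UPDATE_TIME  8    // seconds between two switches of the displayed stream
#define MAX_SUB_RATE 10   // maximum detection rate per camera [Hz] (assumed)

#define ID_CAMERA_ARM      0
#define TOPIC_CAMERA_ARM   "/camera_arm/image_raw"
#define ID_CAMERA_FRONT    1
#define TOPIC_CAMERA_FRONT "/camera_front/image_raw"
#define ID_CAMERA_LEFT     2
#define TOPIC_CAMERA_LEFT  "/camera_left/image_raw"
#define ID_CAMERA_RIGHT    3
#define TOPIC_CAMERA_RIGHT "/camera_right/image_raw"

#define TOPIC_DETECTED_IDS "/aruco_detected_ids"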

Functions

void shut_msg(int sig)

shutdown message

int main(int argc, char **argv)

main function
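A minimal sketch of how these two functions typically fit together in a node of this kind; the custom SIGINT handler and the exact setup details are assumptions based on the descriptions in this page.

#include "ros/ros.h"
#include <signal.h>

// sketch: shutdown message handler and node entry point (details assumed)
void shut_msg( int sig )
{
    ROS_INFO_STREAM( "stopping " << NODE_NAME << " ..." );
    ros::shutdown( );
}

int main( int argc, char **argv )
{
    // register the custom handler instead of the default SIGINT one
    ros::init( argc, argv, NODE_NAME, ros::init_options::NoSigintHandler );
    signal( SIGINT, shut_msg );

    node_aruco_detection node;  // opens subscriptions, publisher and window
    node.spin( );               // working cycle of the node

    return 0;
}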

Variables

static const std::string OPENCV_WINDOW = "RoboCLuedo Cameras"

‘echo’ window title

class node_aruco_detection

node implementation (as a class)

Public Functions

inline node_aruco_detection()

class constructor

the constructor opens all the subscriptions to the cameras and instantiates the publisher that will transmit the new IDs once identified.

Having done that, the constructor opens the window.

Todo:

let the user decide whether to open the window and when to close it; it could be a good idea to create a service server for toggling the window.

Note

the class uses ImageTransport, which allows it to read the images in a compressed format.
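A plausible shape for this constructor, assuming the topic macros listed above and the members listed below; the publisher message type (std_msgs/Int32MultiArray) and the publisher queue size are assumptions, while the per-camera queue size of 1 is the one documented in the arm-camera callback.

node_aruco_detection( ) : it( nh ), max_sub_rate( MAX_SUB_RATE )
{
    // one image_transport subscription per camera, queue size 1
    // (frames are dropped if the cameras publish too fast)
    image_sub_arm   = it.subscribe( TOPIC_CAMERA_ARM,   1, &node_aruco_detection::cbk_cam_arm,   this );
    image_sub_front = it.subscribe( TOPIC_CAMERA_FRONT, 1, &node_aruco_detection::cbk_cam_front, this );
    image_sub_left  = it.subscribe( TOPIC_CAMERA_LEFT,  1, &node_aruco_detection::cbk_cam_left,  this );
    image_sub_right = it.subscribe( TOPIC_CAMERA_RIGHT, 1, &node_aruco_detection::cbk_cam_right, this );

    // publisher for the newly detected IDs (message type assumed)
    pub_marker_ids = nh.advertise<std_msgs::Int32MultiArray>( TOPIC_DETECTED_IDS, 10 );

    // finally, open the output window
    cv::namedWindow( OPENCV_WINDOW );
}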

inline ~node_aruco_detection()

class destructor

Todo:

see the constructor: it could be a good idea to give the “user” control over the window.

inline void spin()

working cycle of the node

This working cycle runs concurrently with the reception of the camera signals and with the detection. Its main purpose is to detect when the set of IDs has grown (i.e. when there are new IDs to send) and, in that case, to publish them.

This cycle also switches the video stream every UPDATE_TIME seconds.

Attention

It is very important to point out that the node sends only the new IDs, without repetitions. The node may detect the same marker many times, but only one message is sent, corresponding to the first successful detection. This is another trick to avoid wasting time: it is important to keep this node as efficient as possible.
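A sketch of the working cycle consistent with the description above; the loop frequency, the message type and the spinning mechanism (the real node may rely on an AsyncSpinner for concurrent callbacks) are assumptions.

void spin( )
{
    ros::WallTime next_switch = ros::WallTime::now( ) + ros::WallDuration( UPDATE_TIME );
    ros::Rate r( 10 );  // loop frequency assumed

    while( ros::ok( ) )
    {
        // publish only the IDs queued since the last cycle, without repetitions
        if( last_id_published < (int)ids_to_publish.size( ) - 1 )
        {
            std_msgs::Int32MultiArray msg;  // message type assumed
            for( int i = last_id_published + 1; i < (int)ids_to_publish.size( ); ++i )
                msg.data.push_back( ids_to_publish[i] );
            pub_marker_ids.publish( msg );
            last_id_published = (int)ids_to_publish.size( ) - 1;
        }

        // switch the displayed camera every UPDATE_TIME seconds
        if( t0_greater_than_t1( ros::WallTime::now( ), next_switch ) )
        {
            current_camera_id = ( current_camera_id + 1 ) % 4;
            next_switch = ros::WallTime::now( ) + ros::WallDuration( UPDATE_TIME );
        }

        ros::spinOnce( );
        r.sleep( );
    }
}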

inline void cbk_cam_arm(const sensor_msgs::ImageConstPtr &msg)

subscription to the camera topic

The node has four subscription callbacks, one for each camera image topic, all with roughly the same structure: read the image, try to perform the detection, try to send the output to the window, and finally wait a bit.

In particular, the final wait ensures that the update rate of the camera does not become too high. See MAX_SUB_RATE, which is the maximum frequency at which the subscriber reads the images. See also the class constructor, which gives queue size 1 to all the topics: this causes images to be dropped when the rate from the cameras is too high.

There are other workarounds to overcome the performance issue; see the detection method. The point is that the ArUco detection process is quite heavy to carry out: if the detection ran on every frame at too high a frequency, the application would perform poorly.

Parameters

msg – the message from the camera
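A sketch of the callback structure described above (the same for all four cameras); the use of cv_bridge and of the BGR8 encoding are assumptions.

void cbk_cam_arm( const sensor_msgs::ImageConstPtr& msg )
{
    // read the image, i.e. convert the ROS message into an OpenCV matrix
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy( msg, sensor_msgs::image_encodings::BGR8 );
    }
    catch( cv_bridge::Exception& e )
    {
        ROS_ERROR( "cv_bridge exception: %s", e.what( ) );
        return;
    }

    // try to perform the detection (the frame may be dropped, see aruco_detect)
    aruco_detect( ID_CAMERA_ARM, cv_ptr->image );

    // try to send the output to the window (ignored if another camera is shown)
    show_camera( ID_CAMERA_ARM, cv_ptr->image );

    // finally wait a bit, keeping the effective rate below MAX_SUB_RATE
    max_sub_rate.sleep( );
}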

inline void cbk_cam_front(const sensor_msgs::ImageConstPtr &msg)

sub front camera callback

See also

cbk_cam_arm

Parameters

msg – the message from the camera

inline void cbk_cam_left(const sensor_msgs::ImageConstPtr &msg)

sub left camera callback

See also

cbk_cam_arm

Parameters

msg – the message from the camera

inline void cbk_cam_right(const sensor_msgs::ImageConstPtr &msg)

sub right camera callback

See also

cbk_cam_arm

Parameters

msg – the message from the camera

inline void aruco_detect(int camera_id, cv::Mat &image)

perform the ArUco detection

This function uses the ArUco detector to find new markers in one camera frame.

Since this is a very time-consuming operation, a number of workarounds have been put in place so that the function is not called at a prohibitive frequency.

First of all, only one thread at a time can run the function: a semaphore is acquired when the function is called and released after the detection has completed.

Second, the ‘wait’ on the semaphore is not a blocking lock but a try_lock: if the thread cannot acquire it, the function returns immediately and the frame is dropped. The main assumption, satisfied in most cases, is that the frame rate of the cameras is high enough. Moreover, in this scenario the robot does not move at high velocity, so the approach works well.

Note

the two assumptions are: the frequency at which the robot acquires new images is high enough, and the robot does not move too fast.

Parameters
  • camera_id – the ID of the camera requiring the detection

  • image – the image read from the camera stream (one frame)
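A sketch of the detection step, consistent with the try_lock strategy described above; the exact aruco::MarkerDetector::detect arguments (the marker size in particular) are assumptions.

void aruco_detect( int camera_id, cv::Mat& image )
{
    // only one thread at a time may run the detector:
    // if the semaphore is busy, the frame is simply dropped
    if( !mtx_detect.try_lock( ) )
        return;

    // run the ArUco detector on the frame
    std::vector<aruco::Marker> markers;
    mDetector.detect( image, markers, camParam, -1.0f, false );

    // keep only IDs never seen before, and queue them for publication
    for( const aruco::Marker& mk : markers )
        if( ids.insert( mk.id ).second )
            ids_to_publish.push_back( mk.id );

    mtx_detect.unlock( );
}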

inline void show_camera(int camera_id, cv::Mat &image)

update the image in the window

The function shows a new frame in the window. It also overlays a label indicating which camera the user is currently looking at.

Note

if the camera_id doesn’t coincide with the one set by the working cycle, the function returns immediately without updating the window.

Parameters
  • camera_id – the ID of the camera providing the frame

  • image – the image read from the camera stream (one frame)
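A sketch of the window update, assuming the label is drawn with cv::putText; whether the output semaphore mtx_show is also taken with a try_lock is an assumption.

void show_camera( int camera_id, cv::Mat& image )
{
    // only the camera currently selected by the working cycle may draw
    if( camera_id != current_camera_id )
        return;

    // do not block the callback if another thread is drawing (assumption)
    if( !mtx_show.try_lock( ) )
        return;

    // overlay a label indicating which camera is currently displayed
    std::string label = "camera: " + std::to_string( camera_id );
    cv::putText( image, label, cv::Point( 10, 30 ),
        cv::FONT_HERSHEY_SIMPLEX, 1.0, cv::Scalar( 0, 255, 0 ), 2 );

    cv::imshow( OPENCV_WINDOW, image );
    cv::waitKey( 1 );

    mtx_show.unlock( );
}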

Private Functions

inline std::string set2string(std::set<int> &st)

convert a set into a string

inline std::vector<int> set2vector(std::set<int> &st)

convert a set into a dynamic array

inline bool t0_greater_than_t1(ros::WallTime t0, ros::WallTime t1)

comparison between two time instants: t0 >= t1?
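One straightforward way these helpers could be written (illustrative sketch):

// convert a set of IDs into a readable string (e.g. for logging)
std::string set2string( std::set<int>& st )
{
    std::string s = "";
    for( int id : st )
        s += std::to_string( id ) + " ";
    return s;
}

// copy a set into a dynamic array
std::vector<int> set2vector( std::set<int>& st )
{
    return std::vector<int>( st.begin( ), st.end( ) );
}

// comparison between two time instants: t0 >= t1?
bool t0_greater_than_t1( ros::WallTime t0, ros::WallTime t1 )
{
    return ( t0 - t1 ).toSec( ) >= 0.0;
}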

Private Members

ros::NodeHandle nh

ROS node handle.

std::mutex mtx_detect

control semaphore for detection

std::mutex mtx_show

control semaphore for image output

aruco::CameraParameters camParam

ArUco camera parameters

image_transport::ImageTransport it

image transport object

image_transport::Subscriber image_sub_arm

input img subscriber : arm

image_transport::Subscriber image_sub_front

input img subscriber : front

image_transport::Subscriber image_sub_left

input img subscriber : left

image_transport::Subscriber image_sub_right

input img subscriber : right

aruco::MarkerDetector mDetector

ArUco detector

std::set<int> ids

the set of the found IDs

std::vector<int> ids_to_publish

the list of IDs to publish

int last_id_published = -1

the last index sent in the vector

bool useCamInfo = true

use the camera parameters

int current_camera_id = 0

current camera showing the image on the window

ros::Publisher pub_marker_ids

marker ids publisher

ros::Rate max_sub_rate

max rate of the subscribers