Robot Mapping using Laser and Kinect-like Sensor

  61,820 views

matlabbe

9 years ago

Comparison between real and virtual third-person views of a robot mapping an environment using RTAB-Map. Five books are also detected using Find-Object during the experiment.
The code is publicly available at introlab.github.io/rtabmap.
For object recognition, see introlab.github.io/find-object/ .

COMMENTS: 47
@eliaskouakou7051 2 years ago
To think this was made a couple of years ago. Big up to you.
@user-ps1ug9fh3w 7 years ago
Nice job! Could you tell me how you do the navigation?
@antonisvenianakis1047 2 years ago
Very interesting, thank you!
@isaiabinadabrochasegura5972 7 years ago
Hi, very good job. I have a question about how you powered the Kinect: did you use a battery, or some kind of current inverter?
@matlabbe 7 years ago
In this demo it is an Xtion Pro Live, which is powered by USB for convenience. For a Kinect v1, we could cut the wire and plug it into a 12V DC output directly on the robot's board.
@user-gs8qm9rw6r 5 years ago
Nice work! I love this video.
@kiefac 7 years ago
It seemed to throw away a lot of points after they were out of the FOV of the camera. Is that to prevent distance inaccuracies, or to keep the performance up, or something?
@matlabbe 7 years ago
We keep the map downsampled for rendering performance. Maybe with newer GPUs we could keep the map denser while keeping the visualization smooth. We can see at the end of the video that the rendering frame rate is already lower than at the beginning.
@kiefac 7 years ago
matlabbe ah alright.
@mattizzle81 4 years ago
Point clouds are very memory intensive! I am doing a similar type of point cloud mapping on Android, using ARCore. One of the first things I noticed is how hard it is to keep that many points. If I compute points for an entire camera frame and try to keep them, the device runs out of memory after about 30 seconds. Luckily all I really need is a bird's-eye view, so I project the points to a 2D image and that works fine.
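The bird's-eye projection described in the comment above can be sketched in a few lines. This is a hypothetical illustration (not ARCore or RTAB-Map code): 3D points are binned into a fixed-size 2D grid storing the maximum height per cell, so memory stays constant no matter how many points are accumulated. The function name and parameters are made up for the example.

```python
import numpy as np

def birdseye_project(points, resolution=0.05, size=200):
    """Project 3D points (x, y, z) onto a 2D height image.

    Each cell keeps the max height seen, so memory stays fixed
    regardless of how many points are accumulated.
    """
    img = np.full((size, size), -np.inf)
    # Map world x/y coordinates to grid indices centered on the image.
    ix = (points[:, 0] / resolution + size / 2).astype(int)
    iy = (points[:, 1] / resolution + size / 2).astype(int)
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    for x, y, z in zip(ix[keep], iy[keep], points[keep][:, 2]):
        img[y, x] = max(img[y, x], z)
    return img

pts = np.array([[0.0, 0.0, 1.2], [0.0, 0.0, 0.5], [1.0, -1.0, 0.3]])
grid = birdseye_project(pts)
print(grid[100, 100])  # tallest point in the origin cell → 1.2
```

Two points fall in the same cell here, yet only one value is stored, which is exactly why this projection caps memory use.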
@suzangray6483 6 years ago
Hi, what is your robot acting on? Is it moving according to the laser data or the camera data, or are you driving it with a remote control? I also have a laser scanner, and I can get a 3D image of the environment and the distance to the nearest object, but I want to connect this to a tool like yours. I would be very happy if you could help me with how to do it. Thank you
@matlabbe 6 years ago
The robot is tele-operated using a gamepad. The best way to feed your data to rtabmap is to use ROS and publish the right topics. See this example for more info: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_Odometry_.2B-_2D_laser
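One of the topics rtabmap expects is odometry. For a wheeled robot like the one in the video, the encoder-to-pose step behind a nav_msgs/Odometry message could look like the sketch below. This is an illustrative differential-drive integration, not code from RTAB-Map; the function name and wheel-base value are assumptions.

```python
import math

def diff_drive_update(x, y, theta, d_left, d_right, wheel_base):
    """Integrate wheel encoder displacements (meters) into a 2D pose,
    the kind of odometry published on an /odom topic for rtabmap."""
    d = (d_left + d_right) / 2.0              # distance traveled by robot center
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    # Use the mid-arc heading for a slightly better straight-line estimate.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta) % (2 * math.pi)
    return x, y, theta

# Both wheels advance 1 m: the robot drives straight forward.
pose = (0.0, 0.0, 0.0)
pose = diff_drive_update(*pose, 1.0, 1.0, 0.5)
print(pose)  # → (1.0, 0.0, 0.0)
```

In a real ROS node this pose would be stamped and published, along with the laser scan and RGB-D topics, as described in the linked tutorial.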
@suzangray6483 6 years ago
I understood. I have a 3DM-GX1 IMU. I want to get odometry messages with the IMU instead of encoders, but as far as I know, linear velocity along the x and y axes cannot be obtained from an IMU. How can I measure odometry values with an IMU?
@matlabbe 6 years ago
It is possible (by integrating the acceleration twice), but it will have a lot of drift. You said you have a laser scanner: you can use it to get the x,y parameters, or estimate odometry with the laser scanner alone (like hector_mapping). There is another setup here: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_2D_laser
@timurtt1 9 years ago
Excellent demo! Could you please clarify: how do you handle new scene points discovered by the robot? Do you add all of them into your scene or do you perform some sort of smart merging? What is the "add point to the scene" rate you have?
@matlabbe 9 years ago
Anton Myagotin The map's graph is filtered to keep around 1 node/meter, then a point cloud is created from each filtered node. In the visualization above, some clouds are effectively superimposed.
@fighterknigh 8 years ago
Great job! By the way, how do you find the objects? Do you build the object point cloud models before searching?
@matlabbe 8 years ago
+余煒森 Objects are found using RGB images. Visual features (SURF) are extracted from images of the books, then compared to the live RGB stream to find the same visual features.
@magokeanu 7 years ago
Amazing, dude!
@VicConner 8 years ago
Amazing!
@kapilyadav23 9 years ago
Hi, can you tell me what processor or dev board you are using to process the Kinect and laser data? By the way, nice video; I'm impressed with your results. :)
@mathieulabbe4889 9 years ago
kapil yadav It is a mini-ITX with an i7 + SSD (no GPU), running Ubuntu 12.04 + ROS Hydro.
@dubmona1301 7 years ago
Way of the future. Awesome.
@ripleylee5726 4 years ago
Hi, may I know if you have done any project with the Intel RealSense D435i camera before?
@matlabbe 4 years ago
The D435i is integrated in the RTAB-Map standalone application (for hand-held mapping). It can also be used like the Kinect above with the rtabmap_ros package. Note that any RGB-D or stereo camera can be used with rtabmap_ros right now, as long as it complies with the standard ROS interface for images.
@barthurs99 6 years ago
Oh man, this is perfect for what I'm doing. I'm making a robot with room mapping that will act like a security guard, using lidar mapping, camera object recognition, object following and facial recognition. You use some of that, right?
@matlabbe 6 years ago
In this demo, SLAM is done with the lidar and RGB-D camera using the "rtabmap_ros" package, and the books are detected using the "find_object_2d" package. There is no face detection or object following here. Cheers!
@ayarzuki 2 years ago
@@matlabbe What if we combine it with object detection using the camera?
@sylvesterfowzan5417 5 years ago
I'm working on a humanoid robot; we currently need to perform navigation and perception. Could you advise us on what hardware to use and how to do this with ROS?
@matlabbe 5 years ago
The current documentation is on ros.org: wiki.ros.org/rtabmap_ros/Tutorials. When you have specific questions, you can ask them on ROS Answers (answers.ros.org/questions/) or on RTAB-Map's forum (official-rtab-map-forum.67519.x6.nabble.com/)
@DerekDickerson 9 years ago
matlabbe So you must have a laser scanner as well, outside of the Kinect?
@matlabbe 9 years ago
It is not required, but it increases the precision of the mapping. In this video, only the Kinect is used: ukposts.info/have/v-deo/l6GagHeOp21y0oU.html
@masahirokobayashi911 6 years ago
What kind of sensors do you use?
@matlabbe 6 years ago
In this demo: URG-04LX, Xtion Pro Live and wheel odometry from the robot.
@amirparvizi3997 7 years ago
How do I download this video?
@Uditsinghparihar 5 years ago
Go to en.savefrom.net/1-how-to-download-youtube-video/ then paste the URL of any YouTube video (in your case, this video's URL) in the box.
@xninjas3138 3 years ago
Does it use an Arduino?
@matlabbe 3 years ago
A (2009, if I remember) Intel NUC is on the robot; the drive motors use custom boards.
@AndreaGulberti 8 years ago
OMG