Getting Minoru running on ROS

My first attempt to get the Minoru camera running was based on these instructions: http://code.google.com/p/sentience/wiki/MinoruWebcam. However, the v4l2stereo program that they use for calibration consistently hard-crashed my computer with a kernel panic, so I had to abandon that approach.

Instead, I used some ROS packages for calibration based on instructions found here.

Setup and Calibration

  1. Plug in Minoru webcam
  2. Run > ls /dev/video* . You should see at least two devices, one for each camera on the Minoru. I saw three devices, one of which was my onboard webcam. Figure out which two devices correspond to the Minoru (the sketch after this list shows one way to do this).
  3. Install the umd-ros-pkg library from http://www.ros.org/wiki/umd-ros-pkg. To do so, add the provided repository locations (Step 2 on that page) to your .rosinstall file and run > rosinstall ~/ros ; this will download the necessary code.
  4. Run > rosdep install camera_umd to install system dependencies for camera_umd (part of umd-ros-pkg).
  5. Run > rosmake camera_umd to compile camera_umd
  6. I had to edit the launch file ~/ros/camera_umd/uvc_camera/launch/stereo_node.launch to use the correct devices for the cameras. For example, I had to change the line <param name="left/device" type="string" value="/dev/video0" /> to <param name="left/device" type="string" value="/dev/video2" /> because the Minoru used /dev/video1 and /dev/video2 .
  7. Run > roslaunch uvc_camera stereo_node.launch . This gets the cameras up and running on a ROS node.
  8. In another terminal run > rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.024 right:=/right/image_raw left:=/left/image_raw left_camera:=/left right_camera:=/right . This should bring up a calibration screen. --size 9x6 gives the number of interior corners in the checkerboard pattern, and --square 0.024 gives the size of the squares in meters. Hold up the calibration checkerboard (found here); you need to hold it fairly close and steady for the program to pick up the calibration board (it will draw a pattern on the screen showing that the calibration pattern is recognized). The 'calibrate' button should become active once the program has collected enough views of the pattern; moving the board around so it fills most of the image helps. When the button is active, click 'calibrate'; the 'save' and 'commit' buttons should now be active. Click 'save' to save the calibration parameters to /tmp/calibrationdata.tar.gz , then click 'commit' to exit. Unpacking /tmp/calibrationdata.tar.gz will yield ost.txt, which holds the calibration data; it needs to be massaged into ~/ros/camera_umd/uvc_camera/example-left.yaml and ~/ros/camera_umd/uvc_camera/example-right.yaml in the format of the sample calibration file included at the end of this page. I did this by hand, but there may be a better way (a rough conversion sketch follows the sample file).
  9. Once example-left.yaml and example-right.yaml have been created, shut down any currently running ROS nodes, then run > roslaunch uvc_camera stereo_node.launch and make sure that the .yaml config files are loaded correctly; there should be no warnings about the calibration files.
  10. Run > rosdep install image_pipeline to install dependencies for image_pipeline and stereo_image_proc
  11. Run > rosmake image_pipeline to compile image_pipeline
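To figure out which /dev/video* entries belong to the Minoru (step 2), you can read the device names that the kernel exposes under /sys/class/video4linux. The short Python sketch below is just one way to do this and assumes a standard Linux sysfs layout; running > v4l2-ctl --list-devices (from the v4l-utils package) should give similar information.

# Rough sketch, assuming the standard Linux sysfs layout: print the name of
# each video device so you can tell which /dev/video* entries are the Minoru.
import glob
import os

for dev in sorted(glob.glob("/sys/class/video4linux/video*")):
    with open(os.path.join(dev, "name")) as f:
        name = f.read().strip()
    print("/dev/%s : %s" % (os.path.basename(dev), name))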

Running Minoru on ROS

  1. Make sure there are no ROS nodes running
  2. Start the ROS core: > roscore
  3. In another terminal start the uvc_camera node, which reads from the cameras on the Minoru; both cameras should light up. > roslaunch uvc_camera stereo_node.launch
  4. In another terminal start the stereo_image_proc node, which produces the disparity map, by running > ROS_NAMESPACE=/ rosrun stereo_image_proc stereo_image_proc __name:=proc (a small Python listener sketch for its disparity output follows this list).
  5. In another terminal start the image_view node, which pops up three windows (left image, right image, and disparity), by running > rosrun image_view stereo_view stereo:=/ image:=image_rect_color _approximate_sync:=True _queue_size:=10 . If all you get is three grey windows, check all terminals running ROS nodes for errors.
  6. More info on the ROS stereo camera nodes is available on the ROS wiki (see the stereo_image_proc and camera_calibration pages).
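As a quick sanity check of the pipeline, you can also subscribe to the disparity output directly. The sketch below is a minimal rospy listener, not part of the setup above; it assumes stereo_image_proc is publishing a stereo_msgs/DisparityImage on /disparity (which it should be with ROS_NAMESPACE=/ as in step 4) and just logs the disparity range it reports.

# Minimal sketch: log the disparity range reported by stereo_image_proc.
# Assumes the disparity topic is /disparity (ROS_NAMESPACE=/ as in step 4).
import rospy
from stereo_msgs.msg import DisparityImage

def on_disparity(msg):
    rospy.loginfo("disparity %dx%d, range [%.2f, %.2f], f=%.1f px, baseline=%.3f m",
                  msg.image.width, msg.image.height,
                  msg.min_disparity, msg.max_disparity, msg.f, msg.T)

if __name__ == "__main__":
    rospy.init_node("disparity_listener")
    rospy.Subscriber("/disparity", DisparityImage, on_disparity)
    rospy.spin()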

Example calibration file

The calibration program from step 8 will give you the data for the camera_matrix, distortion_coefficients, rectification_matrix, and projection_matrix, but it comes in a format (ost.txt) that the uvc_camera node can't read. The data needs to be massaged into the format below. I did it by hand, but there may be programs out there that do it for you; a rough conversion sketch follows the example.

---
image_width: 320
image_height: 240
camera_name: left
camera_matrix:
  rows: 3
  cols: 3
  data: [443.081962, 0.000000, 159.500000, 0.000000, 437.370668, 119.500000, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.000000, 0.000000, 0.000000, 0.000000, 0.0000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.793499, 0.068620, 0.604691, -0.047583, 0.997576, -0.050764, -0.606709, 0.011508, 0.794841]
projection_matrix:
  rows: 3
  cols: 4
  data: [775.090604, 0.000000, -219.815799, 0.000000, 0.000000, 775.090604, 129.131834, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]
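If you'd rather not do the massaging by hand, the Python sketch below writes a file in the format above from numbers copied out of ost.txt. It's only a starting point: the exact layout of ost.txt varies between versions, so the values are pasted in manually rather than parsed, and the example call simply reuses the left-camera numbers from the file above. The filenames example-left.yaml and example-right.yaml are the ones expected by stereo_node.launch, per step 8.

# Rough sketch: write a camera_info YAML (in the format shown above) from
# calibration numbers copied out of ost.txt by hand.

def write_camera_info(path, name, width, height, K, D, R, P):
    # K: 9 camera-matrix values, D: 5 distortion values,
    # R: 9 rectification values, P: 12 projection values, all row-major.
    def fmt(vals):
        return "[" + ", ".join("%f" % v for v in vals) + "]"
    with open(path, "w") as f:
        f.write("image_width: %d\n" % width)
        f.write("image_height: %d\n" % height)
        f.write("camera_name: %s\n" % name)
        f.write("camera_matrix:\n  rows: 3\n  cols: 3\n  data: %s\n" % fmt(K))
        f.write("distortion_model: plumb_bob\n")
        f.write("distortion_coefficients:\n  rows: 1\n  cols: 5\n  data: %s\n" % fmt(D))
        f.write("rectification_matrix:\n  rows: 3\n  cols: 3\n  data: %s\n" % fmt(R))
        f.write("projection_matrix:\n  rows: 3\n  cols: 4\n  data: %s\n" % fmt(P))

# Example call using the left-camera values from the file above.
write_camera_info("example-left.yaml", "left", 320, 240,
                  K=[443.081962, 0.0, 159.5, 0.0, 437.370668, 119.5, 0.0, 0.0, 1.0],
                  D=[0.0, 0.0, 0.0, 0.0, 0.0],
                  R=[0.793499, 0.068620, 0.604691, -0.047583, 0.997576, -0.050764,
                     -0.606709, 0.011508, 0.794841],
                  P=[775.090604, 0.0, -219.815799, 0.0, 0.0, 775.090604, 129.131834,
                     0.0, 0.0, 0.0, 1.0, 0.0])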