Wednesday, November 16, 2022

A tutorial on Motive / OptiTrack

In robotics, motion capture can provide accurate localization within a defined space. Among many other things, this can be used for evaluating the responses of systems or for easing the development and real-world testing of path planning algorithms.

A popular recent example of motion capture used for robotics can be seen in StuffMadeHere's video on the latest iteration of his robotic basketball goal.


Recently, a motion capture system was installed in one of the labs at my university and I was shown how to use it. The system consists of several OptiTrack Flex 3 cameras installed in the ceiling of a regular classroom and connected to a Windows computer.

The system has an available ROS/ROS2 driver that can be seen here:
https://github.com/ros-drivers/mocap_optitrack

TLDR: mocap is cool

Quick-Start Guide:
1. Make sure that the power strips along the wall next to the lab computer are turned on.

2. On the lab computer, after logging into your student account, launch Motive:


Something like this should appear:

3. Place the silver reflector balls on the body that you want to track. The reflectors should be visible as orange points within Motive.
4. Left-click and drag to select the placed reflectors. Once selected, right-click and create a rigid body.


Creating a rigid body:

5. Once the rigid body is created, network streaming needs to be turned on within Motive in order to access the pose/position of the rigid body with the ROS driver. To do this, there should be an icon like the following available in the top toolbar:


6. Once the streaming pane has been opened, be sure that broadcasting is turned on and a local network interface is selected. Within this pane, the ports and multicast address required by the ROS driver are hidden within the advanced section.


7. Within the advanced section, the following should be shown:


These ports and addresses need to align with the config required by the ROS driver.
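For reference, here is a sketch of roughly what that config can look like, loosely modeled on the sample config in the mocap_optitrack repo. The rigid body ID, topic names, and network values below are all placeholders, and the exact keys may differ between the ROS 1 and ROS 2 versions, so check the repo's own sample config:

    # Placeholder values -- the address and ports must match what Motive's
    # advanced streaming section shows (239.255.42.99, 1510, and 1511 are
    # common NatNet defaults).
    rigid_bodies:
      '1':                             # streaming ID assigned by Motive
        pose: Robot_1/pose             # PoseStamped topic to publish
        pose2d: Robot_1/ground_pose    # Pose2D topic to publish
        child_frame_id: Robot_1/base_link
        parent_frame_id: world
    optitrack_config:
      multicast_address: 239.255.42.99
      command_port: 1510
      data_port: 1511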



8. Be sure that you are connected to the same network as the lab computer, either by connecting to the Wi-Fi network of the router on the opposite wall or by connecting to one of the Ethernet ports on that same wall. Note that only some of the Ethernet ports work.

Notes:

- The ROS 2 driver, while only released for ROS 2 Foxy, can be built from source and has been tested with ROS 2 Humble. (A minimal subscriber sketch for checking the driver's output follows these notes.)

- If there is a lot of other lighting in the area, or if a reflector can only be seen by a couple of the cameras, the reported location of the reflector may be noisy.

- Further documentation for Motive can be seen here: https://docs.optitrack.com/
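As a quick sanity check that pose data is actually flowing, here is a minimal ROS 2 subscriber sketch. The topic name Robot_1/pose is an assumption based on the placeholder config above; check ros2 topic list for the actual name.

    # Minimal rclpy subscriber for a rigid body pose.
    # 'Robot_1/pose' is a placeholder topic name from the example config.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped

    class MocapListener(Node):
        def __init__(self):
            super().__init__('mocap_listener')
            self.create_subscription(PoseStamped, 'Robot_1/pose', self.on_pose, 10)

        def on_pose(self, msg):
            p = msg.pose.position
            self.get_logger().info(f'x={p.x:.3f} y={p.y:.3f} z={p.z:.3f}')

    def main():
        rclpy.init()
        rclpy.spin(MocapListener())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()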



Saturday, July 23, 2022

Ground Speed Sensor development part 1

Preface:

A ground speed sensor measures the speed of a ground vehicle with respect to the ground underneath it. In this context, the speed is two-dimensional.

This sensor directly measures what we would normally infer from a variety of sensors such as wheel speed, RTK GPS, etc.

The inspiration behind this has primarily come from Henri Norden's thesis on a ground speed sensor developed for his Formula Student car to emulate sensors such as the Kistler Correvit SFII. I am developing an iteration of this sensor to provide ground speed for the Autonomous Go-Kart project I lead.

The initial work:

Previously, I had been focused on sourcing sensors and other hardware for this project; however, I also started to think about how we could actually begin developing the software for this sensor, which will be running on what will probably end up being a Jetson. For this, I started thinking about the actual output that the camera creates, which is really mostly all the same texture and is something we could fairly easily simulate with OpenCV and some asphalt textures from online.
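As a rough sketch of that idea: crop a small window out of a large ground texture and translate it each frame to mimic a downward-facing camera. The filename asphalt.jpg is a placeholder for whatever texture we end up using.

    # Simulate a downward-facing camera by sliding a crop window
    # across a large asphalt texture. 'asphalt.jpg' is a placeholder.
    import cv2

    texture = cv2.imread('asphalt.jpg')
    assert texture is not None, 'texture image not found'

    cam_w, cam_h = 320, 240      # simulated camera resolution
    x, y = 0.0, 0.0              # camera position, in texture pixels
    vx, vy = 40.0, 10.0          # commanded velocity, pixels per frame

    while True:
        frame = texture[int(y):int(y) + cam_h, int(x):int(x) + cam_w]
        cv2.imshow('simulated camera', frame)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
        # advance and wrap so the window never runs off the texture
        x = (x + vx) % (texture.shape[1] - cam_w)
        y = (y + vy) % (texture.shape[0] - cam_h)

    cv2.destroyAllWindows()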

Here is what the initial simulation looks like:


In addition to this simulated camera output, we need to be able to produce a ground-truth velocity reference. To do this, we are currently thinking that we might as well tie our velocity to a real-world velocity and use a real-world reference image for our tiled surface. Accordingly, we decided to build a jig to hold a camera, which will take a picture of flat ground with a reference object (like a penny) in frame so we can accurately estimate scale.
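Once that reference photo exists, converting pixels to meters is simple arithmetic. A sketch with made-up measured values (a US penny is 19.05 mm across):

    # Estimate image scale from a reference object of known size.
    # 42.0 px is a made-up measurement from the reference photo.
    PENNY_DIAMETER_M = 0.01905
    penny_diameter_px = 42.0
    pixels_per_meter = penny_diameter_px / PENNY_DIAMETER_M

    # A pixel displacement between frames then converts to a velocity:
    # v = (displacement_px / pixels_per_meter) * fps
    dx_px, dy_px, fps = 12.0, 3.0, 30.0
    vx = dx_px / pixels_per_meter * fps
    vy = dy_px / pixels_per_meter * fps
    print(f'v = ({vx:.3f}, {vy:.3f}) m/s')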

We may even be able to use an overhead camera jig like this:






Next, we need to be able to command a path and velocity for our virtual camera to travel along, and to visualize the ground-truth velocity vector within the camera frame. I am thinking the length of the arrow would show the magnitude, with the direction of the arrow mirroring the direction of the moving box.
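A minimal sketch of that overlay with OpenCV, using toy values for both velocities:

    # Draw a velocity arrow at the center of the frame; arrow length
    # encodes magnitude, arrow direction mirrors the motion direction.
    import cv2
    import numpy as np

    def draw_velocity(frame, vx, vy, color, scale=2.0):
        h, w = frame.shape[:2]
        tip = (int(w / 2 + vx * scale), int(h / 2 + vy * scale))
        cv2.arrowedLine(frame, (w // 2, h // 2), tip, color, 2, tipLength=0.2)

    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    draw_velocity(frame, 40, 10, (0, 0, 255))   # ground truth, red (BGR)
    draw_velocity(frame, 38, 12, (255, 0, 0))   # vision estimate, blue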


This red arrow could be the ground truth versus, say, a blue arrow showing the machine-vision-derived velocity.
