CS 371 - Cozmo Project Documentation
Introduction:
The goal of this project is to simulate the "Kidnapped Robot Problem" by implementing Monte Carlo Localization on an Anki Cozmo robot. The robot uses its built-in camera and movement capabilities to gather optical information about its surroundings, which it then uses after being "kidnapped" to work out where it is.
We originally started with a bit of help from a prior class's work, and their credits (as well as ours) are listed in the corresponding section.
Goals:
- Learn how to create programs for a Cozmo robot using Python
- Improve upon the Monte Carlo Localization implementation of the previous group
- Speed up the computation of the previous group's implementation through rewrites and optimization
- Successfully implement Monte Carlo Localization using a Cozmo unit
Localization Process Outline:
- 1. Rotate a full 360 degrees in set intervals, taking an image of the surroundings at each stop
- 2. "Stitch" those images together to construct a panorama of the robot's surroundings (a sketch of steps 1 and 2 follows this list)
- 3. Rotate the Cozmo robot a random number of degrees between 0 (inclusive) and 360 (exclusive) to "kidnap" it, scrambling its sense of where it is relative to the panorama
- 4. Perform Monte Carlo Localization to determine which heading the robot ended up facing
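To make steps 1 and 2 concrete, here is a minimal sketch of the capture-and-stitch loop using the Cozmo SDK and OpenCV. The interval count and function name are illustrative choices of ours, not fixed by the project code:

    import cozmo
    import cv2
    import numpy as np
    from cozmo.util import degrees

    NUM_SHOTS = 12  # illustrative choice: 360 / 12 = 30 degrees per step

    def capture_and_stitch(robot: cozmo.robot.Robot):
        # Step 1: rotate in place in set intervals, grabbing a frame at each stop.
        robot.camera.image_stream_enabled = True
        robot.camera.color_image_enabled = True
        shots = []
        for _ in range(NUM_SHOTS):
            robot.world.wait_for(cozmo.world.EvtNewCameraImage)
            shots.append(np.array(robot.world.latest_image.raw_image.convert("RGB")))
            robot.turn_in_place(degrees(360 / NUM_SHOTS)).wait_for_completed()
        # Step 2: stitch the frames into a single panorama.
        status, panorama = cv2.Stitcher_create().stitch(shots)
        if status == cv2.Stitcher_OK:
            cv2.imwrite("panorama.png", panorama)

    cozmo.run_program(capture_and_stitch)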
Computer Setup:
- 1. Install the latest version of Python: "https://www.python.org/downloads/"
- During installation, tick the "Add Python to Path" checkbox on the first screen of the installer window
- Then click "Install"
- 2. Install the Anki Cozmo SDK and example programs: "http://cozmosdk.anki.com/docs"
- Follow the instructions listed in 'Initial Setup', 'Installation - [Your Type of OS]', and 'Android Debug Bridge'.
- Here is a list of module versions that are known to work with each other:
- Linux:
- cozmo 1.4.10
- Pillow 9.1.0
- numpy 1.21.0
- opencv-python 4.5.5.64
- pandas 1.3.5
- numba 0.55.1
- Windows:
- cozmo 1.4.10
- Pillow 9.0.1
- numpy 1.21.6
- opencv-python 4.5.5.64
- pandas 1.4.2
- numba 0.55.1
- We didn't have the opportunity to run our code on a Mac system; if you decide to try it, default to one of these setups and adjust the module versions if they don't work.
- Note: if you use Windows, use the "python" command instead of "py" and the "pip3" command instead of "pip" wherever a guide recommends the latter, as those seem to call the wrong version of Python, which won't have access to any of the modules you just installed. Figuring that out was a source of a lot of headaches for us early on. On Linux, the guides should recommend the "python3" and "pip3" commands, but in the event that they don't, use those instead.
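For reference, the pinned Windows setup above can be installed in one command (swap in the Linux versions as appropriate):

    pip3 install cozmo==1.4.10 Pillow==9.0.1 numpy==1.21.6 opencv-python==4.5.5.64 pandas==1.4.2 numba==0.55.1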
Mobile Phone Setup:
- 1. Enable USB debugging on your phone.
- The process of enabling this setting differs from phone to phone. You can find instructions specific to your device on Anki's website, or look up how to enable it on your device if that fails.
- 2. Run a USB cable from your computer to your phone.
- In a terminal window, type in the command "adb devices" to verify the connection between the two devices.
- 3. Install the official Cozmo app onto your device.
- With the Cozmo app running, turn the robot on and connect your phone to the robot's WiFi network. The Cozmo unit hosts its own network for wireless communication and displays the network password on its screen.
- 4. Within the app, go to the Main Menu > "Settings" > "Cozmo SDK" > "Enable SDK"
- Before you do that, we recommend you also mute the robot in the Settings section, as it's pretty loud.
- 5. With all of that set up, execute your Python scripts via the terminal window and the robot should respond.
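As a quick end-to-end check (our suggestion, not part of the project code), a minimal script like this should make the robot turn in place if everything is wired up correctly:

    import cozmo
    from cozmo.util import degrees

    def smoke_test(robot: cozmo.robot.Robot):
        # If the phone, adb, and SDK are all connected, Cozmo turns 90 degrees.
        robot.turn_in_place(degrees(90)).wait_for_completed()

    cozmo.run_program(smoke_test)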
Code:
Here is a .zip file containing our code.
Conclusion:
Our implementation wasn't able to get the robot to fully localize. We believe our Monte Carlo Localization implementation is better than the last team's, but we also believe we've given the algorithm too little optical information to accurately determine where it is: only a single column of pixels' worth of information rather than a whole image. We did that because we weren't able to find a way to have the image sampling wrap around if it began near the end of the panorama image; to do that, we'd need to tack an "extra image" onto the end of the panorama.
On a more positive note, the algorithm now runs much faster than before. We applied image downsampling to the panorama and to the reference images the robot takes during its localization process, cutting the computational load of the localization algorithm. In addition, we implemented binary search in the new-population member selection process, as we noticed the old iteration was terribly slow at picking new members. Finally, we used Python's "numba" module to run the image processing code on the GPU, allowing us to do the computations faster than on the CPU.
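To illustrate the resampling speedup (a sketch of the idea, not our exact code): once the cumulative weights are computed, each new population member can be drawn with a binary search via numpy's searchsorted instead of a linear scan:

    import numpy as np

    def resample(members, weights, rng=None):
        # Build the cumulative weight array once, then binary-search it per draw.
        # This is O(n log n) overall versus O(n^2) for repeated linear scans.
        rng = rng or np.random.default_rng()
        cumulative = np.cumsum(weights, dtype=float)
        cumulative /= cumulative[-1]  # normalize so the last entry is 1.0
        draws = rng.random(len(members))
        indices = np.searchsorted(cumulative, draws)  # one binary search per draw
        return [members[i] for i in indices]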
Future Improvements:
First off, the next group to work on this should find a way to get the wraparound working, or find some other way to achieve the same effect. One idea we came up with (but didn't have time to test) was to convert the panorama image to an array, along with a small duplicate slice from the start of the panorama, and concatenate the two, since you can't stick them together with OpenCV's Stitcher this way. Then you could take samples the size of a standard Cozmo camera image.
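A minimal sketch of that idea with numpy (the function and parameter names are ours): append the first few columns of the panorama array to its right edge, after which any sample window can be taken with plain slicing:

    import numpy as np

    def pad_for_wraparound(panorama, window_width):
        # Duplicate the first window_width columns onto the right edge so a
        # sample window starting near the seam can wrap around the panorama.
        return np.concatenate((panorama, panorama[:, :window_width]), axis=1)

    # Usage: every start column in [0, original_width) now yields a full window.
    # sample = padded[:, start:start + window_width]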
Second, there are probably some issues with the way we calculate the final best guess at the end of the localize method. That code is largely untested, and we're unsure whether it reliably calculates the median point of the remaining members of the initial population; you might need to fiddle with the cutoff to keep more or fewer members for a more accurate final result.
Another thing you could do is some image manipulation to make the panorama shot's left side actually represent the point where the robot looks while taking its first "stitchpic", i.e., "0 degrees". Currently, the approximate center of the image is the "0 degree" point. Failing that, you could put a marker on the panorama image itself to make it clearer to the user where the robot believes it's looking.
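If the column corresponding to the first stitchpic is known (currently it sits near the image center), recentering is a one-line shift with numpy; a sketch under that assumption:

    import numpy as np

    def recenter_panorama(panorama):
        # Shift the columns left by half the width so the former center column
        # (the "0 degree" heading) becomes the left edge of the image.
        return np.roll(panorama, -(panorama.shape[1] // 2), axis=1)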
Once all that is done, further optimize the code. Fast (working) code is always good.
Credits:
Authors of current iteration: Charlie Dale, Raquel Delgado, and Ben DelBaggio
Authors of first iteration: Nick Weinel, Charlie Stewart, Matt Ainsworth, Jake Poff, and Parker Sorenson