Anki Cozmo Localization

Project Overview

Gettysburg College Computer Science

CS371 Spring 2023

Authors: Binh Tran, Owen Gilman, Spencer Hagan, Ben Durham

Goals

We set out to program the Anki Cozmo robot to localize itself rotationally using Monte Carlo localization. Concretely, we should be able to do the following: have the robot familiarize itself with its environment, noting a specified "home position"; rotate it arbitrarily so that it no longer faces the home position; and have the robot determine where it is and rotate itself back to the home position.

Materials Needed

Software Prerequisites

Setup Instructions

Brief Overview of Our Work

In this project, we write a program to localize the Anki Cozmo robot rotationally. To do this, the robot first takes many pictures of its environment to "get its bearings"; the first picture it takes denotes its "home position." Then, after a random rotation, either by external intervention or programmatically, the robot takes pictures to figure out where it is. From this information, it then rotates back to its home position.

We apply Monte Carlo localization in the robot's attempt to find where it is. We start with a randomly generated population of possible rotational positions. Then, based on the robot's movement and on the image difference between the robot's current camera view and the stored image for each candidate position, it generates a new population of possible positions, weighted toward the candidates whose stored images best match the current view.
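As a rough illustration (not the notebook's exact code; the helper name image_difference and all parameter values here are assumptions), one resampling step might look like this:

    import random

    POPULATION_SIZE = 200

    def resample(population, image_difference):
        """One Monte Carlo localization step over candidate headings (degrees)."""
        # A smaller image difference means a more likely position, so invert
        # the difference to get a resampling weight.
        weights = [1.0 / (1.0 + image_difference(angle)) for angle in population]
        # Draw the next generation, favoring the most likely angles, then add
        # a little noise so the population does not collapse to a single value.
        survivors = random.choices(population, weights=weights, k=POPULATION_SIZE)
        return [(angle + random.gauss(0, 5)) % 360 for angle in survivors]

    # Start from a uniformly random population of candidate headings.
    population = [random.uniform(0, 360) for _ in range(POPULATION_SIZE)]

Each call to resample produces the next generation of candidate headings, concentrating the population around the angles that best explain what the robot currently sees.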

Our Code and How to Use It

We have our code available here for direct download as Python files so that they can be viewed in the browser:

ImageGathering.py

SlidingWindowLocalize.py

However, we recommend the Jupyter Notebook versions. They can be accessed via:

ImageGathering.ipynb

SlidingWindowLocalize.ipynb

(These are not in a browser-friendly format, so we recommend downloading them and opening them in VS Code.)

Our code can also be downloaded from this repository or cloned using the following command:

    git clone https://github.com/bendurham441/cs371-cozmo-fourthhour.git

We suggest that future groups fork this repository to have their own working copy.

Image Gathering

One should first run ImageGathering.ipynb, which creates an images/ directory and populates it with images. Note that you need to update the parent_dir variable in the second code block of this file to reflect the actual file path on your computer. After that, the file can be run from top to bottom as a whole. As stated before, the first image taken becomes the robot's home position, so place the robot facing the direction you want to be home. A sketch of the gathering loop follows.
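For orientation, the loop has roughly the following shape (a sketch under assumed parameter values, such as the 15-degree step and the file names; the notebook itself is the reference):

    import os
    import cozmo
    from cozmo.util import degrees

    parent_dir = "/path/to/cs371-cozmo-fourthhour"  # update to your file path

    def gather_images(robot: cozmo.robot.Robot):
        robot.camera.image_stream_enabled = True
        image_dir = os.path.join(parent_dir, "images")
        os.makedirs(image_dir, exist_ok=True)
        step = 15  # degrees between pictures; one full revolution in total
        for i in range(360 // step):
            # Wait for a fresh camera frame and save it to the images/ folder.
            event = robot.world.wait_for(cozmo.world.EvtNewCameraImage)
            # The first image (i == 0) becomes the robot's home position.
            event.image.raw_image.save(os.path.join(image_dir, f"{i * step}.png"))
            robot.turn_in_place(degrees(step)).wait_for_completed()

    cozmo.run_program(gather_images)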

Sliding Window Localize

After the images/ folder is populated, one can run the contents of SlidingWindowLocalize.ipynb. Note that this file contains a programmatic random rotation of the robot for testing purposes, but it is commented out to allow for manual random rotations of the robot. In its current state, the robot will wait for the user to rotate it, then start the localization process.
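If you would rather test without touching the robot, the commented-out rotation can be re-enabled; a hypothetical equivalent looks like this (see the notebook for the exact line):

    import random
    import cozmo
    from cozmo.util import degrees

    def random_rotation(robot: cozmo.robot.Robot):
        # Turn the robot a random amount so it no longer faces home.
        robot.turn_in_place(degrees(random.randint(0, 359))).wait_for_completed()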

As the robot iterates, this file prints histograms to the Jupyter Notebook output showing the robot's beliefs about where it is currently facing, which is useful for watching the robot zero in on its actual location. These histograms simply show how many candidate positions in the current population fall into each "bin."
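The plotting itself is straightforward; a minimal version, assuming matplotlib and 10-degree bins (both assumptions, not necessarily the notebook's choices), would be:

    import matplotlib.pyplot as plt

    def plot_beliefs(population, bin_width=10):
        # Bin the current candidate headings (degrees) to visualize where the
        # robot's belief about its facing direction is concentrating.
        plt.hist(population, bins=range(0, 361, bin_width))
        plt.xlabel("Candidate heading (degrees)")
        plt.ylabel("Candidates in bin")
        plt.show()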

Conclusion

We have found this method to be very successful under most conditions and in multiple different environments. However, there have been some rare cases where the method fails and the robot rotates back to a position that is not the home position.

Future Steps

It is possible that our approach would not be as successful in other environments. We suggest tuning some of the various parameters to adapt the approach to such environments.