CS 371 Cozmo Project Documentation

Introduction

Our group programmed a Cozmo robot to localize itself using the Monte Carlo Localization (MCL) algorithm.

Goals

Robot Localization Steps

Software Used

PyCharm -- Used for writing the Python code.

Jupyter Notebook -- Used for Python coding and for quickly viewing image output while testing.

Set Up

For Windows:

For Android:

Final Installation step for Mobile Setup:

Run Some Example Programs

  • It is a good idea to run a few more of the example programs to test and ensure you are set up correctly. For brevity, this document only tests the "hello world" code because it is iconic.
  • Code:
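    Below is a minimal version of the hello_world example that ships with the Cozmo SDK; if your copy of the SDK differs, prefer the version bundled with it.

        import cozmo

        def cozmo_program(robot: cozmo.robot.Robot):
            # Say a phrase out loud to confirm the SDK connection works
            robot.say_text("Hello World").wait_for_completed()

        cozmo.run_program(cozmo_program)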

  • Click the link to access the code:
  • mcl code

Testing

To ensure the image slicing method worked, we used Jupyter Notebook with Python. We uploaded the panorama we obtained from Cozmo and took advantage of how quickly and easily Jupyter Notebook displays output.
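
A minimal sketch of this kind of panorama slicing, assuming OpenCV (which we relied on elsewhere in the project); the file name, window width, and helper name are illustrative, and the window wraps around because the panorama covers a full 360 degrees:

    import cv2

    panorama = cv2.imread("panorama.png")  # the stitched map from Cozmo
    height, width = panorama.shape[:2]

    def slice_at(center_px, view_width=160):
        # Gather view_width columns centered on center_px, wrapping around
        # the seam because column 0 and column width - 1 are adjacent
        cols = [(center_px - view_width // 2 + i) % width
                for i in range(view_width)]
        return panorama[:, cols]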

To test the robot, we first picked an area that would work well for a panorama. A good location has good lighting, no moving objects, and a lot of variation within the environment. Cozmo would then execute its panorama-making procedure and create the map. MCL would run with the created map, using images taken at varying angles as its inputs. MCL uses particles that represent a distribution of localization probabilities. To test where Cozmo localized itself, we output the median particle after 10 iterations and use that particle's location to determine where it is. If that location is on or near Cozmo's actual final angle, we consider the test successful. Pixel positions are converted to angles using the panorama's width in pixels divided by 360, which gives the number of pixels in one degree. We also tested by outputting the final image Cozmo took at its final heading and seeing where it lined up in the panorama. This method was faster and was used for the majority of the project because it is simple to debug.
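
The conversion described above amounts to dividing a pixel position by the number of pixels per degree; a worked version, with the function name ours for illustration:

    def pixel_to_degrees(pixel, panorama_width):
        # panorama_width / 360 pixels span one degree of heading
        pixels_per_degree = panorama_width / 360.0
        return pixel / pixels_per_degree

    # e.g. a 3600-pixel-wide panorama gives 10 pixels per degree,
    # so pixel 1800 corresponds to a heading of 180 degrees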

MCL Algorithm

  • First, we have Cozmo scan its environment and take pictures to create a panoramic view; that image becomes the map Cozmo will use.
  • Then we have Cozmo move randomly, meaning it turns a random number of degrees.
  • We lay the panorama picture out as a one-dimensional line, starting at 0 degrees on the left and ending at 360 degrees on the right. In the code, we also measure positions in pixels.
  • We then randomly assign each of 300 population points to a pixel; the pose of a population point is the image cut from the panorama centered at that pixel.
  • For each of the population points, we compare its image slice to what Cozmo currently sees and compute a weight from the similarity.
  • Redistribute the population points based on those weights.
  • Repeat this for a set number of iterations; we use 5.
  • After the iterations, we take the pixel with the most population points and find where it is located on the panorama.
  • We then take that value and have Cozmo turn from that degree back to degree 0.
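
A minimal sketch of that loop, with hypothetical helpers standing in for the pieces described above: slice_at cuts the panorama window for a pixel, similarity scores two images, and robot_view turns Cozmo and returns its new camera image.

    import random

    def run_mcl(panorama_width, slice_at, similarity, robot_view,
                n_particles=300, iterations=5):
        # Spread the population points uniformly over the panorama's columns
        particles = [random.randrange(panorama_width) for _ in range(n_particles)]
        for _ in range(iterations):
            # Motion update: turn a random amount and shift every
            # particle by the same number of pixels (wrapping at 360)
            turn_px = random.randrange(panorama_width)
            observation = robot_view(turn_px)
            particles = [(p + turn_px) % panorama_width for p in particles]
            # Sensor update: weight each particle by how well its
            # panorama slice matches what Cozmo actually sees
            weights = [similarity(slice_at(p), observation) for p in particles]
            # Redistribute the population in proportion to the weights
            particles = random.choices(particles, weights=weights, k=n_particles)
        # The estimate is the pixel holding the most population points
        return max(set(particles), key=particles.count)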

Conclusion

In the end we had all the pieces; however, we could not integrate them as smoothly as we wanted. At different points in time, certain components worked exactly right while others were a little less accurate. Part of the issue may be that we divided the implementation among different pairs of people, so when we combined everything we ran into unexpected incompatibility errors. In our efforts to fix them we lost accuracy in some areas of the code: changes we made ended up making other parts of the program work less well.

As a group we each had our roles, and each of us filled that role to the best of our ability. Through pair coding we were able to make each part of the program work, and we were able to localize the robot. However, there was a lack of consistency in the panorama's starting location: on some runs the panorama placed degree 0 in the middle, and on others at the leftmost pixel. Other than this error, Cozmo is able to localize with decent accuracy given a well-generated panorama.

Future improvements:

There are multiple areas of this project that we feel could be improved in future iterations. First, we believe one of the greatest areas for improvement is how we represent our data. As currently configured, the program requires reading a degree value and comparing it against the original map we give Cozmo. Implementing a real-time GUI would let us visualize how MCL is working over time. Another area that could be improved is the way we manipulate our images: we relied heavily on OpenCV, and we are unsure exactly how its module manipulates the images. Using a machine with only two cores made it difficult to manipulate many images at the same time, and sometimes introduced failures. We also believe that the way error is introduced into our population could be improved.

The sample motion model method did not work: it returned a value that was never updated correctly. It should take the current pose into account.
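
A sketch of what a corrected version might look like, assuming the pose is a pixel column on the panorama; the names and noise level are illustrative:

    import random

    def sample_motion_model(current_pose_px, turn_px, panorama_width, noise_std=5.0):
        # The new pose must start from the current pose, apply the commanded
        # turn, and add noise -- not return a value that never updates
        noise = random.gauss(0, noise_std)
        return int(round(current_pose_px + turn_px + noise)) % panorama_width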
