The Code

The most challenging, and interesting, aspect of this challenge was delving into the leJOS environment and adapting more familiar methods of coding to its new subsumption-based architecture. Along the way, we discovered more than a few pitfalls. We have included, below, a list discussing some of the more important problems we encountered, as a guide for future groups, so they have some clues as to what to look out for and what might be making their robot act strangely.

Finally, we get to our actual code. To begin our work, we drew heavily from sample code included with the leJOS system that used the new syntax and behavior structure. Fortunately, this sample was even a line follower, though a crude one that used only a single light sensor. We expanded on it to build a more robust line follower that provided output and took advantage of multiple sensors. We further tweaked the various values (and added some new ones) to optimize the concept into a functional line follower for our specific situation. After this, we crafted an additional behavior to handle intersections for challenges 2 and 3. The code itself can be viewed below and is extensively commented to make it clear and easy to follow. Below, we provide a more detailed overview of the code and our design goals.
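
To make the behavior structure concrete, here is a minimal sketch of the subsumption skeleton our code builds on, assuming the leJOS NXJ flavor of the Behavior and Arbitrator API. The class names, sensor ports, threshold values, and motor assignments in this and the following sketches are illustrative assumptions, not a transcript of our actual code.

```java
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;
import lejos.robotics.subsumption.Arbitrator;
import lejos.robotics.subsumption.Behavior;

public class LineFollower {
    public static void main(String[] args) {
        // Hypothetical port assignments: left, center, and right light sensors.
        LightSensor left   = new LightSensor(SensorPort.S1);
        LightSensor center = new LightSensor(SensorPort.S2);
        LightSensor right  = new LightSensor(SensorPort.S3);

        // Later entries in the array have higher priority; the Arbitrator
        // runs the highest-priority behavior whose takeControl() is true.
        Behavior[] behaviors = {
            new DriveForward(center),              // follow while on the line
            new FindLine(left, center, right),     // sweep when the line is lost
            new Intersection(left, center, right)  // handle intersections
        };
        new Arbitrator(behaviors).start();
    }
}
```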

On the Code

Challenge 1 - Line Following: The primary strategy for this section of the code was based on the methods used in the sample code. Namely, we had two behaviors: one that controlled the robot when a line was detected and one that controlled it when the line was lost. We decided that the center sensor detecting a line under the robot was sufficient to trigger the first behavior; when the robot detected this, it simply drove forward until it lost the line. The second behavior triggered when the center sensor did not detect a line. Here we drew from the sample code to have the robot sweep its immediate area, rotating through progressively larger arcs in hopes of running its center sensor over a line, after which it could continue onward. However, we built upon the rather simplistic design of the sample code to account for multiple sensors. To do this, we added code to detect whether one of the robot's side sensors was detecting a line. If so, the robot was told to just turn in that direction, as randomly sweeping the area would be foolish. Further, if one of the robot's other sensors picked up a line during the sweep, the robot continued the sweep in the direction of the sensor that detected it. This made the robot more efficient in cases where its center sensor would otherwise miss a line and force two additional sweeps (one in the opposite direction and then a larger one back in the original direction), and it also made the robot a bit less "twitchy" overall. A sketch of the two behaviors follows.
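
As a rough illustration, the two behaviors might look like the following. The threshold, sweep sizes, and motor ports are assumptions, and this simplified version only rechecks the sensors between arcs rather than continuously during a pivot, as our prose describes.

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.robotics.subsumption.Behavior;

// Drives straight whenever the center sensor sees the line.
class DriveForward implements Behavior {
    private static final int THRESHOLD = 40; // hypothetical dark/light cutoff
    private final LightSensor center;
    private volatile boolean suppressed;

    DriveForward(LightSensor center) { this.center = center; }

    public boolean takeControl() {
        return center.readValue() < THRESHOLD; // dark line under the robot
    }

    public void action() {
        suppressed = false;
        Motor.A.forward();
        Motor.C.forward();
        while (!suppressed) Thread.yield(); // drive until suppressed
        Motor.A.stop();
        Motor.C.stop();
    }

    public void suppress() { suppressed = true; }
}

// Sweeps back and forth in widening arcs when the line is lost,
// biasing the first sweep toward a side sensor that already sees a line.
class FindLine implements Behavior {
    private static final int THRESHOLD = 40;
    private final LightSensor left, center, right;
    private volatile boolean suppressed;

    FindLine(LightSensor left, LightSensor center, LightSensor right) {
        this.left = left;
        this.center = center;
        this.right = right;
    }

    public boolean takeControl() {
        return center.readValue() >= THRESHOLD; // center sensor off the line
    }

    public void action() {
        suppressed = false;
        int arc = 30; // starting sweep size, in motor degrees (a guess)
        boolean clockwise = true; // arbitrary default direction
        if (left.readValue() < THRESHOLD)  clockwise = false;
        if (right.readValue() < THRESHOLD) clockwise = true;
        while (!suppressed && center.readValue() >= THRESHOLD) {
            pivot(clockwise ? arc : -arc);
            clockwise = !clockwise; // alternate direction each pass
            arc += 30;              // and widen the sweep
        }
    }

    public void suppress() { suppressed = true; }

    // Pivot in place by running the wheels in opposite directions.
    private void pivot(int degrees) {
        Motor.A.rotate(degrees, true);   // returns immediately
        Motor.C.rotate(-degrees, false); // blocks until the turn completes
    }
}
```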

Challenge 2 - Intersection Detection: For challenge two, we created a new behavior, called intersection, that triggered when all three light sensors detected a line. We chose not to trigger the intersection code when only two sensors picked up a line, because ninety-degree turns would then register an "intersection" where none actually existed. The behavior itself simply ordered the robot to stop and do nothing. Initially, we had issues with this code because the robot would sometimes turn in just such a way that all of its sensors landed on the same line segment, triggering the intersection code. We eventually fixed this by separating the sensors slightly, as discussed on the robot design page.
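
A sketch of the detection condition, under the same assumptions as the earlier sketches:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.robotics.subsumption.Behavior;

// Stops the robot when all three sensors see a line at once.
class Intersection implements Behavior {
    private static final int THRESHOLD = 40; // hypothetical cutoff
    private final LightSensor left, center, right;

    Intersection(LightSensor left, LightSensor center, LightSensor right) {
        this.left = left;
        this.center = center;
        this.right = right;
    }

    public boolean takeControl() {
        // Require all three sensors, not just two: a ninety-degree turn
        // can put two sensors on the same line segment.
        return left.readValue() < THRESHOLD
                && center.readValue() < THRESHOLD
                && right.readValue() < THRESHOLD;
    }

    public void action() { // challenge 2: just stop at the intersection
        Motor.A.stop();
        Motor.C.stop();
    }

    public void suppress() { }
}
```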

Challenge 3 - Intersection Choosing: For challenge three, we modified the intersection behavior by adding an integer variable n that told the robot how many spokes it should count off. (See the challenge description for details.) We also added code to make the robot dash forward briefly when it detected an intersection so that its axle would lie over the intersection itself. After this, the robot rotated a little under 180° and started making short pivots clockwise. Every time the center sensor found a line, it decremented the variable; when the variable reached zero, the behavior terminated and the robot continued along that line. We ran into two primary problems here. The first was turning the proper distance: the robot's turn angle seemed fairly arbitrary relative to the value given to it, and the API did not indicate what units the pivot method took. As such, we experimented and found that, at a given value, running the pivot command twice approximated what we were looking for. The second problem was with the behavior itself. The robot would run it once, but then stall at the dash-forward step on every subsequent intersection until the robot was restarted. This was ultimately solved by creating a second variable, called initN, which held the number of spokes the robot should count initially, i.e., before it started looking. Resetting n to initN at the start of the behavior solved the problem.
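
A hypothetical sketch of the modified behavior follows. The dash distance, pivot step, initial turn value, and edge-counting detail are illustrative guesses (as noted above, our own pivot values came from trial and error); n and initN correspond to the variables described in the text.

```java
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.robotics.subsumption.Behavior;

// Challenge 3 version: instead of just stopping, count off n spokes.
class ChoosingIntersection implements Behavior {
    private static final int THRESHOLD = 40; // hypothetical cutoff
    private final LightSensor left, center, right;
    private final int initN; // spokes to count, fixed at construction
    private int n;           // working counter, reset on every run

    ChoosingIntersection(LightSensor left, LightSensor center,
                         LightSensor right, int initN) {
        this.left = left;
        this.center = center;
        this.right = right;
        this.initN = initN;
    }

    public boolean takeControl() {
        return left.readValue() < THRESHOLD
                && center.readValue() < THRESHOLD
                && right.readValue() < THRESHOLD;
    }

    public void action() {
        n = initN; // without this reset, the behavior only worked once

        // Dash forward so the wheel axle sits over the intersection.
        Motor.A.rotate(180, true);
        Motor.C.rotate(180);

        // Turn a little under 180 degrees one way, then pivot back
        // clockwise in short steps, counting spokes as the center
        // sensor crosses them.
        pivot(-340); // motor degrees, not robot degrees; a guess
        boolean onLine = false;
        while (n > 0) {
            pivot(15); // short clockwise pivot
            boolean seeing = center.readValue() < THRESHOLD;
            if (seeing && !onLine) {
                n--; // count each spoke once, on its leading edge
            }
            onLine = seeing;
        }
        // The behavior ends here; line following resumes on the chosen spoke.
    }

    public void suppress() { }

    // Pivot in place by running the wheels in opposite directions.
    private void pivot(int degrees) {
        Motor.A.rotate(degrees, true);
        Motor.C.rotate(-degrees, false);
    }
}
```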

Read the Code