
Walk the Line May 20, 2011

Posted by embeddedmotioncontrol in Uncategorized.

We have nearly finished the software that makes our rover follow a line, so it's about time to write about how we got there. The goal is to make the robot follow a line of any color or width on a flat surface. The robot should also tell us as precisely as possible how long the line is. This is done in competition with other groups: the most accurate measurement is worth a case of beer, so we are doing our very best to perfect the measurement.

First off we started playing with the Lego Mindstorms light sensors our rover is equipped with. We soon discovered that the difference between the values measured on surfaces of different brightness is not that big, and that it depends a lot on lighting conditions, so edge detection based on an absolute threshold value cannot be used. Instead, we should look at the difference between two subsequent measurements to determine whether an edge has been crossed.

First version

The first version of the software does just that. It uses two light sensors positioned above the surface, just outside the line. Every iteration, the measured brightness value is stored so it can be compared with the value measured in the next iteration. When a transition from light to dark or vice versa is detected, the sensor is now above the line and the robot should steer to get the sensor back above the background. This version works fairly well, but because it is based on instantaneous measurements it is very prone to outliers. It also makes no distinction between transitions from light to dark and from dark to light: it assumes a known starting position, and assumes that every edge detection is caused by the sensor moving over the line when it was over the background before, or moving over the background when it was over the line before. Obviously, a single false edge detection means the robot no longer knows whether the sensor is over the line or over the background.
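To make the idea concrete, here is a minimal sketch of this difference-based edge detection. It is not the actual robot code; the function name and threshold value are illustrative assumptions.

```python
THRESHOLD = 5  # assumed minimum brightness change treated as an edge


def detect_edge(previous, current, threshold=THRESHOLD):
    """Return True when two subsequent readings differ by more than
    the threshold, regardless of the direction of the change."""
    return abs(current - previous) > threshold


# Simulated brightness readings: a jump from 49 to 30 is an edge.
readings = [50, 51, 49, 30, 31]
edges = [detect_edge(a, b) for a, b in zip(readings, readings[1:])]
print(edges)  # → [False, False, True, False]
```

Note that `detect_edge` returns the same result for light-to-dark and dark-to-light transitions, which is exactly the ambiguity described above.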

Second version

The second version was a little more elegant. The sensors are moved so that both are above the line while the robot is tracking it. Before starting the program, the robot is placed a little distance before the start of the line, from where it starts moving. The first edge detected by either sensor is assumed to be that sensor moving over the line; the length measurement is started and the transition direction, dark to light or light to dark, is stored. Let's assume the line is lighter than the background. When both sensors are above the line, the robot moves forward. When a light-to-dark transition is measured, the robot applies a correcting steer action until a dark-to-light transition is seen; the robot is then above the line again and can continue to move forward. When both sensors detect a light-to-dark transition, the end of the line has been reached and the measurement stops. This version of the software works nicely when the difference in brightness between the line and the background is big enough. It is, however, still based on instantaneous measurements, so outliers remain a problem. In practice this does not seem to be a big issue, but it forces a compromise in iteration time: too short and the measured difference is not big enough, leading to missed edge detections; too long and there is a risk of overcorrection.
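The key difference from version 1 is that transitions now carry a direction. A hedged sketch of that classification, assuming the light-line-on-dark-background case from the text (names and threshold are illustrative, not the actual NXT code):

```python
THRESHOLD = 5  # assumed minimum brightness change treated as an edge


def classify_transition(previous, current, threshold=THRESHOLD):
    """Return 'to_light', 'to_dark', or None for two subsequent readings."""
    delta = current - previous
    if delta > threshold:
        return 'to_light'
    if delta < -threshold:
        return 'to_dark'
    return None


def end_of_line(trans_left, trans_right):
    """For a light line: both sensors leaving it at once ends the measurement."""
    return trans_left == 'to_dark' and trans_right == 'to_dark'
```

With this, one sensor reporting `'to_dark'` triggers a correcting steer until that sensor reports `'to_light'` again, while both sensors reporting `'to_dark'` stops the length measurement.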

Third version

To deal with this problem, to be a bit more robust against outliers, and with the Mars mission in mind, a third version of the program was developed. It is the same as version 2, but the edge detection works a little differently. Every iteration, a measurement is taken and saved, along with a moving average of the last 5 measured values. Edge detection is then performed on the moving average instead of the instantaneous value. Because the moving average climbs slowly when a transition is encountered, the current moving average is compared to the moving average of 8 iterations ago. This makes edge detection using moving averages slower than edge detection using instantaneous values, always lagging 8 iterations behind, but outliers have a much smaller impact: a single outlier would have to exceed the average value by 5 times the threshold to cause an erroneous edge detection. To compensate for the slowness of the detector, a shorter iteration time could be used: to achieve a reaction time of 400 ms like in version 2 of the software, an iteration time of 400/8 = 50 ms should be maintained. This version of the software has not been tested on the robot yet, so whether this is realistic remains to be seen. To be continued.
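The filtering scheme above can be sketched as follows. This is an illustrative reconstruction, not the robot's actual code: the class name is made up, and the window of 5, lag of 8, and direction labels follow the description in the text.

```python
from collections import deque

THRESHOLD = 5  # assumed brightness difference treated as an edge
WINDOW = 5     # moving-average window from the post
LAG = 8        # compare against the average from 8 iterations ago


class AveragedEdgeDetector:
    """Edge detection on a moving average, compared against the
    moving average LAG iterations earlier."""

    def __init__(self):
        self.samples = deque(maxlen=WINDOW)    # last WINDOW raw readings
        self.averages = deque(maxlen=LAG + 1)  # recent moving averages

    def update(self, value):
        """Feed one reading; return 'to_light', 'to_dark', or None."""
        self.samples.append(value)
        self.averages.append(sum(self.samples) / len(self.samples))
        if len(self.averages) <= LAG:
            return None  # not enough history yet
        delta = self.averages[-1] - self.averages[0]
        if delta > THRESHOLD:
            return 'to_light'
        if delta < -THRESHOLD:
            return 'to_dark'
        return None


detector = AveragedEdgeDetector()
for _ in range(9):
    detector.update(40)       # steady background: no edge
print(detector.update(60))    # average moves 40 → 44: still None
print(detector.update(60))    # average reaches 48: prints to_light
```

Because a single outlier shifts the 5-sample average by only one fifth of its size, it would indeed have to be about 5 times the threshold above the surrounding values to trigger a false edge, matching the claim above.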
