LEGO Mindstorms Ev3: Pathfinding Build

I’ve developed a forward-aggressive pathfinding algorithm for my Ev3 build, written in RobotC. The build is a modified version of the Ev3rStorm flagship model and took about 4 hours to assemble. I’ve placed a touch sensor on the right shoulder, and a medium motor on the left arm which will eventually power a gripper.

The Ev3rStorm is fairly stable considering its height because it skates, rather than steps. The two large motors behind the shields power its legs. I’m only running them at 30% speed so the steps of the algorithm are clear to the observer.


Here’s the IR sensor:

[Image: IR sensor, front view]

The Ev3rStorm IR sensor is set to proximity mode: the robot stops when it’s roughly 6 inches from a detected obstacle. It then tries to turn left or right (determined by rand()) and proceed forward. If both ways are blocked, it backs up for 3 seconds, then rolls for left or right again and tries to go forward.
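In plain C (the build itself is written in RobotC, but these standard-library calls are the same), the left-or-right roll is a coin flip on rand(); the function names here are illustrative, not from the actual program:

```c
#include <stdlib.h>
#include <time.h>

/* Call once at program start so successive runs differ. */
void seed_roll(void) {
    srand((unsigned)time(NULL));
}

/* 0 = left, 1 = right -- an even coin flip via rand(). */
int roll_direction(void) {
    return rand() % 2;
}
```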

The touch sensor is used to stop the robot.

//set flags for obstacle, left, and right to FALSE
//seed rand() with the current time via srand()
//while the touch sensor is not pressed:
//  while the IR proximity reading is greater than 30 (a value that approximates 6 inches):
//    turn on the motors and go forward
//  an obstacle has been detected: set the obstacle flag

//if obstacle && the touch sensor wasn’t pressed, try to find another path:
//  roll for left or right, turn, and try to go forward
//  set the left or right flag for the direction just tried
//  if there’s still no way forward, back up for 3 seconds and roll for left or right again
//continue until the touch sensor is pressed
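The loop above can be condensed into a single decision step. Here’s a sketch in plain C rather than RobotC, with the sensor reads replaced by plain parameters; decide, the Action enum, and the readings-after-turning shape of the interface are all illustrative, not the real program:

```c
#define IR_THRESHOLD 30  /* proximity reading approximating 6 inches */

typedef enum { GO_FORWARD, TURN_LEFT, TURN_RIGHT, BACK_UP, STOP } Action;

/* One decision step of the pathfinder. ir_ahead, ir_left, and ir_right
 * are hypothetical proximity readings for each heading (higher = more
 * open space); coin is the rand() % 2 roll (0 = try left first,
 * 1 = try right first). */
Action decide(int touch_pressed, int ir_ahead, int ir_left, int ir_right,
              int coin) {
    if (touch_pressed)
        return STOP;                       /* touch sensor stops the robot */
    if (ir_ahead > IR_THRESHOLD)
        return GO_FORWARD;                 /* path is clear: keep going    */

    int first  = coin ? ir_right : ir_left;
    int second = coin ? ir_left  : ir_right;
    if (first > IR_THRESHOLD)
        return coin ? TURN_RIGHT : TURN_LEFT;
    if (second > IR_THRESHOLD)
        return coin ? TURN_LEFT : TURN_RIGHT;

    return BACK_UP;  /* both ways blocked: reverse 3 seconds, re-roll */
}
```

Each pass through the real loop would gather fresh sensor readings, act on the returned Action, and repeat until the touch sensor is pressed.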

This algorithm is effective at navigating simple spaces. The IR sensor has some limitations, however: it has trouble detecting reflective and very dark objects. Additionally, distance is measured from the Ev3rStorm’s high standing height, making it impossible to detect low, small obstacles.

There are several ways that I’d like to improve on this build:

  1. Add an additional, low-mounted ultrasonic sensor for improved low-obstacle detection.
  2. Implement gripper functionality on the left arm.
  3. Add an extra step to the algorithm to react when an object is farther away, and change course slightly instead of just charging forward until the obstacle is unavoidable.
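For the third improvement, one simple approach is a graded speed curve over the proximity reading instead of a single stop threshold. A minimal sketch in plain C, with illustrative thresholds that haven’t been tuned on the real robot:

```c
/* Hypothetical graded response: scale motor power by the proximity
 * reading (higher = more open space) so the robot eases off before
 * the hard ~6-inch stop at a reading of 30. */
int forward_speed(int ir_reading) {
    if (ir_reading > 70)
        return 30;   /* open space: normal 30% cruising speed       */
    if (ir_reading > 30)
        return 15;   /* something ahead: slow down, start steering  */
    return 0;        /* within roughly 6 inches: stop and re-plan   */
}
```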

In the future, I plan to build a low-mounted, modified SpiK3r for improved stability and obstacle detection, with a gripper mounted in the front. It would be an interesting experiment to attach an additional sensor to the tail, driven by a medium motor so that it can rotate and continually scan the environment from a higher vantage point. Using two visual sensors in tandem would let the SpiK3r smoothly navigate obstacle-laden terrain while also looking ahead for its target or targets.

