Localized Vacuum Cleaner

The objective of this practice is to implement the logic of a navigation algorithm for an autonomous vacuum cleaner by making use of the location of the robot. The robot is equipped with a map and knows its current location on it. The main objective is to cover the largest possible area of a house using the programmed algorithm.

Vacuum cleaner

Installation

Install the General Infrastructure of the JdeRobot Robotics Academy.

Install the kobuki_msgs package:

sudo apt-get install ros-melodic-kobuki-msgs

How to run

  1. Launch the simulated world:
roslaunch vacuum_cleaner_loc.launch
  2. Run the practice and the user interface:
python2 vacuum.py vacuum_conf.yml
  3. Run the automatic evaluator:
python2 referee.py vacuum_conf.yml

To close the environment, simply close the VacuumCleaner window(s). Using Ctrl + C will cause problems.

How to do the practice

To carry out the practice, you must edit the MyAlgorithm.py file and insert all your functionality in it.

Where to insert the code

MyAlgorithm.py

def execute(self):

    # Add your code here
    print "Running"

    # EXAMPLE OF HOW TO SEND INFORMATION TO THE ROBOT ACTUATORS
    #self.motors.sendV(10)   # linear velocity
    #self.motors.sendW(5)    # angular velocity

API

  • self.pose3d.getPose3d().x - to get the X coordinate of the robot
  • self.pose3d.getPose3d().y - to get the Y coordinate of the robot
  • self.pose3d.getPose3d().yaw - to get the orientation (yaw) of the robot
  • self.bumper.getBumperData().state - to determine whether the robot has crashed. Returns 1 if the robot has collided and 0 if it has not.
  • self.bumper.getBumperData().bumper - if the robot has crashed, indicates where: 1 when the crash occurs at the center of the robot, 0 when it occurs on its left, and 2 when the collision is on its right.
  • self.laser.getLaserData() - to obtain the data of the laser sensor, which consists of 180 pairs of values (0-180º, distance in millimeters).
  • self.motors.sendVelocities(vel) - to set linear and angular velocity. The 'vel' object must be a CMDVel():
    • vel = CMDVel()
    • vel.vx = v - linear velocity
    • vel.az = w - angular velocity
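
As a minimal sketch of how these calls fit together (the speeds and the bump-and-turn reaction are illustrative assumptions, not the reference behavior), an execute() iteration could look like this:

def execute(self):
    # Read the robot's pose from the map-based localization;
    # (x, y, yaw) would feed the coverage grid of your algorithm
    x = self.pose3d.getPose3d().x
    y = self.pose3d.getPose3d().y
    yaw = self.pose3d.getPose3d().yaw

    if self.bumper.getBumperData().state == 1:
        # Collision detected: back up while spinning in place
        self.motors.sendV(-0.2)
        self.motors.sendW(1.0)
    else:
        # Free space ahead: drive forward
        self.motors.sendV(0.3)
        self.motors.sendW(0.0)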

For this exercise, it is necessary to ensure that the vacuum cleaner covers the highest possible percentage of the house. The automatic evaluator (referee) application will measure the percentage covered and, based on this percentage, grade the solution algorithm.

Types conversion

Laser

import math
import numpy as np

laser_data = self.laser.getLaserData()

def parse_laser_data(laser_data):
    # Convert the raw readings into a list of (distance, angle) pairs
    laser = []
    for i in range(180):
        dist = laser_data.values[i]
        angle = math.radians(i)
        laser += [(dist, angle)]
    return laser

def laser_vector(laser):
    # Turn each (distance, angle) pair into a vector in the GUI
    # reference system and average them into a single mean vector
    laser_vectorized = []
    for d, a in laser:
        # (4.2.1) laser into GUI reference system
        x = d * math.cos(a) * -1
        y = d * math.sin(a) * -1
        v = (x, y)
        laser_vectorized += [v]

    laser_mean = np.mean(laser_vectorized, axis=0)
    return laser_mean
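
For instance, the mean laser vector can be used as a rough free-space heading (this particular use is an assumption of this guide, not part of the exercise API):

laser = parse_laser_data(self.laser.getLaserData())
mean = laser_vector(laser)
# Angle of the mean vector, pointing roughly away from nearby obstacles
free_direction = math.atan2(mean[1], mean[0])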

Demonstration video

A reference solution


Theory

As the problem may have multiple solutions, only the theory behind the reference solution has been covered.

Conversion From 3D to 2D

Robot localization is the process of determining where the robot is located with respect to its environment. Localization is an important resource for us in solving this exercise. It can be accomplished in many ways: Monte Carlo methods, particle filters, or even offline algorithms. Since we have a map available to us, offline localization is the best way to move forward. Offline localization involves converting from a 3D environment scan to a 2D map. There are, again, numerous ways to do it, but the technique used in this exercise is transformation matrices.

Transformation Matrices

In simple terms, a transformation is an invertible function that maps a set X to itself. Geometrically, it moves a point to some other location in some space. Algebraically, all these transformations can be represented using matrices. In order to apply a transformation to a point, we multiply the point by the specific transformation matrix to get the new location. Some important transformations are:

  • Translation

Translation of a Euclidean space (2D or 3D world) moves every point by a fixed distance in the same direction.

  • Rotation

Rotation spins the object around a fixed point, known as the center of rotation.

  • Scaling

Scaling enlarges or diminishes objects by a given scale factor.

  • Shear

Shear rotates one axis so that the axes are no longer perpendicular.

In order to apply multiple transformations at the same time, we use the concept of a transformation matrix, which lets us compose all the operations into a single matrix and apply them with one multiplication!

Transformation Matrix
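
The original matrix image is not reproduced here. As a standard reconstruction (the general 2D homogeneous form combining a rotation by an angle theta with a translation (t_x, t_y), not copied from the original figure):

T =
\begin{pmatrix}
\cos\theta & -\sin\theta & t_x \\
\sin\theta & \cos\theta & t_y \\
0 & 0 & 1
\end{pmatrix},
\qquad
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = T \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}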

In our case we need to map a 3D point in Gazebo to a 2D matrix map of our house. The equation used in the exercise was:

Coordinate to Pixel

Coordinate to Pixel Conversion Equation
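
The equation image is likewise not reproduced here. A plausible form, consistent with the hints later on this page (a rotation R, a translation of 0.6 on the X axis and -1 on the Y axis, and a metres-to-pixels scale factor s that you must derive yourself; the exact composition order is for you to verify):

\begin{pmatrix} u \\ v \end{pmatrix}
= s \left( R \begin{pmatrix} x \\ y \end{pmatrix}
+ \begin{pmatrix} 0.6 \\ -1 \end{pmatrix} \right)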

In order to carry out the inverse operation, from 2D back to 3D, we can simply multiply the pixel vector by the inverse of the transformation matrix to get the Gazebo vector. The inverse of the matrix exists because the mapping is invertible: we do not care about the z coordinate of the environment, so each point in Gazebo corresponds to a single point on the map.

Coverage and Decomposition

After the robot is localized in its environment, we can employ decomposition techniques in our algorithm to deal with the actual coverage of the surroundings. There are a lot of decomposition techniques available for our use. A decomposition algorithm decomposes the map into separate segments, which our robot can cover one by one. Decomposition can be directly related to graph theory, where the segments are taken as nodes and the edges connecting nodes indicate that the adjacent segments share a common boundary. The robot can plan a path to the nearest node and then start sweeping again! Most of the details regarding decomposition are implementation dependent.

Decomposition and Graph Theory

Adjacency Graph
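
As a small sketch of how such an adjacency graph might be represented and searched (the segment labels and neighbors below are made up for illustration):

from collections import deque

# Hypothetical adjacency graph: segment -> adjacent segments
graph = {
    'A': ['B'],
    'B': ['A', 'C', 'D'],
    'C': ['B'],
    'D': ['B'],
}

def nearest_unswept(start, swept):
    # Breadth-first search for the closest segment not yet swept
    queue = deque([start])
    visited = {start}
    while queue:
        segment = queue.popleft()
        if segment not in swept:
            return segment
        for neighbor in graph[segment]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return None  # every segment has been swept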

Travelling between segments

Once a certain segment has been swept, in order to reach the next segment (preferably the nearest one) there are again a multitude of path planning algorithms. The algorithm used in the reference solution is the Visibility Algorithm. As the name suggests, visibility of the target is the basic building block. So, let's define visibility first of all!

If a straight line exists between two points, and the line does not pass over obstacles, the two points are said to be visible to each other. Mathematically speaking, the two points under consideration must satisfy a common line equation.

Equation of a Line
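
As a reconstruction of the missing equation image (the standard two-point form of a line, not copied from the original figure):

y - y_1 = \frac{y_2 - y_1}{x_2 - x_1}\,(x - x_1)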

Starting from the destination cell, the robot can map its path one cell at a time, keeping visibility as a reference, until it reaches the current cell the robot is present in. Once the plan has been decided, the robot can follow that path and reach its destination.
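
A minimal sketch of such a visibility test on a binary occupancy grid (the sampling-based check and the convention that nonzero cells are obstacles are assumptions of this guide):

import numpy as np

def visible(grid, p1, p2, samples=100):
    # Sample points along the straight segment from p1 to p2 and
    # report failure as soon as one falls on an obstacle cell
    (r1, c1), (r2, c2) = p1, p2
    for t in np.linspace(0.0, 1.0, samples):
        r = int(round(r1 + t * (r2 - r1)))
        c = int(round(c1 + t * (c2 - c1)))
        if grid[r, c] != 0:  # nonzero means obstacle here
            return False
    return True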

Error of Obstacles

Visibility Error

But sometimes, due to close proximity to a corner, the robot may collide with it. To avoid the occurrence of any such event, we may also consider morphological techniques, since the map used is a binary image. Erosion is a morphology function that erodes away the boundaries of the foreground object (white is the object in the foreground). Refer to the OpenCV documentation for more information on erosion.

Morphological Operations
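
A short sketch of applying erosion with OpenCV (the map filename and the kernel size are assumptions):

import cv2
import numpy as np

# Load the house map as a grayscale image (hypothetical filename)
map_img = cv2.imread('resources/images/map.png', cv2.IMREAD_GRAYSCALE)

# Erode the white (free-space) regions so the robot keeps a safety
# margin from the walls; a 3x3 kernel and 2 iterations are assumptions
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(map_img, kernel, iterations=2)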

Hints

Simple hints are provided to help you solve the vacuum_cleaner_loc exercise. Please note that the full solution has not been provided. Also, the hints relate mostly to the reference solution, since multiple solutions are possible for this exercise.

Grid Representation

Due to the involvement of localization in this exercise, we are given a map of the surroundings of the robot, which it needs to cover. The map is present in the directory path resources/images/. Since we are using path planning and decomposition algorithms, which are very easy to implement on a grid-like structure, a grid is a natural choice, although any data structure representing an adjacency graph would suffice. See the illustrations for what happens without using any data structure (a brute force approach). To implement grids on images, we need to apply basic image processing, which can be accomplished easily using the OpenCV library.

The first step would be to apply the erosion function to the map image. This is done in order to ensure some degree of distance between our robot and the obstacles during runtime.

The second task is to come up with a transformation matrix that converts our 3D Gazebo positions into 2D map representations. A rotation has to be applied about the Y axis, and a translation of 0.6 on the X axis and -1 on the Y axis can be used to place the origin of coordinates (0, 0) of the image in the upper left hand corner. A sketch of such a conversion follows.
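
A minimal sketch under these hints (the metres-to-pixels scale factor and the composition order are assumptions you must verify against your map):

import numpy as np

# Homogeneous 2D transform from the hint above: negate the Y
# coordinate and translate by (0.6, -1)
T = np.array([[1.0,  0.0,  0.6],
              [0.0, -1.0, -1.0],
              [0.0,  0.0,  1.0]])

def gazebo_to_map(x, y, scale=50.0):
    # 'scale' is a hypothetical metres-to-pixels factor; derive the
    # real value from the map image and the house dimensions
    u, v, _ = scale * T.dot(np.array([x, y, 1.0]))
    return int(round(u)), int(round(v))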

Once we get specific positions on our map, using the dimensions of our vacuum cleaner we can generate grid cells around those points. The size of the vacuum cleaner can be taken as 16x16 pixel units. Take it as an exercise to calculate it on your own!
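
Mapping a pixel position to its grid cell is then a simple integer division, using the 16-pixel cell size from the hint above:

CELL = 16  # pixels per grid cell (the robot's footprint)

def pixel_to_cell(u, v):
    # Integer division groups pixels into 16x16 cells
    return u // CELL, v // CELL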

Obstacles and Color Coding

Taking the map as a grayscale image, we can apply color coding to the map as well. It is convenient to separate Obstacles, Virtual Obstacles, Return Points and Critical Points.

In the context of this exercise,

  • Obstacles: The obstacles in our map that our robot cannot go over
  • Virtual Obstacles: The grid cell that our robot has covered.
  • Return Points: Starting points of the decomposed cells
  • Critical Points: The points where our robot gets stuck between obstacles and virtual obstacles. It is from critical points that our robot has to start moving towards the next return point.
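
One simple way to encode these categories in a grayscale grid (the specific values below are arbitrary choices of this sketch, not prescribed by the exercise):

# Hypothetical grayscale codes for the cell categories above
FREE             = 255  # uncovered, reachable space
OBSTACLE         = 0    # walls and furniture
VIRTUAL_OBSTACLE = 128  # cells already covered by the robot
RETURN_POINT     = 192  # starting point of a decomposed segment
CRITICAL_POINT   = 64   # robot trapped; replan from here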

Checking for Points

Defining different points simplifies the algorithm as well!

  • Return Points: Return points can be checked while the robot is in zigzag motion. Empty cells near the robot can be classified as return points. The list of return points has to be kept dynamic, in order to insert and pop points whenever required.

  • Critical Points: Critical points can be identified whenever our robot is stuck around obstacles or virtual obstacles and cannot move any further.

  • Checking for Arrival at a Cell: Considering various offset errors in the real world (consider the simulation to be the real world as well!), arrival of the robot at a particular cell should be accepted within a margin of error; otherwise the robot may start oscillating in search of a cell at which it can never exactly arrive.

  • Efficient Turning and Straight Motion: One trick to adjust the speed and direction of motion is to keep the next 3 cells in the robot's direction of motion under consideration (see the sketch after this list):

  • All 3 cells are free: make the robot move at maximum speed
  • 2 cells are free: make the robot move at a fast speed
  • 1 cell is free: along with a small linear motion forward, start applying rotation as well
  • 0 cells are free: stop the robot and apply only rotation
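
A hedged sketch of that rule (the velocity values are illustrative assumptions, to be sent with self.motors.sendV() and self.motors.sendW()):

def speed_from_lookahead(free_cells):
    # free_cells: how many of the next 3 cells ahead are free (0-3)
    if free_cells == 3:
        return 1.0, 0.0   # full speed ahead
    elif free_cells == 2:
        return 0.5, 0.0   # fast but cautious
    elif free_cells == 1:
        return 0.1, 0.5   # creep forward while starting to turn
    else:
        return 0.0, 0.8   # stop and rotate in place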

As a final note, quite a lot of tips and tricks regarding implementation have been discussed on this page. This is a tough exercise which may take quite a lot of time to solve. The main objective of the exercise is to cover a significant area of the house, without taking time into consideration.

Illustrations

Grid Based

Grid Based Approach

PID Based

PID Based(without grid) Approach


References

The major credit for this coverage algorithm goes to Jose María Cañas and his student Irene Lope.


One of the best video series on Linear Algebra