
Sensor Fusion based on Multi-Self-Organizing Maps for SLAM
Gabriel Leivas, Silvia Botelho, Paulo Drews, Mônica Figueiredo and Celina Häffele
Abstract— This paper proposes the use of topological maps to provide a feature-based SLAM method, built on sensor fusion, that better handles the inaccuracy of current systems. The contribution of this work is an algorithm that uses multiple sensory sources and multiple topological maps to improve the localization estimate, designed to be as generic as possible so that it is valid for both indoor and outdoor environments (structured or not). When this fusion is done with sensors of contrasting characteristics, better results can be obtained, because something not perceived by one sensor may be perceived by others; this also reduces the effects of measurement error and yields a method that copes with the uncertainties of the sensors. A simulator was developed to validate the proposed system through a series of tests with a set of real data. The results show the robustness of the system with respect to sensorial imprecision and a gain in predicting the robot's location, resulting in a more appropriate treatment of the errors associated with each sensor.
I. INTRODUCTION
Autonomous navigation is one of the main tasks executed by autonomous vehicles. Navigation can be defined as the set of procedures a system needs to reach a final configuration from an initial position, planning paths, avoiding obstacles, and completing the task within a certain time. The navigation process can be divided into five levels:
• Environment Mapping;
• Robot Localization;
• Path Planning;
• Path adaptation to time restrictions;
• Trajectory execution.
The first two stages, mapping and localization, when performed simultaneously, constitute a problem called SLAM (Simultaneous Localization and Mapping) [9][10].
A. SLAM
Thrun [8] defines the SLAM problem as the situation in which the robot has no access to a map of the environment and does not know its pose. In SLAM, the robot builds a map of the environment and simultaneously localizes itself within it. SLAM is currently one of the most challenging problems in robotics.
a) SLAM Problems:
• Online SLAM: arises from the need to estimate the robot's pose from its current value, considering only the variables available at the present time.
• Complete SLAM: seeks to estimate the whole sequence of robot poses together with the map, instead of only the current position on the map.
In particular, the online SLAM posterior is the result of integrating out the past poses of the complete SLAM posterior, as shown in Equation 1:
$$p(x_t, m \mid z_{1:t}, u_{1:t}) = \int \int \cdots \int p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \, dx_1 \, dx_2 \ldots dx_{t-1} \quad (1)$$
b) Challenges of SLAM:
The first aspect that increases the complexity of the mapping problem is the high dimensionality of the entities to be mapped. The second, and possibly the most difficult, problem in robotic mapping is the correspondence (matching) problem: determining whether two distinct sensor measurements, taken at different points in time, correspond to the same physical object in the world. The third problem is the dynamics of the environment, which adds other possible interpretations of the collected data. The fourth and final challenge is the fact that robots must choose their path during mapping.
The joint solution (map and localization) has an important impact on robotics applications, concerning the robot's autonomous capability. With this in mind, the mapping problems are now discussed together with localization, as SLAM problems.
c) Mapping:
Metric Mapping: Within the current metric representation of the environment, the map can be constructed as a grid of occupation (occupancy grid) or a map of features.
• Grid of Occupation: This approach uses a range sensor that returns the relative distance to an obstacle within a closed environment. Each collected measurement carries information about the existence or not of obstacles in a certain region of the environment. This information is then projected onto a horizontal map, taking into consideration the knowledge of the robot's pose; thus, each area of the real environment corresponds to an area of the map. The map, in turn, reduces to a matrix whose cells store the probability that the corresponding region is empty or occupied [5] (see the sketch after this list).
• Map of Features: Another family of mapping algorithms approaches the problem by building maps composed of basic geometric shapes or objects such as lines, walls, etc. [9].
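As a rough illustration of the occupancy-grid idea (not the code used in this paper), the Python sketch below updates log-odds cells from a single range reading; the grid size, resolution, and log-odds increments are arbitrary assumptions:

import numpy as np

# Minimal occupancy-grid sketch; resolution and log-odds increments
# below are illustrative assumptions, not values from the paper.
L_FREE, L_OCC = -0.4, 0.85

def update_grid(grid, x, y, theta, r, resolution=0.1):
    """Update log-odds grid with one range reading r (meters)
    taken from robot pose (x, y, theta)."""
    for i in range(int(r / resolution)):
        d = i * resolution
        cx = int((x + d * np.cos(theta)) / resolution)
        cy = int((y + d * np.sin(theta)) / resolution)
        grid[cx, cy] += L_FREE      # beam passed through: likely free
    hx = int((x + r * np.cos(theta)) / resolution)
    hy = int((y + r * np.sin(theta)) / resolution)
    grid[hx, hy] += L_OCC           # beam endpoint: likely occupied

grid = np.zeros((200, 200))         # 20 m x 20 m at 0.1 m per cell
update_grid(grid, x=10.0, y=10.0, theta=0.0, r=3.5)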
Topological Mapping: This consists of simply memorizing the sensorial information while the robot navigates through the environment, so that it is possible to identify situations where the robot has already been by comparing the current sensorial information with the information already recorded [8].
• Growing Cell Structures: Proposed by [6] to solve the main problem of Kohonen networks [3], their fixed topology, since knowing the optimal size of the network beforehand is not always possible [6]. Without that information, the capacity of the net can be limited. This type of SOM with variable topology allows the network to grow from a minimal n-dimensional initial topology.
Another characteristic of the method is the degree of trust of each sensor, which is directly related to the estimated robot location; for example, more reliable sensors prevail over sensors with a low trust level. The spaces (maps) responsible for storing the information of the sensors and of the fusion are:
• Sensorial Space;
• Localization Space.
B. Sensorial Fusion
Sensorial fusion is the method used to integrate data from different sensors on a robot. The question of how best to integrate data from multiple sensors is particularly interesting when they may have different perceptions of the captured environment, enriching the characterization of the environment. Currently, the two techniques most used to solve this problem are the Kalman Filter [2] and the Particle Filter [8].
The difficulty with this approach lies in establishing a representation that allows a uniform response from different sensors. According to [4], we can identify two problems of sensorial fusion:
• Problem of stagnation: when merging two non-null sensorial responses, we obtain a null result.
• Problem of infinite cycles: a result of the robot's limitations in obtaining the appropriate information from the environment and responding quickly to its changes.
II. PROPOSED METHOD
The proposed approach aims at creating a method for simultaneous localization and mapping using sensorial fusion and growing cell structures. To this end, each sensor has its own map (sensorial space), and a general map, called the localization space, is created through an integration function, as shown in Figure 1.
A. Sensorial Space
The sensorial space is defined as a graph whose nodes represent, over time, the sensorial perceptions through descriptors of dimension m, such that identification is possible using only this information (without position values).
Construction of Maps:
The maps of the proposed approach are constructed through growing cell structures [6][7] with a triangular topology, since we face a 2D problem. Each cell of the map has a neighborhood given by the cell's own connections. When an input is presented, the winner neuron (BMU) is chosen, and it and its neighborhood are adjusted. If the Euclidean distance between the BMU and the input is larger than a limit set by the user, a new cell is created (Figure 2), connected to the BMU and its closest neighbor, thus preserving the triangular topology of the network (a minimal code sketch follows Figure 2).
Fig. 2. Illustration of a cell insertion: A new cell and its connections are inserted at each step; the inserted cell is in gray and the winner cell in red.
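To make the rule concrete, here is a minimal Python sketch of the growing cell structure described above; the learning rates and the user-set distance limit are assumed values, and this is not the authors' implementation:

import numpy as np

class GCS:
    """Minimal growing cell structure over 2D triangles (illustrative)."""
    def __init__(self, dim, limit, eps_b=0.1, eps_n=0.01):
        self.w = [np.random.rand(dim) for _ in range(3)]   # initial triangle
        self.edges = {(0, 1), (1, 2), (0, 2)}
        self.limit, self.eps_b, self.eps_n = limit, eps_b, eps_n

    def neighbors(self, i):
        return [b if a == i else a for a, b in self.edges if i in (a, b)]

    def present(self, x):
        dists = [np.linalg.norm(x - w) for w in self.w]
        bmu = int(np.argmin(dists))
        if dists[bmu] > self.limit:
            # input too far: insert a new cell connected to the BMU and to
            # the BMU's neighbor closest to the input (keeps triangles)
            nb = min(self.neighbors(bmu),
                     key=lambda j: np.linalg.norm(x - self.w[j]))
            new = len(self.w)
            self.w.append(np.array(x, dtype=float))
            self.edges |= {tuple(sorted((new, bmu))), tuple(sorted((new, nb)))}
        else:
            # adapt the BMU and its topological neighborhood toward the input
            self.w[bmu] += self.eps_b * (x - self.w[bmu])
            for j in self.neighbors(bmu):
                self.w[j] += self.eps_n * (x - self.w[j])

Calling present() on each m-dimensional descriptor grows a sensor's local map as described above.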
In the proposed approach there is a limit on the size of the map. In other words, at a certain point cells will be excluded, and if the cells in their neighborhood are left with only one connection after the removal, they will also be removed, since the fundamental triangular topology would not be maintained otherwise, as shown in Figure 3. A more detailed algorithm of Growing Cell Structures can be found in [1].
Fig. 3. Illustration of a cell removal: Cell A is removed. Cells B and C are in the neighborhood of A and, after the removal of A's five connections, are left with only one connection; therefore B and C must also be removed.
Fig. 1. System Example.
B. Localization Space
A graph whose nodes represent the robot's already-fused positions. This space is characterized by its interconnection with all the nodes of the sensorial maps whose impressions were captured at the same analyzed time, as shown in Figure 4.
Fig. 4. Connections of the Localization Space.
C. Integration Function
The main purpose of the integration function is to correct the robot's position in the localization space, although it also adjusts the local maps of the sensors. It can be described in the following steps:
1) The different sensorial perceptions (the values captured by the sensors at the same instant) are presented to their local maps (training of the sensorial space); for example, for vision sensors, a set of descriptors captured in one frame.
2) After this information is obtained, a series of metrics is applied to calibrate (correct) these values. This phase is called global training in the localization space:
a) First we calculate, for each map, the resulting central position from the central positions of all the sensors, weighted by their trust (adjustment of the robot's location):

$$X_c^R = \frac{\sum_i z_i \, x_i^c}{\sum_i z_i}; \qquad Y_c^R = \frac{\sum_i z_i \, y_i^c}{\sum_i z_i};$$

where $x_i^c$ and $y_i^c$ are the central position of each sensor impression and $z_i$ is its degree of pertinence (trust).
b) With that resulting central position, the displacement of the points of each map is calculated:

$$\Delta x = X_c^R - X_i^c; \qquad \Delta y = Y_c^R - Y_i^c;$$

this is done to find the displacement of all maps.
c) Then the points are moved according to their degrees of trust (adjustment of the sensorial space):

$$P_x(t+1) = P_x(t) + \Delta x \,(1 - z); \qquad P_y(t+1) = P_y(t) + \Delta y \,(1 - z);$$

3) After the displacement of all the presented points (in their respective maps of the sensorial space), already corrected, training in the sensorial space is restarted, updating only the neighborhood of these points.
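Steps (a)–(c) can be condensed into a short Python sketch; the function and variable names are hypothetical, with each trust value z_i assumed to lie in [0, 1] (an illustration of the rule, not the authors' code):

import numpy as np

# Hypothetical sketch of the correction step; trusts z_i assumed in [0, 1].
def fuse_and_correct(centers, trusts, points):
    """centers: list of (xc_i, yc_i) per sensor; trusts: list of z_i;
    points: list of (n_i, 2) arrays with each sensor's map points."""
    z = np.asarray(trusts, dtype=float)
    c = np.asarray(centers, dtype=float)
    # step (a): trust-weighted resulting central position
    xc_r = np.sum(z * c[:, 0]) / np.sum(z)
    yc_r = np.sum(z * c[:, 1]) / np.sum(z)
    corrected = []
    for (xc_i, yc_i), z_i, pts in zip(c, z, points):
        # step (b): displacement of this sensor's map toward the fused center
        dx, dy = xc_r - xc_i, yc_r - yc_i
        # step (c): less trusted sensors (small z_i) are displaced further
        corrected.append(pts + np.array([dx, dy]) * (1.0 - z_i))
    return (xc_r, yc_r), corrected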
D. Flowchart of the System

Figure 5 illustrates the general operation of the system:
1) Acquisition of sensorial data (the robot's position + descriptors).
2) The descriptors are momentarily separated and presented to the local map (sensorial space).
3) The local map is updated (GCS).
4) The data (the robot's position + descriptors + trust of the sensor) are sent to the global map (localization space).
5) The robot's position is adjusted, and feedback is also sent so that the local maps of the sensorial space can be adjusted.
6) In the sensorial space, the neighborhood of the points already presented is updated.
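Written schematically, one iteration of this loop looks as follows; all class and method names are illustrative stand-ins for the components described above, not an actual API:

from typing import Dict

class LocalMap:                         # sensorial space: one GCS per sensor
    def present(self, descriptors): ...          # steps 2-3: train local map
    def apply_feedback(self, shift): ...         # step 6: adjust neighborhood

class GlobalMap:                        # localization space
    def integrate(self, sensor_id, position, trust): ...   # step 4
    def correct(self): return (0.0, 0.0)                    # step 5: fused shift

def slam_step(reading, local_maps: Dict[str, LocalMap], global_map: GlobalMap):
    """One iteration for reading = (sensor_id, position, descriptors, trust)."""
    sensor_id, position, descriptors, trust = reading       # step 1: acquisition
    local_maps[sensor_id].present(descriptors)              # steps 2-3
    global_map.integrate(sensor_id, position, trust)        # step 4
    shift = global_map.correct()                            # step 5
    for m in local_maps.values():                           # steps 5-6: feedback
        m.apply_feedback(shift)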
Fig. 5. Flowchart of the Proposed Method.
III. TESTS AND RESULTS
To validate the proposed system, we looked for databases used as test benchmarks for SLAM. Among those surveyed, we found the database provided by the University of Malaga [11]. The available data includes cameras, lasers, and a GPS (used as reference). Among the various data sets available in the chosen repository, Malaga6LParking was selected to test the proposed approach. Figure 6 shows the path traveled by the vehicle, in which the green path represents the values captured by the sensors and the red path the stretch where there was no sensorial capture.
Fig. 6. Vehicle trajectory.

Fig. 7. Accumulated sensors error without the proposed approach.
A. Compression rate of the system
The experimental results begin with the compression rate. Table I shows the number of readings presented to each sensor and the number of final nodes after all values were presented. This compression has a direct influence on the performance of the system: since the navigated environments can be represented with few nodes while maintaining the robustness of the system, we gain performance by not having to manipulate a large volume of data. Another feature shown is the capacity of the map built from the laser to represent the same environment as the visual sensor, but with fewer nodes.
TABLE I
NUMBER OF NODES IN EACH MAP AND RATE OF DATA COMPRESSION.

Sensor    Presented data    Number of nodes    Rate of compression
Laser     7399              195                97.365%
Camera    3028              201                93.362%
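The compression rates in Table I follow directly from the node counts; a quick check in Python:

# Compression rate = 1 - (final nodes / presented readings), as in Table I.
for sensor, presented, nodes in [("Laser", 7399, 195), ("Camera", 3028, 201)]:
    rate = (1 - nodes / presented) * 100
    print(f"{sensor}: {rate:.3f}%")   # Laser: 97.365%, Camera: 93.362%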
Fig. 8. Accumulated sensors error with the proposed approach.
TABLE II
GAINS OF THE METHOD IN RELATION TO SIMPLE MAPS.

Sensor    Error without fusion    Error with fusion    Gain
Laser     500                     340                  32%
Camera    650                     550                  15.38%
B. Accumulated System Errors
1) Accumulated sensor error without fusion: Figure 7 illustrates the accumulated error of each sensor with its own separate map, in other words, using plain growing cell structures. The error shown in red relates to the camera and the one in blue to the laser. Accordingly, the laser sensor is considered more reliable than the camera and receives a greater trust degree.
2) Accumulated sensor error with the proposed approach: Figure 8 illustrates the accumulated error of each sensor after applying the proposed sensor fusion. It may be noted that the accumulated error still behaves as a linear function and the method only reduces the slope of these straight lines; in this way, the method reduces the speed at which this error accumulates on the robot, generating a substantial localization gain. Table II illustrates the gain of each sensor with the proposed method.
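The gains in Table II are the relative reduction of each sensor's accumulated error; a quick check:

# Gain = (error without fusion - error with fusion) / error without fusion.
for sensor, without, fused in [("Laser", 500, 340), ("Camera", 650, 550)]:
    gain = (without - fused) / without * 100
    print(f"{sensor}: {gain:.2f}%")   # Laser: 32.00%, Camera: 15.38%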
C. Results of the Simulator
As presented earlier in this section, the data set used was the Malaga 6L. The interface with this data can be seen in Figures 9 and 10. The interface was developed with OpenGL, and its main features are rotation and translation along three axes. The interface allows complete navigation of the maps built.
D. Performance issues
System performance proved satisfactory and clearly showed the quadratic characteristic of the proposed approach. Figure 11 shows the accumulated processing time of the system for the test data; this curve clearly describes a quadratic function, confirming that the proposed algorithm only multiplies the complexity of the base algorithm by constant factors, without altering its order.
Fig. 9. Simulator with two sensors.
Fig. 10. Simulator with two sensors in a different perspective.

Fig. 11. Complexity curve of the system in tests.
IV. CONCLUSIONS
This work presented a new method for Simultaneous Localization and Mapping (SLAM) using GCS for sensorial fusion, in order to gain precision in the position estimation of autonomous robots. Topological mapping based on Growing Cell Structures (GCS), although not widespread, is characterized by low computational cost and a simplified manner of construction. Its use is therefore indicated for real robots, where the embedded processing power may be limited. Another important characteristic of the proposed method is its robustness to sensorial inaccuracies, since the fusion of the captured sensor data is performed in such a way that sensors with a higher degree of uncertainty have attenuated importance while more precise sensors prevail. The results obtained in the tests were satisfactory, validating the proposal. We also note the gain in accuracy in the maps of the sensorial space, with values that demonstrate the effectiveness of the test set. Another striking feature of the tests was the level of data compression, over ninety percent, achieved by the proposed method.

REFERENCES
[1] Hodge, V. J., Austin, J.: Hierarchical Growing Cell Structures: TreeGCS. IEEE Transactions on Knowledge and Data Engineering, 207–218 (2001)
[2] Kalman, R. E.: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME, Journal of Basic Engineering (1960)
[3] Kohonen, T.: Self-organized formation of topologically correct feature maps. Biological Cybernetics, 59–69 (1982)
[4] Anderson, T. L., Donath, M.: Animal Behavior as a Paradigm for Developing Robot Autonomy. Robotics and Autonomous Systems, 145–168 (1990)
[5] Elfes, A.: Using Occupancy Grids for Mobile Robot Perception and Navigation. IEEE Computer Society Press, volume 22, number 6, 46–57 (1989)
[6] Fritzke, B.: Growing cell structures - a self-organizing network for unsupervised and supervised learning. Technical report, University of California (1993)
[7] Fritzke, B.: A growing neural gas network learns topologies. Advances in Neural Information Processing Systems 7, 625–632 (1995)
[8] Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press (2005)
[9] Chatila, R., Laumond, J. P.: Position referencing and consistent world modeling for mobile robots. IEEE International Conference on Robotics and Automation, 138–143 (1985)
[10] Durrant-Whyte, H. F., Leonard, J. J.: Mobile robot localization by tracking geometric beacons. IEEE Transactions on Robotics and Automation, 376–382 (1991)
[11] Blanco, J. L., Moreno, F. A., Gonzalez, J.: A Collection of Outdoor Robotic Datasets with Centimeter-Accuracy Ground Truth. Autonomous Robots (2009)
[12] Zhao, X., Luo, Q., Han, B.: Survey on Robot Multi-sensor Information Fusion Technology. World Congress on Intelligent Control and Automation (2008)
[13] Zhou, H.: Multi-sensor Information Fusion Method Based on the Neural Network Algorithm. International Conference on Natural Computation (2009)
[14] Chauhan, S., Patil, C., Halder, A., Sinha, M.: FLIER: A Novel Sensor Fusion Algorithm. Industrial and Information Systems (2008)
[15] Garhwal, R., Halder, A., Sinha, M.: An Adaptive Fuzzy State Noise Driven Extended Kalman Filter for Real Time Orbit Determination. 58th International Astronautical Congress (2007)