%\newcommand{\CLASSINPUTbaselinestretch}{1.0} % baselinestretch
%\newcommand{\CLASSINPUTinnersidemargin}{1in} % inner side margin
%\newcommand{\CLASSINPUToutersidemargin}{1in} % outer side margin
%\newcommand{\CLASSINPUTtoptextmargin}{1in} % top text margin
%\newcommand{\CLASSINPUTbottomtextmargin}{1in}% bottom text margin
\documentclass[10pt,journal,compsoc]{IEEEtran}
\ifCLASSOPTIONcompsoc
\usepackage[nocompress]{cite}
\else
\usepackage{cite}
\fi
\usepackage{ragged2e}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\graphicspath{ {images/} }
\usepackage{array}
\usepackage{mdwmath}
\usepackage{mdwtab}
\usepackage{eqparbox}
\ifCLASSOPTIONcompsoc
\usepackage[caption=false,font=footnotesize,labelfont=sf,textfont=sf]{subfig}
\else
\usepackage[caption=false,font=footnotesize]{subfig}
\fi
\usepackage{stfloats}
\usepackage{url}
\newcommand\MYhyperrefoptions{bookmarks=true,bookmarksnumbered=true,
pdfpagemode={UseOutlines},plainpages=false,pdfpagelabels=true,
colorlinks=true,linkcolor={black},citecolor={black},urlcolor={black},
pdftitle={EKF-Based Localization with LRF},
pdfsubject={Typesetting},
pdfauthor={Isabel Silva, Renato Silva, Joao Rosa, Vitor Alveirinho, Jose Mendes, Maciej Przydatek},
pdfkeywords={EKF, LRF, Localization}}
%\ifCLASSINFOpdf
%\usepackage[\MYhyperrefoptions,pdftex]{hyperref}
%\else
%\usepackage[\MYhyperrefoptions,breaklinks=true,dvips]{hyperref}
%\usepackage{breakurl}
%\fi
% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}
\begin{document}
\title{EKF-Based Localization with LRF}
\author{Group 9 - Isabel Silva (73169), Renato Silva (73279), Jo\~{a}o Rosa (74149), V\'{i}tor Alveirinho (77003), Jos\'{e} Mendes (82258), Maciej Przydatek (83022)}
% The paper headers
\markboth{Instituto Superior T\'{e}cnico, Autonomous Systems, Project Report, December~2015}%
{Silva \MakeLowercase{\textit{et al.}}: EKF-Based Localization with LRF}
\IEEEtitleabstractindextext{
\justify
\begin{abstract}
The goal of this project is to explore both the theory behind the Extended Kalman Filter (EKF) and the way it was used to localize a four-wheeled mobile robot. Localization is achieved by estimating the pose of the robot in real time against a pre-acquired map, using a Laser Range Finder (LRF). The LRF scans the environment, which is represented through line segments. In a prediction step, the robot simulates its kinematic model to predict its current pose. An update step is then implemented to minimize the difference between the matched lines from the global and local maps. It should be noted that every measurement has an associated uncertainty that needs to be taken into account when performing each step of the Extended Kalman Filter. These uncertainties, or noise, are described by covariance matrices that play a very important role in the algorithm. Since we are dealing with an indoor structured environment, mainly composed of walls and straight-edged objects, representing the maps by line segments was the chosen approach to the problem.
\end{abstract}
\begin{IEEEkeywords}
Extended Kalman Filter, Laser Rangefinder, Localization, Mobile-Robot
\end{IEEEkeywords}}
\maketitle
% To allow for easy dual compilation without having to reenter the
% abstract/keywords data, the \IEEEtitleabstractindextext text will
% not be used in maketitle, but will appear (i.e., to be "transported")
% here as \IEEEdisplaynontitleabstractindextext when compsoc mode
% is not selected <OR> if conference mode is selected - because compsoc
% conference papers position the abstract like regular (non-compsoc)
% papers do!
\IEEEdisplaynontitleabstractindextext
% \IEEEdisplaynontitleabstractindextext has no effect when using
% compsoc under a non-conference mode.
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle
\section{Introduction}
\label{sec: Introduction}
Localization is a fundamental problem in mobile robotics. In order for a robot to be autonomous, it needs to know its own pose in the environment. Localizing the robot using only odometry data is inaccurate, since the measurement noise accumulates over time. Taking this into consideration, the robot needs to improve the information about its pose, which can be achieved by comparing the current environment scan with an already built global map, subsequently resulting in a better pose estimate. The environment is scanned with the aid of a Laser Rangefinder (LRF).
The global static map can be obtained through SLAM (Simultaneous Localization and Mapping), a process which plays a crucial role in the execution of the Extended Kalman Filter. This map then undergoes a post-processing step that transforms all its boundaries (mainly walls) into line segments. This representation was chosen over occupancy grids, which divide the environment into cells: since the environment is mainly composed of straight lines, the latter method would be computationally less efficient and therefore not the best option. To extract the line segments from the map, a Least Squares method was chosen.
The Extended Kalman Filter provides optimal estimates for non-linear systems, relying on the input- and output-noise covariance matrices of the process; indeed, estimating the robot's pose is a non-linear problem. This localization problem can be split into two phases: the Prediction Step and the Update Step. The Prediction Step of the EKF is performed by simulating the kinematic model of the robot. The standard deviation of each wheel's angular speed is estimated as being proportional to that angular speed. In the Update Step of the EKF, the main objective is to correct the pose of the robot based on the difference between the parameters of matching lines from the local and global maps. For this we need to extract the line segments from both the local and global maps and apply a matching procedure, where the most similar local and global line segments are paired if their difference is below a threshold.
The remainder of the paper is organized as follows. In Section \ref{sec: Using ROS}, we give a brief introduction to ROS and how it was used in the context of our project. Section \ref{sec:Localization Algorithm} describes how the implemented localization algorithm works. Experimental results of the algorithm are shown and discussed in Section \ref{sec: Experimental Results}. Finally, we state our conclusions in Section \ref{sec: Conclusion}.
\section{Using ROS}
\label{sec: Using ROS}
The Robot Operating System (ROS) is a set of software libraries and tools that help in building robot applications. Its modularity, extensibility and standardization allow system designers to cooperate seamlessly while working separately. It can be used in many applications, ranging from industrial robotic manipulators, through mobile platforms such as wheeled autonomous vehicles and aerial multicopters, to social and humanoid robots.

ROS consists of four main elements:
\begin{itemize}
\item \textbf{Roscore} - the main program, which creates the basic framework common to all modules;
\item \textbf{Nodes} - modules which perform specific tasks;
\item \textbf{Topics} - allow nodes to communicate with each other;
\item \textbf{Parameter Server} - a set of values which can be read, set or deleted by every node.
\end{itemize}
The work was conducted using an Adept MobileRobots PIONEER 3-AT. In order to accomplish the task, several packages containing the required nodes were used, as listed below:
\begin{itemize}
\item \texttt{rosaria} - written and maintained by Adept MobileRobots; establishes the connection between ROS and the PIONEER 3-AT, sends commands to the wheels and reads the robot's state;
\item \texttt{sicktoolbox\_wrapper} - handles communication with the SICK LMS200 Laser Rangefinder, providing current environment scans;
\item \texttt{gmapping} - used to implement SLAM (Simultaneous Localization and Mapping), and therefore to obtain the global static map needed for the implementation of the EKF algorithm;
\item \texttt{map\_server} - tool used to manipulate the map obtained by the \texttt{gmapping} package;
\item \texttt{rosaria\_client} and \texttt{teleop\_twist\_keyboard} - allow the robot's linear and angular velocities to be controlled easily from the keyboard;
\item \texttt{rviz} - used to visualize the map, the robot's estimated position and orientation, its path and the laser scans;
\item \texttt{tf} - manages the transformations between multiple coordinate frames during program execution.
\end{itemize}
\subsection{Obtaining the Maps}
\label{sec: Obtaining the Maps}
In order to obtain the map, four packages were used: \texttt{rosaria}, \texttt{sicktoolbox\_wrapper}, \texttt{gmapping} and, to visualize the process, \texttt{rviz}. After setting up the nodes, the robot was driven over the available area. The \texttt{gmapping} package performed the SLAM process and, while generating the map, published it to the \texttt{map} topic. After closing the loop in the area, the finished map was saved using the \texttt{map\_server map\_saver} command-line tool. It was then converted to an ASCII text file with the values \(0\) (free space), \(100\) (occupied space) and \(-1\) (unknown area), and finally loaded into the main program for feature extraction.
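Reading the grid back into the program is then straightforward. The following is a minimal sketch, assuming the ASCII file stores one whitespace-separated integer per cell; the file name and layout are illustrative, not the exact implementation:
\begin{verbatim}
import numpy as np

# Load the ASCII occupancy grid saved after SLAM.
# Convention: 0 = free, 100 = occupied, -1 = unknown.
grid = np.loadtxt("map.txt", dtype=int)

# Keep only occupied cells for feature extraction.
occupied = np.argwhere(grid == 100)
\end{verbatim}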
\subsection{Transform Configuration}
\label{sec: TF Package}
\justify
In order to visualize the final process of the robot localizing itself on the map, an \texttt{rviz} node was used. To simultaneously show the map, the odometry-based localization, the EKF-based localization and the laser scans in one reference frame, coordinate transformations were implemented. This was done using the \texttt{tf} node and its \texttt{tf.TransformBroadcaster()} method. Several transform-broadcasting nodes were created to manage dependencies such as the laser-to-robot, robot-to-odometry and odometry-to-map frame transformations. An enormous advantage of the \texttt{tf} tool is that each transformation has to be specified only once; the node then manages all calculations by itself.
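A broadcaster node for the odometry-to-map transformation could follow this minimal sketch; the frame names and the constant pose are illustrative assumptions, not our exact implementation:
\begin{verbatim}
import rospy
import tf

rospy.init_node("map_odom_broadcaster")
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # (x, y, phi) would come from the EKF estimate.
    x, y, phi = 0.0, 0.0, 0.0
    br.sendTransform(
        (x, y, 0.0),
        tf.transformations.quaternion_from_euler(
            0, 0, phi),
        rospy.Time.now(),
        "odom",   # child frame
        "map")    # parent frame
    rate.sleep()
\end{verbatim}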
\section{Localization Algorithm}
\label{sec:Localization Algorithm}
The system of coordinates used is illustrated in Fig. \ref{fig:refrobot}. $x_Gy_Gz_G$ forms the global referential, which is equal to the referential of the robot $x_Ry_R$ in its initial position. $\textbf{x}_p(k)=[x_r(k), y_r(k), \varphi_r(k)]^T$ denotes the robot's pose with respect to the global coordinates.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{./refrobot}
\caption{Robot's pose according to the global coordinates.}
\label{fig:refrobot}
\end{figure}
\subsection{Prediction Step}
\label{sec: Prediction Step}
The robot's pose is predicted by simulating the discrete kinematic model of the robot
%
\begin{equation}
\label{eq:kinematics}
\begin{aligned}
\textbf{x}_p(k+1) & = \textbf{f}(\textbf{x}_p(k),\textbf{u}(k)): \\
x_r(k+1)&=x_r(k)+T\frac{R}{2}(\omega_R(k)+\omega_L(k))\cos(\varphi_r(k)),\\
y_r(k+1)&=y_r(k)+T\frac{R}{2}(\omega_R(k)+\omega_L(k))\sin(\varphi_r(k)), \\
\varphi_r(k+1) & =\varphi_r(k)+T\frac{R}{L}(\omega_R(k)-\omega_L(k)),
\end{aligned}
\end{equation}
%
where \textit{T} is the sampling time, \textit{R} denotes the radius of each wheel and \textit{L} denotes the distance between the two front wheels. $\textbf{u}(k)=[\omega_R(k), \omega_L(k)]^T$ is the input vector, where $\omega_R(k)$ and $\omega_L(k)$ are measurements of the rotational speeds of the right and left wheels, respectively, at time $kT$.
Let $\omega_{R_c}(k)$ and $\omega_{L_c}(k)$ be the rotational speeds of the right and left wheels which yield a correct estimation of the robot's pose $\textbf{x}_p(k)$ (Eq.\ref{eq:kinematics}). The errors in the rotational speeds of the corresponding wheels, $\omega_{R_n}(k)$ and $\omega_{L_n}(k)$, can then be defined by
\begin{equation}
\label{eq:omegas}
\omega_R(k)=\omega_{R_c}(k)+\omega_{R_n}(k), \ \omega_L(k)=\omega_{L_c}(k)+\omega_{L_n}(k),
\end{equation}
\begin{equation}
\label{eq:speedwheels}
\textbf{n}(k) = [\omega_{R_n}(k), \omega_{L_n}(k)]^T.
\end{equation}
The error vector $\textbf{n}(k)$ above captures the uncertainties of the odometry model and is assumed to be zero-mean Gaussian noise.
The covariance matrix of this vector is the input-noise covariance matrix $\textbf{Q}(k)$ for the EKF, defined as
\begin{equation}
\label{eq:Qmatrix}
\textbf{Q}(k)=\begin{bmatrix} \delta\omega^2_R(k) & 0 \\[0.3em] 0 & \delta\omega^2_L(k)\\[0.3em]
\end{bmatrix}.
\end{equation}
The parameter $\delta$ in the covariance matrix above was estimated experimentally, as shown in Section \ref{sec: Experimental Results}.
Each time we compute the prediction step of the EKF, we first calculate the rotational speeds of the wheels $\omega_R(k-1)$ and $\omega_L(k-1)$ by solving (Eq. \ref{eq:kinematics}) for the wheel speeds, where $\textbf{x}_p(k-1)$ and $\textbf{x}_p(k)$ are given by odometry at times $(k-1)T$ and $kT$, respectively. We can then use these results to compute the prediction step.
The prediction step gives us the state prediction $\tilde{\textbf{x}}_p(k)$ and the covariance matrix of the state prediction error $\tilde{\textbf{P}}(k)$:
\begin{equation}
\label{eq:prediction}
\tilde{\textbf{x}}_p(k)=\textbf{f}(\hat{\textbf{x}}_p(k-1),\textbf{u}(k-1)),
\end{equation}
$$\tilde{\textbf{P}}(k)=\textbf{A}(k)\hat{\textbf{P}}(k-1)\textbf{A}^T(k)+\textbf{W}(k)\textbf{Q}(k-1)\textbf{W}^T(k),$$
$$\textbf{A}_{ij}(k)=\left.\frac{\partial \textbf{f}_i}{\partial \hat{\textbf{x}}_{p j}(k-1)}\right|_{(\hat{\textbf{x}}_p(k-1),\textbf{u}(k-1))},$$
\begin{equation}
\label{eq:predictionmatrices}
\textbf{W}_{ij}(k)=\left.\frac{\partial \textbf{f}_i}{\partial \textbf{n}_j(k-1)}\right|_{(\hat{\textbf{x}}_p(k-1),\textbf{u}(k-1))},
\end{equation}
%
where $\hat{\textbf{x}}_p(k-1)$ denotes the state estimate at time instant $k-1$ based on all the measurements collected up to that time, and $\hat{\textbf{P}}(k-1)$ is the covariance matrix of the corresponding estimation error.
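The prediction step can be written compactly in code. The following is a minimal NumPy sketch, with the Jacobians of Eq. \ref{eq:predictionmatrices} computed analytically from the model in Eq. \ref{eq:kinematics}; names are illustrative:
\begin{verbatim}
import numpy as np

def predict(x, P, wR, wL, T, R, L, delta):
    """EKF prediction for the kinematic model."""
    px, py, phi = x
    v = T * R / 2.0 * (wR + wL)
    # State prediction f(x, u).
    x_pred = np.array([px + v * np.cos(phi),
                       py + v * np.sin(phi),
                       phi + T * R / L * (wR - wL)])
    # A = df/dx evaluated at (x, u).
    A = np.array([[1, 0, -v * np.sin(phi)],
                  [0, 1,  v * np.cos(phi)],
                  [0, 0,  1]])
    # W = df/dn (noise enters through wR, wL).
    a = T * R / 2.0
    W = np.array([[a * np.cos(phi), a * np.cos(phi)],
                  [a * np.sin(phi), a * np.sin(phi)],
                  [T * R / L,      -T * R / L]])
    # Input-noise covariance Q(k).
    Q = np.diag([delta * wR**2, delta * wL**2])
    P_pred = A @ P @ A.T + W @ Q @ W.T
    return x_pred, P_pred
\end{verbatim}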
\subsection{Obtaining Line Parameters}
\label{sec:lineparameters}
\subsubsection{Clustering Points of the Global Map}
\label{sec:globalmap}
In order to obtain the parameters of the lines of the global map, it is necessary to analyse the matrix obtained by SLAM and create a cluster of points for each line on the map. Using the Harris corner detection algorithm \cite{IEEEhowto:Harris}, we can detect all the corners of the map. We know that each corner belongs to at least one line, so, by checking the neighbourhood of the corner in all directions, we can create a cluster of all the points that belong to one line. Each point is analysed by its distance from the previous point in the cluster, to check whether it is a good candidate for the line. In addition, the difference between the slope of the line composed of all the points already in the cluster and the slope of the line composed of the corner and the candidate point should be below a small threshold. If both conditions hold, the point is added to the cluster.
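The corner detection itself is available off the shelf. A minimal sketch using OpenCV's Harris detector, reusing the occupancy matrix loaded earlier; the parameter values and response threshold are illustrative assumptions:
\begin{verbatim}
import cv2
import numpy as np

# grid: occupancy matrix from SLAM (0/100/-1).
img = (grid == 100).astype(np.float32)
# Harris response; 2, 3, 0.04 are the usual block
# size, Sobel aperture and k parameter.
response = cv2.cornerHarris(img, 2, 3, 0.04)
# Keep strong corners only.
corners = np.argwhere(
    response > 0.01 * response.max())
\end{verbatim}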
\subsubsection{Clustering Points of the Local Map}
\label{sec:Local Map}
We need to know the lines that the robot is ``seeing'' in each cycle of the EKF in order to update the pose of the robot. For that, we use the data received from the LRF, which consist of a set of distances $d$ between the laser and an obstacle, for a given set of angles $\theta \in [-\frac{\pi}{2},\frac{\pi}{2}]$. We transform each distance-angle pair into a point $(x,y)$ in the robot frame (Fig. \ref{fig:angulo}) as in
\begin{equation}
\label{eq:localobstacle}
x_i = d_i \sin(-\theta_i), \quad y_i = d_i \cos\theta_i.
\end{equation}
It is important to notice that on the $y$ axis the angle $\theta$ is zero, on the positive $x$ axis $\theta = -\frac{\pi}{2}$, and on the negative $x$ axis $\theta = \frac{\pi}{2}$.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{angulo}
\caption{Relation between the distance $d_i$, the angle $\theta_i$ and the point $(x_i,y_i)$, in the referential of the robot.}
\label{fig:angulo}
\end{figure}
To store the points in clusters, we analyse whether the distance between two consecutive points is below a threshold $T_{d_{points}}$ and also whether the difference between the angle of those two points and the angle of the line composed of the points already stored in the cluster is below a threshold $T_{a_{points}}$. If both conditions are satisfied, the point is added to the cluster. If not, the point does not belong to the cluster, so the cluster is finalized, a new cluster is created, and this last point is stored in it. When a cluster is finalized, if the number of points belonging to it is above a threshold $n_{min}$, we create a line; otherwise, the cluster is discarded. A sketch of this procedure is shown below.
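The following minimal sketch illustrates the clustering; maintaining the cluster angle from only the first and last points is an illustrative simplification:
\begin{verbatim}
import numpy as np

def cluster_scan(d, theta, Td, Ta, n_min):
    # Points in the robot frame (Eq. 7).
    x = d * np.sin(-theta)
    y = d * np.cos(theta)
    clusters, cur = [], [(x[0], y[0])]
    for i in range(1, len(d)):
        px, py = cur[-1]
        dist = np.hypot(x[i] - px, y[i] - py)
        ang = np.arctan2(y[i] - py, x[i] - px)
        # Angle of the cluster so far (first to
        # last point), a simplification.
        ax, ay = cur[0]
        line_ang = np.arctan2(py - ay, px - ax)
        ok = dist < Td and (len(cur) < 2 or
                            abs(ang - line_ang) < Ta)
        if ok:
            cur.append((x[i], y[i]))
        else:
            if len(cur) >= n_min:
                clusters.append(cur)
            cur = [(x[i], y[i])]
    if len(cur) >= n_min:
        clusters.append(cur)
    return clusters
\end{verbatim}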
\subsubsection{Line Parameters}
\label{sub:lineparameters}
At this point we already know clusters of points $(\textbf{x},\textbf{y})$ that form lines in both global and local maps. Now we need to find the parameters of the line equation in the normal form (Fig. \ref{fig:reflinhas}), for each cluster of points.
For the local map we want to obtain the parameters $\psi_i$ and $\textit{r}_i$, for each line \textit{i}, in the robot referential
\begin{equation}
\label{eq:localline}
x_R\cos\psi_i + y_R\sin\psi_i = r_i,
\end{equation}
%
and for the global map we want to obtain the parameters $\alpha_j$ and $\textit{p}_j$, for each line \textit{j}, in the global referential.
\begin{equation}
\label{eq:globalline}
x_G\cos\alpha_j + y_G\sin\alpha_j = p_j.
\end{equation}
To find these parameters we use the Least Squares (LSQ) method.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{./reflinhas}
\caption{The parameters $(p_j,\alpha_j)$ of the lines of the global map, according to the global coordinates, and the parameters $(r_i,\psi_i)$ of the lines of the local map, according to the robot coordinates.}
\label{fig:reflinhas}
\end{figure}
\subsubsection*{Least Squares Method}
We explain this method for the parameters $\psi$ and $\textit{r}$, but all the parameters stated in (Eq.\ref{eq:localline} and \ref{eq:globalline}) can be obtained in the same way.
If a cluster of points belongs to a vertical line, we cannot apply this method directly, since the slope of the line would be infinite. To prevent this, we first calculate an approximate value of the slope of each line using the first and last points of the cluster. If the absolute value of the slope is greater than 1, we apply a rotation of $-\frac{\pi}{2}$ to all the points of the cluster. This is done by exchanging the vector \textbf{x} with the vector \textbf{y} and the vector \textbf{y} with the vector $-$\textbf{x}. At the end of the method we then apply a rotation of $\frac{\pi}{2}$ to $\psi$ to recover the correct line parameters.
We then apply the following equations,
\begin{equation}
\label{eq:U}
\resizebox{1.0\hsize}{!}{ \begin{array}[t]{cc} \hat{\theta}=[\hat{k}_l,\hat{c}_l]^T=(\textbf{U}^T \textbf{U})^{-1} \textbf{U}^T \textbf{y}, \quad
\textbf{U}=\begin{bmatrix}
\textbf{x}(1) & \cdots & \textbf{x}(n) \\
1 & \cdots & 1 \\
\end{bmatrix}^T,\\
\\
\textbf{y}=[\textbf{y}(1),\cdots,\textbf{y}(n)]^T,
\end{array}
}
\end{equation}
%
where $n$ is the number of points in the cluster, and $\hat{k}_l$ and $\hat{c}_l$ are the estimated parameters of the line equation in the form $y=k_lx+c_l$. We then obtain $\psi$ and $\textit{r}$ with
\begin{equation}
\label{eq:r}
r(\hat{k}_l,\hat{c}_l) = \frac{\hat{c}_l}{\sqrt{\hat{k}_l^2 +1}}\,\mathrm{sign}(\hat{c}_l),
\end{equation}
\begin{equation}
\label{eq:psi}
\psi (\hat{k}_l) = \arctan2 \left(\frac{\mathrm{sign}(\hat{c}_l)}{\sqrt{\hat{k}_l^2 +1}},\frac{- \hat{k}_l}{\sqrt{\hat{k}_l^2 +1}}\,\mathrm{sign}(\hat{c}_l)\right).
\end{equation}
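Eqs. \ref{eq:U}--\ref{eq:psi}, together with the rotation trick for near-vertical lines, can be sketched as follows; names are illustrative:
\begin{verbatim}
import numpy as np

def fit_line(xs, ys):
    """LSQ fit of a cluster, returning (r, psi)."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    # Near-vertical line: rotate by -pi/2.
    dx, dy = xs[-1] - xs[0], ys[-1] - ys[0]
    rotated = dx == 0 or abs(dy / dx) > 1
    if rotated:
        xs, ys = ys.copy(), -xs
    U = np.column_stack([xs, np.ones(len(xs))])
    k, c = np.linalg.lstsq(U, ys, rcond=None)[0]
    s = np.sign(c) if c != 0 else 1.0
    root = np.hypot(k, 1.0)   # sqrt(k^2 + 1)
    r = c * s / root
    psi = np.arctan2(s / root, -k * s / root)
    if rotated:
        psi += np.pi / 2      # undo the rotation
    return r, psi
\end{verbatim}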
\subsubsection*{Estimation of Line Parameters Covariances}
For the update step of the EKF we need to compute the covariance matrix $\textbf{R}_i$, which contains the variances of the line parameters $r_i$ and $\psi_i$ and the covariance between them:
\begin{equation}
\label{eq:Ce}
\textbf{Ce}=var(\textbf{y}(j))(\textbf{U}^T \textbf{U})^{-1}=\begin{bmatrix}
var(\hat{k}_l) & cov(\hat{k}_l,\hat{c}_l) \\
cov(\hat{k}_l,\hat{c}_l) & var(\hat{c}_l) \\
\end{bmatrix},
\end{equation}
\begin{equation}
\label{eq:vary}
var(\textbf{y}(j))=\frac{\sum_{j=1}^{n} (\textbf{y}(j)-\hat{\textbf{y}}(j))^2}{n-1}, \hat{\textbf{y}}(j)=\hat{k}_l \cdot \textbf{x}(j)+\hat{c}_l
\end{equation}
%
where $var(\textbf{y}(j))$ is the vertical error variance of the line-segment points $(\textbf{x}(j),\textbf{y}(j))$, $j=1,\ldots,n$, with respect to the estimated line with parameters $\hat{k}_l$ and $\hat{c}_l$. Knowing the variances and covariances of the parameters $\hat{k}_l$ and $\hat{c}_l$ (Eq. \ref{eq:Ce}), we can now calculate the variances and covariances of the parameters \textit{r} and $\psi$ as follows,
\begin{equation}
\label{eq:varparameters}
\begin{aligned}
var(\psi) & = K_{\psi k}^2 var(\hat{k}_l),\\
var(r) & = K_{r k}^2 var(\hat{k}_l)+K_{r c}^2 var(\hat{c}_l)+2 K_{r k}K_{r c} \cdot cov(\hat{k}_l,\hat{c}_l),\\
cov(r,\psi) & = K_{r k} K_{\psi k} var(\hat{k}_l) + K_{r c} K_{\psi k} \cdot cov(\hat{k}_l,\hat{c}_l), \\
cov(\psi ,r) & = cov(r, \psi),
\end{aligned}
\end{equation}
where
\begin{equation}
\label{eq:ks}
\resizebox{1.0\hsize}{!}{ \begin{array}[t]{lcl} K_{r k}=\frac{-\hat{c}_l\hat{k}_l}{\sqrt[]{\hat{k}_l^2+1}(\hat{k}_l^2+1)}\,\mathrm{sign}(\hat{c}_l), \quad K_{r c}=\frac{\mathrm{sign}(\hat{c}_l)}{\sqrt[]{\hat{k}_l^2+1}}, \quad K_{\psi k}=\frac{1}{\hat{k}_l^2+1}.
\end{array}
}
\end{equation}
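The covariance propagation of Eqs. \ref{eq:Ce}--\ref{eq:ks} can be sketched as follows, returning the block $\textbf{R}_i$ used later in the update step (Eq. \ref{eq:R}); names are illustrative:
\begin{verbatim}
import numpy as np

def line_covariance(xs, ys, k, c):
    """Propagate LSQ covariance to (r, psi)."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    n = len(xs)
    U = np.column_stack([xs, np.ones(n)])
    resid = ys - (k * xs + c)
    var_y = np.sum(resid**2) / (n - 1)
    Ce = var_y * np.linalg.inv(U.T @ U)
    var_k, var_c = Ce[0, 0], Ce[1, 1]
    cov_kc = Ce[0, 1]
    s = np.sign(c) if c != 0 else 1.0
    root = np.hypot(k, 1.0)
    Krk = -c * k * s / (root * (k**2 + 1))
    Krc = s / root
    Kpk = 1.0 / (k**2 + 1)
    var_psi = Kpk**2 * var_k
    var_r = (Krk**2 * var_k + Krc**2 * var_c
             + 2 * Krk * Krc * cov_kc)
    cov_rpsi = Krk * Kpk * var_k + Krc * Kpk * cov_kc
    return np.array([[var_r, cov_rpsi],
                     [cov_rpsi, var_psi]])
\end{verbatim}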
\subsection{Matching Line Segments}
\label{sec:matching}
The parameters of the lines of the global map, $\alpha$ and \textit{p} (Eq. \ref{eq:globalline}), are all collected in a vector \textbf{G}
\begin{equation}
\label{eq:G}
\textbf{G} = [p_1, \alpha_1, \ldots, p_{n_G}, \alpha_{n_G}]^T,
\end{equation}
where $n_G$ is the number of lines of the global map. The vector \textbf{G} is computed once and is always the same since the global map is static.
The parameters of the lines of the local map, $\psi$ and $r$ (Eq. \ref{eq:localline}), which are based on the current LRF scan, are all collected in the vector $\textbf{z}(k)$ (Eq. \ref{eq:z}),
\begin{equation}
\label{eq:z}
\textbf{z}(k) = [r_1, \psi_1, \ldots, r_N, \psi_N]^T.
\end{equation}
The robot's pose correction is based on the difference between the line parameters of the local environment map and the line parameters of the global map, transformed into the robot's coordinates. This transformation is given by
\begin{equation}
C_j= p_j - \tilde{x}_r(k)\cos \alpha_j - \tilde{y}_r(k)\sin \alpha_j,
\end{equation}
\begin{equation}
\label{eq:transformation}
\begin{aligned}
\begin{bmatrix} \hat{r}_j\\[0.3em] \hat{\psi}_j\\[0.3em]\end{bmatrix}&=\mu_j(\tilde{\textbf{x}}_p(k),p_j,\alpha_j)\\
&=\begin{bmatrix} |C_j|\\[0.3em] \alpha_j - (\tilde{\varphi}_r(k) - \frac{\pi}{2})+(-0.5\,\mathrm{sign}(C_j) + 0.5)\pi\\[0.3em]
\end{bmatrix},
\end{aligned}
\end{equation}
%
where $\tilde{\textbf{x}}_p$ (Eq.\ref{eq:prediction}) denotes the prediction of the robot's pose. The observation model can then be defined by the vector
\begin{equation}
\label{eq:miu}
\mu(\tilde{\textbf{x}}_p(k)) = [\mu_1(\tilde{\textbf{x}}_p(k),p_1,\alpha_1)^T, \ldots , \mu_{n_G}(\tilde{\textbf{x}}_p(k),p_{n_G},\alpha_{n_G})^T]^T.
\end{equation}
For each cycle we then want to find the line of the global map that corresponds to each line of the local map. For this we need to find the line of the global map (transformed into the robot coordinates) that is closest to each line of the local map. We first obtain a point in Cartesian coordinates for each line: for the lines of the global map we use Eq.\ref{eq:linepointsglobal} and for the local lines we use Eq.\ref{eq:linepointslocal}, for $j=1,\ldots,n_G$ and $i=1,\ldots,N$.
\begin{equation}
\label{eq:linepointsglobal}
x_{\mu_j}=\hat{r}_j\cos \hat{\psi}_j, \quad y_{\mu_j}=\hat{r}_j\sin \hat{\psi}_j,
\end{equation}
\begin{equation}
\label{eq:linepointslocal}
x_{z_i}=r_i\cos \psi_i, \quad y_{z_i}=r_i\sin \psi_i.
\end{equation}
From these points we compute a matrix of distances $v(k) \in \mathbb{R}^{n_G \times N}$, with entries
\begin{equation}
\label{eq:vij}
v_{ij}(k) = \sqrt[]{(x_{z_i}-x_{\mu_j})^2+(y_{z_i}-y_{\mu_j})^2}.
\end{equation}
For each local line $i$ we find the global line that gives the minimum value of $v_{ij}(k)$, making it possible to obtain pairs of lines: each local line has an associated global line. With these pairs we compute the vector $V(k)$,
\begin{equation}
\label{eq:V}
V(k) = \begin{bmatrix}
r_1 - \hat{r}_{min_1}\\
\psi_1 - \hat{\psi}_{min_1}\\
\vdots \\
r_N - \hat{r}_{min_N}\\
\psi_N - \hat{\psi}_{min_N}\\
\end{bmatrix}.
\end{equation}
We also compute the vector $\mu_{minim}(\tilde{\textbf{x}}_p(k))$,
\begin{equation}
\label{eq:mumin}
\mu_{minim}(\tilde{\textbf{x}}_p(k))=[\hat{r}_{min_1}, \hat{\psi}_{min_1}, ... , \hat{r}_{min_N}, \hat{\psi}_{min_N}]^T.
\end{equation}
\subsubsection*{Matching Step}
We now have $N$ pairs of lines, but we have to analyse which pairs are valid. This is called matching, and it is necessary because some lines may correspond to moving objects in the environment. Since our map is composed only of static objects, such a line produces a pair but not a match, because the line from the local map does not really correspond to the paired line from the global map.
The valid pairs are found by requiring
\begin{equation}
\label{eq:matchthreshold}
|r_i - \hat{r}_{min_i}| < T_r \quad \text{and} \quad |\psi_i - \hat{\psi}_{min_i}| < T_{\psi}.
\end{equation}
All the pairs of lines whose differences are above the thresholds are discarded. We then compute a new vector $V_{match}(k)$, very similar to the one in Eq. \ref{eq:V}, but containing only the differences between the line segments that passed the matching step.
We also compute a new vector $\mu_{match}(\tilde{\textbf{x}}_p(k))$, very similar to the one in Eq. \ref{eq:mumin}, but containing only the parameters of the lines that were matched. A sketch of the pairing and matching procedure is shown below.
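The following minimal sketch combines the pairing of Eqs. \ref{eq:linepointsglobal}--\ref{eq:vij} with the thresholding of Eq. \ref{eq:matchthreshold}; the interface and names are illustrative simplifications, not our exact implementation:
\begin{verbatim}
import numpy as np

def match_lines(local, glob_pred, Tr, Tpsi):
    """Pair each local line (r, psi) with the
    closest predicted global line (r_hat, psi_hat)
    and keep only pairs below the thresholds."""
    V, mu = [], []
    for r, psi in local:
        xz, yz = r * np.cos(psi), r * np.sin(psi)
        # Distances to all predicted global lines.
        d = [np.hypot(xz - rh * np.cos(ph),
                      yz - rh * np.sin(ph))
             for rh, ph in glob_pred]
        rh, ph = glob_pred[int(np.argmin(d))]
        if abs(r - rh) < Tr and abs(psi - ph) < Tpsi:
            V += [r - rh, psi - ph]
            mu += [rh, ph]
    return np.array(V), np.array(mu)
\end{verbatim}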
\subsection{Update Step}
\label{sec: Update Step}
At this point we only need to compute the update step:
$$\hat{\textbf{x}}_p(k)=\tilde{\textbf{x}}_p(k) + \textbf{K}(k)\textbf{V}_{match}(k),$$
$$\hat{\textbf{P}}(k)=\tilde{\textbf{P}}(k) - \textbf{K}(k)\textbf{H}(k)\tilde{\textbf{P}}(k), $$
\begin{equation}
\label{eq:update}\textbf{K}(k) = \tilde{\textbf{P}}(k)\textbf{H}^T(k)(\textbf{H}(k)\tilde{\textbf{P}}(k)\textbf{H}^T(k)+\textbf{R}(k))^{-1},
\end{equation}
$$\textbf{H}_{ij}(k)=\left.\frac{\partial \mu_{{match}_i}}{\partial \tilde{\textbf{x}}_{p j}(k)}\right|_{\tilde{\textbf{x}}_p(k)},$$
%
where $\hat{\textbf{x}}_p(k)$ denotes the state update; $\hat{\textbf{P}}(k)$ denotes the covariance matrix of the state update error; $\textbf{K}(k)$ is the Kalman gain; and $\textbf{R}(k)$ is the output-noise covariance matrix associated with the vector $\textbf{z}(k)$. $\textbf{R}(k)$ has a block-diagonal structure, where the $i$-th block is given by
\begin{equation}
\label{eq:R}
\textbf{R}_i(k) = \begin{bmatrix}
var(r_i) & cov(r_i,\psi_i)\\
cov(\psi_i, r_i) & var(\psi_i)\\
\end{bmatrix},
\end{equation}
%
with $var(r_i),\ cov(r_i,\psi_i),\ cov(\psi_i, r_i)$ and $var(\psi_i)$ defined in Eq. \ref{eq:varparameters}.
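The update step can be sketched as follows. For readability the sketch assumes $C_j>0$ when building $\textbf{H}(k)$; the implementation must keep the $\mathrm{sign}(C_j)$ factor from Eq. \ref{eq:transformation}. The interface is an illustrative assumption:
\begin{verbatim}
import numpy as np

def update(x_pred, P_pred, V, alphas, R):
    """EKF update. V and R come from the matched
    lines; alphas are the global-line angles of
    the matches."""
    H = []
    for a in alphas:
        # Rows: d r_hat / d(x, y, phi) and
        # d psi_hat / d(x, y, phi), assuming C_j > 0.
        H.append([-np.cos(a), -np.sin(a), 0.0])
        H.append([0.0, 0.0, -1.0])
    H = np.array(H)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ V
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
\end{verbatim}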
\section{Experimental Results}
\label{sec: Experimental Results}
After the algorithm was implemented, we set the sampling time to $T = 100\,\mathrm{ms}$, the distance between two opposite wheels to $L = 0.400\,\mathrm{m}$, the radius of the wheels to $R = 0.109\,\mathrm{m}$ and $\delta = 0.01$, in order to start testing the performance of the EKF. The threshold values used to perform the clustering were $T_{d_{points}} = 0.15\,\mathrm{m}$ and $T_{a_{points}} = 0.30\,\mathrm{rad}$. These two parameters were estimated based on visualizations of the data received from the SICK LMS200 Laser Rangefinder. To obtain line segments from the clusters, the minimum number of points required was $n_{min} = 4$. For the initial pose of the robot, the position is the origin of the referential and the orientation is $\varphi_r = \frac{\pi}{2}$.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{./map3}
\caption{Global map of the fifth floor of the North Tower, obtained through the SLAM package.}
\label{fig:map3}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{./lowmatch}
\caption{Representation of the trajectory of the robot. The points in blue are the values obtained with odometry; the points in green are the values obtained with the EKF, with a small number of matched lines.}
\label{fig:map4}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{./highmatch}
\caption{Representation of the trajectory of the robot. The points in blue are the values obtained with odometry; the points in green are the values obtained with the EKF, with a representative number of matched lines.}
\label{fig:map5}
\end{figure}
The first step in a localization problem is to create a visual representation of the environment the robot is in. For that, a map was created using the SLAM package, shown in Fig. \ref{fig:map3}.
To analyse the trajectory of the robot using the EKF, we performed two experiments: the first with a small number of matches and the second with a considerable number of matches.
For the first experiment, as we can see in Fig. \ref{fig:map4}, the matching process made some small corrections, but the number of matched lines was rather small. In the cases where there are no matched lines, the value of the state update is the same as the state prediction, so the updated pose stayed close to the odometry values. It should be noted that this is a case where the odometry values have a low error. In the case of Fig. \ref{fig:map4}, the covariance matrix stays almost static, which is a result of the small number of matched lines.
If the number of matches increases significantly, the covariance matrices are updated and the corrections to the trajectory are applied, meaning that the knowledge of the robot's current position is close to reality.
We then changed the thresholds of the matching algorithm to $T_r = 0.20\,\mathrm{m}$ and $T_\psi = 0.5\,\mathrm{rad}$ and obtained the values presented in Fig. \ref{fig:map5}, with a good number of matches. The path of the robot was very close to the real trajectory on the fifth floor. We note that this is a case where the odometry values have a high error. Of course, the values of the updated state are not perfect in some situations. One of the reasons is environmental: for example, a door that is closed in the global map might be open during the local scan, or people might be walking in front of the laser. In both of these cases the robot may shift its estimated location, but after a while the EKF estimate converges back to a better one.
Another problem the robot faces is odometry drift. At the beginning of the robot's trajectory, the positions given by the EKF and by odometry are very close, but after some time the odometry is completely wrong, which could be a problem for the robot's localization. With the laser measurements and the previous localization knowledge, represented not only in the position but also in the covariance matrices, the robot is able to correct those deviations and almost ignore the unreliable data.
\section{Conclusion}
\label{sec: Conclusion}
In this paper, an EKF algorithm is implemented in order to achieve accurate localization of a four-wheeled mobile robot. First, it is necessary to linearise the kinematic model of the robot to perform the prediction step of the EKF. To perform the matching step, it is necessary to extract the line parameters from the global map of the fifth floor of the North Tower of Instituto Superior T\'{e}cnico, to extract lines from the Laser Rangefinder data, and to compare both sets of lines, created using the LSQ method. If the lines from the global map are similar enough to the lines that the robot observes, the matching can be done and the robot knows that the line it observes is at a determined position in the global map. Using that information, the update step is performed, updating the pose of the robot and also the covariance matrices.
During the whole project, we faced several challenges. The algorithm was entirely implemented using Python and the Robot Operating System, tools which were hard to grasp at the start, since none of the members of the group knew them before the beginning of the academic term.
Another problem arose during the creation of the clusters of points of the global map. Using the Harris corner detection method, the lines created were not perfect, so, in order to achieve the maximum similarity, we extracted the first and last points of each line by observing the map.
In this report, we did not present the values of the covariance matrices, but we will present them in the poster.
\begin{thebibliography}{1}
\bibitem{IEEEhowto:Teslic}
L.~Tesli\'{c}, I.~\v{S}krjanc, and G.~Klan\v{c}ar, \emph{EKF-Based Localization of a Wheeled Mobile Robot
in Structured Environments},\hskip 1em plus
0.5em minus 0.4em\relax J Intell Robot Syst, 2010.
\bibitem{IEEEhowto:Harris}
C.~Harris and M.~Stephens, \emph{A Combined Corner and Edge Detector},\hskip 1em plus
0.5em minus 0.4em\relax Alvey Vision Conference, 1988.
\bibitem{IEEEhowto:Lima}
P.~Lima, \emph{Notas das Aulas Te\'{o}ricas} (Lecture Notes),\hskip 1em plus
0.5em minus 0.4em\relax Lisboa, Instituto Superior T\'{e}cnico, 2015.
\end{thebibliography}
\end{document}