
Noha Radwan

Research

Dynamic Object Invariant Space Recovery


Autonomous Street Crossing with City Navigation Robots

One of the major goals of robotics research is to develop intelligent platforms that are capable of undertaking a variety of useful tasks for their users. Accordingly, robots should be able to perceive, understand, and act in their environment. For example, a city navigation robot designed for delivering parcels should be able to plan and safely execute navigation actions within the city it is deployed in. A particularly challenging task in this context is the robust identification of and navigation along sidewalks, especially during GPS outages. Furthermore, such a robot should be able to handle street intersections without endangering itself or any of the traffic participants in its vicinity. The focus of this project is to develop techniques that enable a city navigation robot to seamlessly navigate street intersections without the need for hand-crafted, intersection-dependent definitions.
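As a minimal illustration of the underlying decision problem, and explicitly not the learned approach from the papers below, consider a classical gap-acceptance check over tracked traffic participants; all class names, fields, and thresholds here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class TrackedVehicle:
    distance_m: float  # distance from the robot's crossing corridor
    speed_mps: float   # speed toward the corridor (<= 0 means moving away)


def time_to_corridor(v: TrackedVehicle) -> float:
    """Time until the vehicle reaches the crossing corridor (inf if receding)."""
    if v.speed_mps <= 0:
        return float("inf")
    return v.distance_m / v.speed_mps


def safe_to_cross(vehicles, crossing_time_s: float = 6.0) -> bool:
    """Cross only if every tracked vehicle arrives after the robot has cleared."""
    return all(time_to_corridor(v) > crossing_time_s for v in vehicles)
```

A hand-tuned rule like this is exactly what the project aims to replace: the learned predictor instead infers crossing safety from multimodal sensor data and the anticipated interactions between traffic participants.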

Project Website: [Website]

Relevant Papers

Noha Radwan, Abhinav Valada, and Wolfram Burgard
Multimodal Interaction-aware Motion Prediction for Autonomous Street Crossing
arXiv preprint arXiv:1808.06887, 2018.
[PDF] [Video]

Noha Radwan, Wera Winterhalter, Christian Dornhege, and Wolfram Burgard
Why Did the Robot Cross the Road? - Learning from Multi-Modal Sensor Data for Autonomous Road Crossing
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017.
[PDF] [Video]

Noha Radwan and Wolfram Burgard
Multitask Learning for Reliable State Estimation
RSS Pioneers at Robotics: Science and Systems (RSS), Freiburg, Germany, 2019.
[PDF]

Noha Radwan and Wolfram Burgard
Effective Interaction-aware Trajectory Prediction using Temporal Convolutional Neural Networks
Proceedings of the Workshop on Crowd Navigation: Current Challenges and New Paradigms for Safe Robot Navigation in Dense Crowds at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
[PDF]


Multitask Learning for Semantic Visual Localization

Semantic scene understanding and localization are indispensable components of a robot's autonomy stack and natural precursors for any action execution or planning task. Despite their shared interdependencies, semantic scene understanding and localization have for the most part been tackled as disjoint problems. In this project, we propose a multitask learning architecture for jointly learning semantic segmentation, visual localization, and odometry estimation. Our goal is to enable robust and efficient visual localization in both indoor and outdoor environments without explicitly defining hand-crafted features for localization.
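The core idea of combining several task losses in a single objective can be sketched as follows; this is a deliberately simplified stand-in for the learned architecture, with all function names, the quaternion-based pose error, and the uncertainty-style task weighting chosen for illustration only:

```python
import numpy as np


def pose_loss(t_pred, t_true, q_pred, q_true, beta=10.0):
    """Translation error plus a weighted rotation error on unit quaternions."""
    t_err = np.linalg.norm(t_pred - t_true)
    q_err = np.linalg.norm(q_pred / np.linalg.norm(q_pred) - q_true)
    return t_err + beta * q_err


def seg_loss(probs, labels):
    """Mean per-pixel cross-entropy over predicted class probabilities."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-9))


def multitask_loss(pose_terms, odom_terms, probs, labels,
                   s_pose=0.0, s_odom=0.0, s_seg=0.0):
    """Each task weighted by exp(-s) plus a regularizer s; in practice the
    s_* weights are learned jointly with the network parameters."""
    l_pose = pose_loss(*pose_terms)
    l_odom = pose_loss(*odom_terms)
    l_seg = seg_loss(probs, labels)
    return (np.exp(-s_pose) * l_pose + s_pose
            + np.exp(-s_odom) * l_odom + s_odom
            + np.exp(-s_seg) * l_seg + s_seg)
```

The shared objective is what lets gradients from segmentation inform the pose and odometry branches, which is the interdependency the joint architecture exploits.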

Project Website: [Website]

Relevant Papers

Noha Radwan, Abhinav Valada, and Wolfram Burgard
VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry
IEEE Robotics and Automation Letters (RA-L), 2018.
[PDF] [Video]

Abhinav Valada, Noha Radwan, and Wolfram Burgard
Deep Auxiliary Learning for Visual Localization and Odometry
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018.
[PDF] [Video]

Noha Radwan and Wolfram Burgard
Multitask Learning for Reliable State Estimation
RSS Pioneers at Robotics: Science and Systems (RSS), Freiburg, Germany, 2019.
[PDF]

Abhinav Valada, Noha Radwan, and Wolfram Burgard
Incorporating Semantic and Geometric Priors in Deep Pose Regression
Proceedings of the Workshop on Learning and Inference in Robotics: Integrating Structure, Priors and Models at Robotics: Science and Systems (RSS), Pittsburgh, USA, 2018.
[PDF]


Leveraging Deep Learning for Topometric Visual Localization

LiDAR-based localization methods provide high accuracy but rely on expensive sensors. Visual localization approaches only require a camera and are thus more cost-effective; however, their accuracy and reliability are typically inferior to those of LiDAR-based methods. In this project, we propose a vision-based localization approach that learns from LiDAR-based localization methods by using their output as training data, thus combining a cheap, passive sensor with accuracy on par with LiDAR-based localization.
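The training idea reduces to plain supervised regression in which the LiDAR localizer's output serves as the label for the camera-based model. The linear least-squares model below is a deliberately simple stand-in for the deep network used in the paper, and all names and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame image descriptors and the (unknown) relation
# between them and the robot's pose.
features = rng.normal(size=(200, 8))
true_map = rng.normal(size=(8, 3))

# Poses reported by a LiDAR localizer: accurate, but slightly noisy.
# These act as the ground-truth labels for the vision model.
lidar_poses = features @ true_map + rng.normal(scale=0.01, size=(200, 3))

# Supervised fit of the camera-based model against the LiDAR labels.
weights, *_ = np.linalg.lstsq(features, lidar_poses, rcond=None)

pred = features @ weights
rmse = np.sqrt(np.mean((pred - lidar_poses) ** 2))
```

Once trained, only the camera is needed at deployment time, which is what makes the approach cost-effective.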

Relevant Papers

Gabriel Oliveira, Noha Radwan, Wolfram Burgard, and Thomas Brox
Topometric Localization with Deep Learning
Proceedings of the International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, 2017.
[PDF] [Video]


Probabilistic Robot Localization using Textual Features

Accurate robot localization plays a crucial role in the success of the overall mobile robotic system. While robotic platforms operating in urban environments most commonly rely on GPS as a source of localization information, the signal quality is often poor because high-rise buildings cause GPS outages. Textual information in the form of street and shop signs is highly abundant in urban environments and is constantly used by humans in many of their daily tasks, from finding locations to describing their own position. Consequently, this information is continually updated and highly accurate, rendering it a suitable source of stable features. In this project, we propose to leverage the abundant textual information in urban environments to estimate the 2D pose of a mobile robot operating in the scene.
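One way to turn detected text into a localization cue is a particle-filter observation model in which a recognized sign, matched against a geo-referenced map, weights each pose hypothesis. The sketch below is an illustrative assumption, not the method of the paper; the sign map, names, and Gaussian noise model are all hypothetical:

```python
import math
import random

# Hypothetical geo-referenced text landmarks: sign text -> 2D map position.
SIGN_MAP = {"Bakery": (10.0, 5.0), "Pharmacy": (40.0, 12.0)}


def weight(particle, observed_text, observed_range, sigma=1.0):
    """Likelihood of observing a known sign at observed_range from a
    particle's 2D position, under a Gaussian range-error model."""
    sx, sy = SIGN_MAP[observed_text]
    px, py = particle
    expected = math.hypot(sx - px, sy - py)
    err = observed_range - expected
    return math.exp(-0.5 * (err / sigma) ** 2)


def resample(particles, weights):
    """Importance resampling: draw particles in proportion to their weights."""
    total = sum(weights)
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))
```

Particles whose predicted distance to the "Bakery" sign matches the observation keep high weight, so repeated sign observations concentrate the filter on the true 2D pose.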

Relevant Papers

Noha Radwan, Gian Diego Tipaldi, Luciano Spinello, and Wolfram Burgard
Do you see the Bakery? Leveraging Geo-Referenced Texts for Global Localization in Public Maps
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016.
[PDF] [Video]


European Robotic Pedestrian Assistant 2.0

Urban areas are highly dynamic and complex and introduce numerous challenges for autonomous robots. They require solutions to several difficult problems: perceiving the environment, representing the robot's workspace, modeling the expected interactions with users in order to plan actions, estimating the robot's state as well as the states of all dynamic objects, properly interpreting the gathered information including its semantics, and operating over the long term. The goal of the EUROPA2 project, which builds on the results of the successfully completed FP7 project EUROPA, is to address these challenges and to develop the foundations for robots that autonomously navigate in urban environments, outdoors as well as in shopping malls and shops, for example to provide various services to humans. By combining publicly available maps with the data gathered by its sensors, the robot will acquire, maintain, and revise a detailed model of the environment including semantic information, detect and track moving objects, adapt its navigation behavior to the current situation, and anticipate interactions with users during navigation. A central aspect of the project is life-long operation with reduced deployment effort, avoiding the need to build maps with the robot before it can operate. EUROPA2 is targeted at developing novel technologies that will open new perspectives for commercial applications of service robots in the future.

Project website: [Website]
