About Project

Terrain-Aided Navigation: Localization in GPS-denied environments for autonomous UAV navigation

This project proposal targets the development of autonomous navigation schemes for Unmanned Aerial Vehicles (UAVs) in regions where traditional positioning systems, such as the Global Positioning System (GPS) and stereo-vision techniques, are non-operational due to adversarial interference or systemic constraints. The proposed research centers on the study and implementation of a Terrain-Aided Navigation (TAN) algorithm as the primary localization methodology. TAN makes comprehensive use of terrain features to aid navigation and is particularly advantageous where GPS signals are inaccessible or unreliable.

Project Tasks

Four main pipelines: Perception, State Estimation, Prediction, Tracking

Objective 1

Develop a comprehensive estimation scheme for the UAV's latitude and longitude using auxiliary sensors such as an IMU, radar, LiDAR, and barometer. This involves formulating a sensor fusion model that incorporates measurements from all visual and non-visual sensors to estimate the UAV's current location.
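To illustrate the fusion idea, the sketch below blends a (hypothetical) IMU vertical-acceleration stream with barometric altitude readings using a minimal one-dimensional Kalman filter. This is only a toy sketch of the sensor-fusion concept, not the project's actual model; the noise parameters `q` and `r` and the sensor streams are made up.

```python
# Minimal 1-D Kalman filter fusing IMU vertical acceleration (prediction)
# with barometric altitude (correction). Illustrative sketch only; the
# noise values q, r and the sensor streams are hypothetical.

def kalman_altitude(accels, baro_alts, dt=0.1, q=0.05, r=1.0):
    """Return filtered altitude estimates from accel + barometer streams."""
    alt, vel = 0.0, 0.0          # state: altitude, vertical velocity
    p = 1.0                      # scalar error variance for altitude
    estimates = []
    for a, z in zip(accels, baro_alts):
        # Predict: integrate acceleration into velocity and altitude.
        vel += a * dt
        alt += vel * dt
        p += q
        # Update: blend the barometer reading with the prediction.
        k = p / (p + r)          # Kalman gain
        alt += k * (z - alt)
        p *= (1.0 - k)
        estimates.append(alt)
    return estimates

# With zero acceleration and a steady barometer, the estimate converges
# to the barometric altitude.
est = kalman_altitude([0.0] * 50, [10.0] * 50)
print(round(est[-1], 3))
```

The full scheme would extend this scalar filter to the multi-sensor, multi-state case described in the objective.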


Objective 2

Create accurate sensor and UAV models to produce realistic simulation data for testing the efficacy of the proposed algorithm(s). This will involve online/offline batch processing of simulated flight data from the Gazebo physics engine. The complete scheme will be developed in the ROS framework, with all sensor data published as ROS topics.


PSDSARC Project ID: PID-0000B5_01_02

Amount Approved: 300,000 SAR

Time Duration: 12 months

Start Date: 1st December 2023

Terrain Aided Navigation (TAN) Approaches based on Visual and Non-Visual Sensors

Broadly, terrain-aided navigation (TAN) can be categorized into TERCOM (Terrain Contour Matching), which utilizes digital elevation maps (DEMs), and DSMAC (Digitized Scene-Mapping Area Correlator). Both techniques are used for localization in GPS-denied environments, and they can be complementary in certain situations.


TERCOM (Terrain-Contour Matching)

Principle: TERCOM relies on matching the observed terrain contours with the pre-stored digital elevation maps. It compares the current sensor readings, usually from altimeters or terrain-following radar, with the expected readings from the DEM to estimate the platform's position.
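The matching step can be sketched as a search for the DEM strip that best explains the measured elevation profile. The toy DEM and profile below are made up, and sum-of-squared differences stands in for whatever correlation metric the final scheme adopts.

```python
# Sketch of the TERCOM matching step: slide a measured elevation profile
# along each row of a toy DEM and pick the offset with the smallest
# sum-of-squared differences (SSD). DEM values are illustrative.

def tercom_match(dem, profile):
    """Return (row, col) of the best-matching start position in the DEM."""
    n = len(profile)
    best, best_pos = float("inf"), None
    for r, row in enumerate(dem):
        for c in range(len(row) - n + 1):
            ssd = sum((row[c + i] - profile[i]) ** 2 for i in range(n))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

dem = [
    [10, 12, 15, 20, 18, 14],
    [11, 13, 30, 42, 35, 22],   # distinctive ridge in this row
    [10, 11, 12, 13, 12, 11],
]
print(tercom_match(dem, [30, 42, 35]))  # → (1, 2)
```

Note how the distinctive ridge makes the match unambiguous; over the flat third row every offset scores similarly, which is exactly the limitation discussed below.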

Advantages: TERCOM is effective in areas with distinctive terrain features, such as mountains, valleys, or coastlines. It works well at low altitudes and can provide accurate localization in rough terrains.

Limitations: TERCOM may face challenges in areas with monotonous or uniform terrain where distinctive features are lacking. It's also sensitive to sensor noise and may require accurate altitude measurements.

DSMAC (Digitized Scene-Mapping Area Correlator)

Principle: DSMAC involves comparing real-time sensor images with pre-stored reference images or maps. It uses image correlation techniques to identify features in the scene and determine the platform's position.
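The correlation idea can be illustrated with a two-dimensional template match: locate where a small sensor patch best fits a stored reference image. The reference image and patch below are made up, and plain sum-of-squared differences stands in for full normalized image correlation.

```python
# Illustrative DSMAC-style template match: find where a small sensor
# patch best matches a stored reference image. Pure-Python SSD is used
# here in place of full normalized cross-correlation.

def dsmac_locate(reference, patch):
    """Return the (row, col) in `reference` where `patch` fits best."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = float("inf"), None
    for r in range(len(reference) - ph + 1):
        for c in range(len(reference[0]) - pw + 1):
            score = sum(
                (reference[r + i][c + j] - patch[i][j]) ** 2
                for i in range(ph) for j in range(pw)
            )
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

reference = [
    [0, 0, 0, 0],
    [0, 0, 9, 8],
    [0, 0, 7, 6],
    [0, 0, 0, 0],
]
print(dsmac_locate(reference, [[9, 8], [7, 6]]))  # → (1, 2)
```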

Advantages: DSMAC is effective in environments where visual features can be easily discerned. It can be used in both day and night conditions and is less reliant on specific terrain types.

Limitations: DSMAC may struggle in adverse weather conditions, low-visibility scenarios, or areas with insufficient visual features. It can also be computationally demanding.

Complementarity: TERCOM and DSMAC can be complementary in situations where one system's limitations are compensated by the strengths of the other. For example, TERCOM may excel in mountainous regions with distinct terrain features, while DSMAC may perform better in areas with visual features but less distinct terrain.

Block diagram of the TERCOM scheme for GPS-denied navigation


Research problem #1: Searching the DEM (in online mode)


A primary challenge in implementing the TERCOM scheme lies in the efficient retrieval of terrain elevations from Digital Elevation Maps (DEMs). Because these DEMs are essentially very large matrices, searching for specific data strips within a millisecond timeframe presents a significant computational bottleneck. To address this, we aim to develop efficient search algorithms that demonstrably reduce computation time and enable rapid retrieval of the required terrain data. The accuracy of the UAV's estimated latitude and longitude depends directly on the resolution of the DEM used.
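One candidate direction, sketched below, is a coarse-to-fine search: match first on a downsampled copy of the DEM, then refine only in a small full-resolution window around the coarse hit. The toy DEM, the stride `f`, and the window size are illustrative assumptions, not the project's chosen algorithm.

```python
# Hypothetical coarse-to-fine DEM search: find a candidate on a
# downsampled DEM, then refine at full resolution in a small window,
# bounding the expensive full-resolution work to a constant-size area.

def downsample(dem, f):
    """Keep every f-th sample in each direction."""
    return [row[::f] for row in dem[::f]]

def ssd(row, profile, c):
    return sum((row[c + i] - profile[i]) ** 2 for i in range(len(profile)))

def search_rows(dem, profile):
    best, pos = float("inf"), None
    for r, row in enumerate(dem):
        for c in range(len(row) - len(profile) + 1):
            s = ssd(row, profile, c)
            if s < best:
                best, pos = s, (r, c)
    return pos

def coarse_to_fine(dem, profile, f=2, window=2):
    r0, c0 = search_rows(downsample(dem, f), profile[::f])
    r0, c0 = r0 * f, c0 * f                      # back to full-res indices
    best, pos = float("inf"), None
    for r in range(max(0, r0 - window), min(len(dem), r0 + window + 1)):
        row = dem[r]
        for c in range(max(0, c0 - window),
                       min(len(row) - len(profile) + 1, c0 + window + 1)):
            s = ssd(row, profile, c)
            if s < best:
                best, pos = s, (r, c)
    return pos

dem_grid = [
    [10, 10, 10, 10, 10, 10, 10, 10],
    [11, 11, 11, 11, 11, 11, 11, 11],
    [10, 11, 12, 30, 42, 35, 28, 13],   # true profile starts at (2, 3)
    [11, 11, 11, 11, 11, 11, 11, 11],
    [10, 10, 10, 10, 10, 10, 10, 10],
    [11, 11, 11, 11, 11, 11, 11, 11],
]
print(coarse_to_fine(dem_grid, [30, 42, 35, 28]))  # → (2, 3)
```

In practice the refinement window must be sized to cover the worst-case coarse-stage error, which is one of the trade-offs the research would quantify.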

Lower-resolution DEMs:
  1. Limit the attainable accuracy of the estimate.
  2. Impair the effectiveness of TERCOM schemes due to their inherently coarse-grained nature. This is further compounded by the limited availability of high-resolution DEMs, often forcing compromises in both estimation accuracy and TERCOM implementation.
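One common partial mitigation, shown below as a sketch, is bilinear interpolation between DEM posts so that elevations can be queried at sub-cell positions even when only a coarse grid is stored. The grid values and coordinate convention here are illustrative assumptions.

```python
# Bilinear interpolation between DEM posts: queries elevation at a
# fractional grid coordinate (x, y) from a coarse grid. Values below
# are made up for illustration.

def dem_elevation(dem, x, y):
    """Interpolate elevation at fractional grid coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(dem[0]) - 1)
    y1 = min(y0 + 1, len(dem) - 1)
    fx, fy = x - x0, y - y0
    top = dem[y0][x0] * (1 - fx) + dem[y0][x1] * fx
    bot = dem[y1][x0] * (1 - fx) + dem[y1][x1] * fx
    return top * (1 - fy) + bot * fy

coarse_dem = [[100, 110],
              [120, 130]]
print(dem_elevation(coarse_dem, 0.5, 0.5))  # → 115.0
```

Interpolation smooths the grid but cannot recover terrain detail that the coarse DEM never captured, so it does not remove the accuracy limits listed above.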

DEM Availability


Converting the DEM to suitable format

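As one example of such a conversion, the sketch below parses the widely used ESRI ASCII-grid (.asc) DEM format into a plain Python 2-D list. The header keys follow the standard .asc layout; the sample values are made up, and the final pipeline may well use a different raster format or library.

```python
# Sketch: load an ESRI ASCII-grid DEM (.asc) into a header dict and a
# 2-D elevation list. Header lines are "key value" pairs; the remaining
# lines are whitespace-separated elevation rows.

def _is_number(tok):
    try:
        float(tok)
        return True
    except ValueError:
        return False

def load_asc(text):
    """Parse ESRI ASCII-grid text into (header_dict, 2-D elevation list)."""
    header, rows = {}, []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 2 and not _is_number(parts[0]):
            header[parts[0].lower()] = float(parts[1])
        else:
            rows.append([float(v) for v in parts])
    return header, rows

sample = """\
ncols 3
nrows 2
xllcorner 500000.0
yllcorner 2700000.0
cellsize 30.0
NODATA_value -9999
612.0 615.5 618.0
610.2 613.1 616.4
"""
header, grid = load_asc(sample)
print(header["cellsize"], grid[1][2])  # → 30.0 616.4
```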

Research problem #2: Fusion of all information from Visual and Non-Visual Sensors using EKF

  • Extended and Error-State Kalman Filters employ a non-linear system model that must be re-linearized at every step, which is computationally demanding for online prediction. (An example of a non-linear system matrix is shown on the next slide.)
  • Based on the literature survey, we will select the most appropriate scheme for our project, insha'Allah.
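To make the re-linearization cost concrete, the sketch below performs one EKF correction step for a 2-D position state with a non-linear range measurement h(x, y) = sqrt(x² + y²), showing where the Jacobian enters. The state, measurement, and noise values are illustrative, not the project's final filter design.

```python
# One EKF correction step with a non-linear range measurement. The
# Jacobian H of h(x, y) = sqrt(x² + y²) must be recomputed at the
# current estimate on every step; this is the recurring online cost.
import math

def ekf_range_update(x, y, P, z, r=0.5):
    """One EKF correction; P is a 2x2 covariance, z a measured range."""
    rng = math.hypot(x, y)
    H = [x / rng, y / rng]                      # Jacobian of h at (x, y)
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],     # P Hᵀ (2-vector)
           P[1][0] * H[0] + P[1][1] * H[1]]
    s = H[0] * PHt[0] + H[1] * PHt[1] + r      # innovation covariance
    K = [PHt[0] / s, PHt[1] / s]                # Kalman gain
    innov = z - rng
    x, y = x + K[0] * innov, y + K[1] * innov
    # Covariance update P ← (I − K H) P (P assumed symmetric)
    P = [[P[0][0] - K[0] * PHt[0], P[0][1] - K[0] * PHt[1]],
         [P[1][0] - K[1] * PHt[0], P[1][1] - K[1] * PHt[1]]]
    return x, y, P

# A measured range of 5.5 pulls the (3, 4) estimate outward.
nx, ny, nP = ekf_range_update(3.0, 4.0, [[1.0, 0.0], [0.0, 1.0]], 5.5)
print(round(nx, 3), round(ny, 3))
```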
Meet The Team

Our Expert Team Members

Meet the dedicated professionals behind our groundbreaking work in autonomous drone systems at the RIOTU Lab, Prince Sultan University.

1

Research Center Director

Prof. Anis Koubaa

2

Principal Investigator

Dr. Muhammad Bilal Kadri

3

Robotics Team Lead

Dr. Mohamed Abdelkader

4

Researcher

Eng. Wadii Boulila

5

Postdoc

Dr. Imen Jarraya

6

AI Senior Researcher

Dr. Adel Ammar

7

Research Engineer

Eng. Abdulrahman AlBatati

8

Research Engineer

Eng. Khaled Gaber

Contact

GET IN TOUCH

Address

Building 101, Prince Sultan University, PPP2+CGP, Nasir Ben Farhan Street, King Salman Neighborhood, Riyadh 12435.

© Robotics and Internet-of-Things Lab.