The Navya Driver software is built around a backbone of three modules: Perception, Decision and Action, each composed of submodules. Around this backbone, satellite modules support the operation of the system.
The Perception module allows the vehicle to understand its surroundings, detect obstacles and anticipate their movement.
Data is collected through sensors and cameras and processed by our Navya Driver software to identify obstacles and estimate their position, speed and behavior. This information is then transmitted to the Decision module.
The SENSORS team’s objective is to determine the best possible combination of sensors for the vehicle. To do this, the team constantly monitors the latest sensor innovations available on the market. It may also be called upon to optimise the sensor hardware architecture or the way the sensors operate. In this respect, it has unique expertise in configuring sensors for specific needs – for example, defining the optimal speed at which to run them or the frequency at which they send data.
Over the years, Navya has developed unique know-how in sensor configuration and calibration. Configuration refers to the adjustment of the sensors and the optimisation of their operation; calibration refers to the optimal positioning of the sensors and the associated procedures.
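To illustrate the distinction, a per-sensor configuration can be pictured as a small set of tunable parameters, such as a spin rate and an output frequency. This is a minimal sketch with hypothetical names and values, not Navya's actual configuration format:

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    """Hypothetical per-sensor configuration: spin rate and publish frequency."""
    name: str
    rotation_hz: float   # e.g. a LiDAR's spin rate
    output_hz: float     # frequency at which the sensor publishes data

def validate(cfg: SensorConfig) -> bool:
    """Toy sanity rule: a sensor cannot publish faster than it acquires."""
    return 0 < cfg.output_hz <= cfg.rotation_hz

lidar = SensorConfig("front_lidar", rotation_hz=10.0, output_hz=10.0)
print(validate(lidar))  # True
```

Calibration, by contrast, would concern the sensor's mounting pose (position and orientation on the vehicle), which is measured rather than tuned.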
The OBSTACLE module enables the autonomous driving systems developed by Navya to detect, track and classify surrounding obstacles. It lists, in real time, the obstacles surrounding the vehicle and determines the position, speed and shape of each one, tracking how this information evolves over time. The OBSTACLE module processes billions of pieces of information provided by the sensors, in real time, to build information that Navya Driver can use to make optimal driving decisions.
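The tracking step described above can be sketched as a minimal constant-velocity track that estimates an obstacle's speed from successive detections. This is a toy illustration under simplified assumptions (2D positions, no classification, no noise filtering), not Navya's tracking algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObstacle:
    """Minimal obstacle track: position and velocity updated from detections."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    history: list = field(default_factory=list)  # evolution over time

    def update(self, x: float, y: float, dt: float) -> None:
        # Estimate velocity from the displacement between two detections.
        if dt > 0:
            self.vx = (x - self.x) / dt
            self.vy = (y - self.y) / dt
        self.x, self.y = x, y
        self.history.append((x, y))

track = TrackedObstacle(x=0.0, y=0.0)
track.update(1.0, 0.5, dt=0.1)   # moved 1 m ahead, 0.5 m aside in 100 ms
print(track.vx, track.vy)        # 10.0 5.0
```

A production tracker would additionally associate detections to tracks, smooth the estimates (e.g. with a Kalman filter), and maintain shape and class information.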
One of Navya’s areas of expertise is transmitting the vehicle’s kinematics – position, orientation and speed – with very high precision and in real time. The LOCALIZATION module generates this information from the data transmitted by each sensor type fitted to the vehicle: LiDAR, GNSS, odometry, IMU and camera. The data provided by the sensors are pre-processed through algorithmic building blocks developed by Navya for each sensor. The LOCALIZATION module then merges this information to provide the vehicle’s position in as many environments as possible.
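The merging step can be pictured as a confidence-weighted combination of per-sensor position estimates. The sketch below is a deliberately simple weighted average with hypothetical readings and weights; the actual LOCALIZATION algorithms are far more sophisticated:

```python
def fuse_positions(estimates):
    """Fuse (x, y, weight) position estimates into a single position.

    The weight stands in for per-sensor confidence, e.g. GNSS in open sky
    versus odometry drift.
    """
    total = sum(w for _, _, w in estimates)
    x = sum(x * w for x, _, w in estimates) / total
    y = sum(y * w for _, y, w in estimates) / total
    return x, y

# Hypothetical readings: GNSS, LiDAR map-matching, odometry (x, y, weight).
readings = [(10.2, 5.1, 1.0), (10.0, 5.0, 4.0), (10.4, 4.9, 1.0)]
print(fuse_positions(readings))  # approximately (10.1, 5.0)
```

Weighting lets a sensor that is currently unreliable (e.g. GNSS in a tunnel) be de-emphasised without being discarded, which is one reason fusing several sensor types extends the set of environments where localization works.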
At this stage, the Navya Driver software calculates the route and path of the vehicle.
The software combines data received from the sensors with V2X information (e.g. traffic-light communication) and a high-definition map of the site to determine an optimal trajectory for the vehicle, taking into account the safety and comfort of operations.
The trajectory and driving commands are sent to the Action module.
The DRIVING module allows Navya Driver to make decisions that generate vehicle actions based on the information perceived by the vehicle. It receives the information transmitted by the OBSTACLE and LOCALIZATION modules, which can be supplemented by V2I (Vehicle-to-Infrastructure) information, for example. On this basis, the DRIVING module determines the optimal trajectory of the vehicle, taking into account the safety and comfort of operations. It enables Navya Driver to determine the vehicle’s speed profile and spatial trajectory and to merge them into the vehicle’s spatial and temporal trajectory, in line with the operational missions it has been given.
Our driving technology allows the Navya Driver software to control the vehicle’s main driving controls and to determine its optimal speed profile and spatial trajectory. Navya Driver merges these to build the vehicle’s space-time trajectory; its technology is both the conductor and the driver of the system.
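The merge of a spatial path with a speed profile into a space-time trajectory can be sketched by timestamping each waypoint from the segment length and the target speed on that segment. All data here is hypothetical; this is an illustration of the concept, not Navya's planner:

```python
import math

def build_space_time_trajectory(waypoints, speeds):
    """Attach a timestamp to each (x, y) waypoint.

    `speeds[i]` is the target speed (m/s) on the segment leaving waypoint i,
    so each waypoint becomes an (x, y, t) point on the space-time trajectory.
    """
    t = 0.0
    trajectory = [(waypoints[0][0], waypoints[0][1], t)]
    for (x0, y0), (x1, y1), v in zip(waypoints, waypoints[1:], speeds):
        dist = math.hypot(x1 - x0, y1 - y0)
        t += dist / v          # time to traverse the segment at speed v
        trajectory.append((x1, y1, t))
    return trajectory

path = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
profile = [5.0, 2.5]  # slow down for the final segment
print(build_space_time_trajectory(path, profile))
# [(0.0, 0.0, 0.0), (10.0, 0.0, 2.0), (10.0, 5.0, 4.0)]
```

Keeping the spatial path and the speed profile separate until this final merge is what lets the planner adjust speed (e.g. for comfort or an obstacle) without recomputing the geometry of the path.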
All the autonomous driving systems operate with the help of very-high-definition maps, which allow Navya Driver to confirm its location by comparing the map with the environment perceived in real time by the sensors. These maps of various types are produced and updated by the Navya teams. The mission of the CARTOGRAPHY engineering team is to provide maps to the CARTOGRAPHY module of Navya Driver. The main type of map developed by the Navya teams is the LiDAR map, representing the 3D environment perceived by the sensors. These maps are then enriched to integrate as much information as possible for Navya Driver to use when the vehicle is in operation.
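The comparison between the map and the perceived environment can be pictured as a matching score between sensed points and map points. The sketch below is a toy nearest-neighbour check on 2D points; real LiDAR map matching operates on dense 3D data with far more efficient data structures:

```python
import math

def match_score(scan, map_points, max_dist=0.5):
    """Fraction of scan points lying within `max_dist` metres of a map point.

    A high score suggests the assumed vehicle pose is consistent with the map.
    """
    def nearest(p):
        return min(math.dist(p, m) for m in map_points)
    hits = sum(1 for p in scan if nearest(p) <= max_dist)
    return hits / len(scan)

# Hypothetical data: a wall in the map, and a scan with one outlier.
map_pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
scan = [(0.1, 0.0), (1.0, 0.1), (5.0, 5.0)]  # two matches, one outlier
print(match_score(scan, map_pts))  # 2 of 3 points matched
```

In practice such a score would be evaluated over many candidate poses, with the best-scoring pose confirming (or correcting) the vehicle's estimated location.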
The team of engineers working on the Human-Machine Interface (HMI) aims to help humans – supervisors, passengers, maintenance engineers and road users – in their interactions with the mobility solution. In a system as complex as an autonomous driving system, which processes large amounts of data in real time, our know-how lies in making the vehicle as simple as possible to use and in prioritizing the information presented. The HMI teams are in charge of developing multimedia elements inside and outside the vehicle: sounds and visuals. To do this, they constantly monitor market innovations, such as those in the public transport sector for passenger transport.
Today, it is impossible to test and validate a vehicle’s performance solely through real-world testing: the number of kilometres that would have to be covered requires too many human, material and financial resources. This is all the more true for autonomous vehicles, which are expected to significantly reduce road mortality and optimise logistics flows, and for which the requirements are higher than for conventional vehicles.
The SIMULATION engineering team aims to provide a virtual test suite to the entire Navya R&D department for testing and validating their algorithms. This suite allows the sub-parts of the Navya Driver software to be tested iteratively in isolation before a version of Navya Driver is tested as a whole.
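Testing a sub-part in isolation can be sketched as a tiny harness that replays recorded scenarios through one decision function and checks the expected outcome. The decision rule and scenario data below are entirely hypothetical, used only to show the shape of such a harness:

```python
def emergency_brake_decision(obstacle_distance_m, speed_mps,
                             reaction_s=0.5, decel_mps2=3.0):
    """Toy decision rule: brake if the stopping distance exceeds the gap."""
    stopping = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    return stopping >= obstacle_distance_m

def run_scenario(scenario):
    """Replay one recorded scenario and compare against the expected decision."""
    got = emergency_brake_decision(scenario["distance"], scenario["speed"])
    return got == scenario["expect_brake"]

# Hypothetical recorded scenarios with their expected outcomes.
scenarios = [
    {"distance": 30.0, "speed": 5.0, "expect_brake": False},
    {"distance": 5.0, "speed": 8.0, "expect_brake": True},
]
print(all(run_scenario(s) for s in scenarios))  # True
```

Because such scenarios run in milliseconds, thousands of situations can be replayed on every software change, long before a full vehicle-level simulation or a real-world test.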
The simulation process improves our ability to make Navya Driver ever more efficient and experienced, capitalizing on the operational experience we have gained since 2015 while optimizing our resources.