Collaborative Sensor Fusion System: Concept, Modelling and Applications

Leader: Jerry C.-W. Lin (SUT)

Objectives - The aim of this WP is to combine data from multiple sensors or sources in order to improve perception, decision-making and overall system performance. The designed system can be deployed across different applications and domains (e.g., collaborative AGVs or drones). The objectives of this WP are:

  • The developed system will bring together data from a variety of sensors and intelligent devices, e.g., cameras, LiDAR, radar, ultrasonics and GPS, among others. These sensors will provide different types of information about the environment and the collaborative sensor fusion system will seek to make the most of their complementary strengths for decision-making.
  • The designed system will improve the perception capabilities of systems or devices such as AGVs, drones or industrial robots. By fusing data from multiple sensors, the system will better recognise and understand objects, obstacles, the terrain and other elements in the environment.
  • The designed system will process and use the combined sensor data in real time. This will enable timely decision-making and action, especially in dynamic and rapidly changing situations. Thus, lightweight models that can be deployed on the sensors or intelligent devices should be studied and designed.

Tasks:

T1 Developing a Multi-Sensor Data Integration Framework

It will be necessary to develop a framework that facilitates the seamless integration of data from multiple sensors and intelligent devices for efficient decision-making. The designed system is expected to offer a high degree of compatibility and scalability so that new sensors and devices can be integrated or added easily. Another aim of this task will be to focus on the data pre-processing steps that ensure data quality and consistency. Since the data will come from different, heterogeneous devices, cross-modal data alignment will need to be investigated to ensure data synchronisation. It will also be important to develop fusion algorithms and models that combine the data from the various devices and sensors efficiently and effectively in order to extract valuable information.
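As an illustrative sketch of the alignment and fusion steps described above (not a design decision of this WP), the fragment below assumes each sensor stream is a list of `(timestamp, value)` pairs: samples from two streams are paired by nearest timestamp within a tolerance, and paired scalar measurements are combined by inverse-variance weighting. All function and parameter names are hypothetical.

```python
from bisect import bisect_left

def align_nearest(reference, other, max_skew=0.05):
    """For each (t, value) in the reference stream, pair it with the sample
    in `other` whose timestamp is closest, dropping pairs whose skew
    exceeds max_skew seconds. Both streams must be sorted by timestamp."""
    times = [t for t, _ in other]
    pairs = []
    for t, v in reference:
        i = bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times[k] - t))
        if abs(times[j] - t) <= max_skew:
            pairs.append((t, v, other[j][1]))
    return pairs

def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two noisy scalar measurements:
    the less noisy sensor gets the larger weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2)
```

In a real deployment this nearest-neighbour pairing would typically be replaced by interpolation or a filtering framework (e.g. a Kalman filter) operating on the synchronised streams.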


T2 Object/Obstacle Recognition and Detection Models for Accurate Perception

The first aim of this task will be to develop feature extraction algorithms that effectively represent the salient features of the multi-sensor data useful for object and obstacle recognition. Algorithms for object detection and tracking will also be designed to ensure accurate, real-time identification of objects. The designed algorithms should be able to understand the overall scene, e.g., the spatial relationships between objects and the context of the environment. VR/AR models will also be considered in this task to achieve a better 3D perception capability, providing alternative and efficient solutions for improved perception in the designed system.
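To make the detection-and-tracking step concrete, the sketch below shows one common baseline (greedy intersection-over-union association between existing tracks and new detections); it is an illustration only, and the actual algorithms will be designed in this task. Boxes are assumed to be axis-aligned `(x1, y1, x2, y2)` tuples, and all names are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match each existing track to the unused detection with the
    highest IoU, keeping only matches above the threshold. Returns a list
    of (track_index, detection_index) pairs."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        scored = [(iou(t, d), di) for di, d in enumerate(detections)
                  if di not in used]
        if not scored:
            continue
        best, di = max(scored)
        if best >= threshold:
            matches.append((ti, di))
            used.add(di)
    return matches
```

Production trackers usually replace the greedy loop with optimal assignment (e.g. the Hungarian algorithm) and add motion prediction, but the data flow is the same.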


T3 Application of Machine Learning, Visual Analytics and Explainable Artificial Intelligence in the Healthcare Industry

This task aims to create a first use case that combines the main advances of Tasks T1 and T2. We will integrate deep learning models with advanced visualisation techniques and explainable artificial intelligence methods to solve problems in the healthcare domain, and to visualise and interpret the resulting solutions in the most appropriate way possible, so that the medical team and patients can understand the system's reasoning and the rationale behind the output in each specific context. Our goal within this task is to explain, in a comprehensible way, the information behind black-box models and to reveal how decisions are made. We will explore the most advanced and recent techniques and examine how the most important underlying information from the different models may be extracted and presented in real-world scenarios. We will also address some of the most complex identified challenges regarding time-series datasets in healthcare, such as automatic model generation and the discovery of event data, to understand and explain how some features may trigger others.
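One model-agnostic technique of the kind this task will explore is permutation importance: shuffling an informative input feature should degrade a black-box model's predictions, so the resulting error increase can be reported as that feature's importance. The minimal sketch below assumes a prediction function over feature lists; it illustrates the idea only, and is not the method selected for the use case.

```python
import random

def permutation_importance(predict, X, y, seed=0):
    """Model-agnostic permutation importance. For each feature column,
    shuffle its values across samples and report the increase in mean
    squared error relative to the unshuffled baseline."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        shuffled = [row[:j] + [col[i]] + row[j + 1:]
                    for i, row in enumerate(X)]
        importances.append(mse(shuffled) - base)
    return importances
```

Features the model ignores score (close to) zero, which is exactly the kind of signal that can be visualised for clinicians; for time-series data, contiguous windows rather than single columns would be permuted.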


T4 Deployment and Evaluation of Developed Models and Tools in Real-world Scenarios

This task constitutes a pivotal phase dedicated to the practical deployment and empirical assessment of the models and techniques developed in Tasks T1, T2 and T3. With close attention to translating theoretical constructs into tangible solutions, this task focuses on implementing and rigorously validating these methodologies in genuine industrial settings. By integrating our developments into authentic, industry-specific contexts, we expect not only to assess the feasibility of these techniques but also to forge strategic collaborations with the relevant sectors and industrial stakeholders who share our vision and objectives. This intersection of academia and industry will demonstrate the practical potential of our models and promises tangible benefits in real-world settings.