Sustainable and Trustworthy AI Solutions for Safety-Critical Cyber-physical Systems and Practical Applications

Leader: Shen Yin (NTNU)

Objectives - The aim of WP5 is to develop sustainable and trustworthy AI solutions and to enable their seamless integration into safety-critical cyber-physical systems and practical applications. The challenges are twofold: ensuring trust in safety-critical cyber-physical platforms and enabling sustainable real-time anomaly detection across a range of industries. WP5 will address this interplay with a structured, interdisciplinary methodology.

  • Establishing Trustworthy AI for Safety-Critical Cyber-physical Environments: This objective will develop AI methods that prioritise transparency, user interpretability and fault resilience, and will align them with functional safety standards without compromising performance or stringent safety requirements.
  • Sustainable Real-time Anomaly Detection across Sectors: This objective will develop sustainable and practical methods for dependable, real-time anomaly detection. The resulting solutions will target resource efficiency, scalability, data privacy, resilience, transparency, interpretability and cross-domain applicability.
  • Harmonising Trust with Real-time Sustainability in AI: Moving beyond the individual challenges, this objective will explore how AI can deliver both immediate responses and trustworthiness under critical conditions, establishing an approach in which rapid-response capability coexists with enduring reliability.

Tasks:

T1 Trust Reinforcement in Safety-Critical AI Ecosystems

This task addresses the need for trust in safety-critical frameworks at the level of concrete AI implementation. Emphasis will be placed on designing AI architectures that support transparency, and algorithmic robustness will be rigorously validated against diverse failure scenarios. Dedicated workshops, together with industry partnerships, will ensure that the AI frameworks remain aligned with evolving safety regulations. The prime beneficiaries are industries such as automotive and aerospace.
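
The sketch below illustrates one safeguard pattern this task could build on: an AI predictor wrapped in a runtime safety envelope that rejects implausible or low-confidence outputs, substitutes a conservative fallback, and keeps an audit trail for post-hoc interpretability. The class names, thresholds and toy predictor are illustrative assumptions, not a committed design.

```python
# Minimal sketch of a runtime safety envelope for an AI component (T1).
# All names, thresholds and the toy predictor are illustrative placeholders.
from dataclasses import dataclass
from typing import Callable
import math


@dataclass
class GuardedDecision:
    value: float    # the action or estimate passed downstream
    trusted: bool   # whether the AI output passed the plausibility checks
    reason: str     # human-readable justification, kept for audit logs


class SafetyEnvelope:
    """Wraps an AI predictor with plausibility checks and a conservative fallback.

    Predictions outside the physically admissible range, or with too low a
    reported confidence, are replaced by the fallback. Every decision is logged
    so that behaviour remains traceable after the fact."""

    def __init__(self, predictor: Callable[[list], tuple],
                 fallback: Callable[[list], float],
                 valid_range: tuple, min_confidence: float = 0.8):
        self.predictor = predictor
        self.fallback = fallback
        self.valid_range = valid_range
        self.min_confidence = min_confidence
        self.audit_log = []

    def decide(self, features: list) -> GuardedDecision:
        prediction, confidence = self.predictor(features)
        lo, hi = self.valid_range
        if not (lo <= prediction <= hi):
            decision = GuardedDecision(self.fallback(features), False,
                                       f"prediction {prediction:.3f} outside [{lo}, {hi}]")
        elif math.isnan(confidence) or confidence < self.min_confidence:
            decision = GuardedDecision(self.fallback(features), False,
                                       f"confidence {confidence:.2f} below {self.min_confidence}")
        else:
            decision = GuardedDecision(prediction, True, "checks passed")
        self.audit_log.append(decision)  # retained for post-hoc interpretability
        return decision


if __name__ == "__main__":
    # Toy predictor: an estimate plus a self-reported confidence.
    predictor = lambda x: (0.5 * sum(x), 0.9 if len(x) > 2 else 0.4)
    fallback = lambda x: 1.0  # conservative constant action
    envelope = SafetyEnvelope(predictor, fallback, valid_range=(0.0, 10.0))
    print(envelope.decide([1.0, 2.0, 3.0]))  # trusted AI output
    print(envelope.decide([1.0]))            # low confidence -> fallback
```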


T2 Pioneering Sustainable Anomaly Detection Mechanisms

This task shifts the focus from theoretical anomaly detection design to practical, actionable implementations. It will evaluate contemporary anomaly detection paradigms against sustainability criteria, including energy efficiency, scalability, data privacy, resilience, interpretability and adaptability, and the same criteria will guide the development of new practical solutions. Collaboration across sectors will provide the domain insight needed to push detection methods forward. The ultimate goal is detection mechanisms that are both sustainable and applicable across diverse domains.
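
As a concrete illustration of the kind of resource-frugal detector this task will benchmark, the sketch below implements a streaming z-score detector with constant memory and constant per-sample update cost, and records per-sample latency as a crude proxy for the compute and energy footprint. The threshold, warm-up period and synthetic data are assumptions for demonstration only, not results or a selected method.

```python
# Illustrative streaming anomaly detector with O(1) memory and O(1) update cost.
# Thresholds, warm-up and the synthetic stream are assumptions for demonstration.
import math
import random
import time


class StreamingZScoreDetector:
    """Online mean/variance via Welford's algorithm; flags samples whose
    z-score exceeds a threshold. The score is directly interpretable as a
    deviation measured in standard deviations."""

    def __init__(self, threshold: float = 3.0, warmup: int = 30):
        self.threshold = threshold
        self.warmup = warmup
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations, for variance

    def update(self, x: float):
        """Returns (is_anomaly, z_score), then folds x into the statistics."""
        z = 0.0
        if self.n >= self.warmup and self.m2 > 0:
            std = math.sqrt(self.m2 / (self.n - 1))
            z = abs(x - self.mean) / std
        is_anomaly = z > self.threshold
        if not is_anomaly:  # outliers are not folded in, so they cannot skew the model
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
        return is_anomaly, z


if __name__ == "__main__":
    detector = StreamingZScoreDetector(threshold=4.0)
    latencies = []
    random.seed(0)
    for t in range(1000):
        value = random.gauss(0.0, 1.0) + (15.0 if t == 700 else 0.0)  # injected fault
        start = time.perf_counter()
        flagged, score = detector.update(value)
        latencies.append(time.perf_counter() - start)
        if flagged:
            print(f"t={t}: anomaly, z={score:.1f}")
    # Per-sample latency as a crude proxy for the energy/compute footprint.
    print(f"mean update latency: {sum(latencies) / len(latencies) * 1e6:.2f} microseconds")
```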


T3 Bridging Trustworthiness and Sustainable AI Dynamics

This task recognises the need to reconcile trust with sustainability. A multidisciplinary approach will bring together AI architects and real-time data analysts to chart a shared path. Central to this will be pilot projects in which AI systems are tested for their real-time responsiveness and simultaneously assessed against trustworthiness standards. The key stakeholders for this phase are sectors that combine the need for immediate AI insights with an undiminished emphasis on trust, such as intelligent urban planning and critical infrastructure oversight.
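
The sketch below outlines how such a pilot evaluation could score a candidate model jointly on responsiveness (worst-case latency against a budget) and on a trustworthiness proxy (expected calibration error, i.e. the gap between stated confidence and observed accuracy). The metric choices, latency budget and toy model are illustrative assumptions rather than project decisions.

```python
# Joint responsiveness/trustworthiness evaluation sketch for the T3 pilots.
# Metric choices, budgets and the toy model are illustrative assumptions.
import random
import time
from typing import Callable


def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Average gap between stated confidence and observed accuracy, per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    ece, total = 0.0, len(confidences)
    for bucket in bins:
        if bucket:
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
            ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece


def evaluate(model: Callable, stream, latency_budget_ms: float = 5.0) -> dict:
    """Runs a model over a labelled stream, recording per-sample latency and
    how well calibrated its confidence-weighted predictions are."""
    latencies, confidences, correct = [], [], []
    for x, label in stream:
        start = time.perf_counter()
        prediction, confidence = model(x)
        latencies.append((time.perf_counter() - start) * 1000.0)
        confidences.append(confidence)
        correct.append(prediction == label)
    return {
        "worst_case_latency_ms": max(latencies),
        "meets_latency_budget": max(latencies) <= latency_budget_ms,
        "expected_calibration_error": expected_calibration_error(confidences, correct),
        "accuracy": sum(correct) / len(correct),
    }


if __name__ == "__main__":
    random.seed(1)
    # Toy labelled stream: positive class when the reading exceeds 0.5.
    stream = [(x, x > 0.5) for x in (random.random() for _ in range(500))]
    # Toy model: thresholds the reading and reports a fixed confidence.
    model = lambda x: (x > 0.5, 0.9)
    print(evaluate(model, stream))
```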