Leader: Shen Yin (NTNU)
Objectives - WP5 aims to develop sustainable and trustworthy AI solutions and to integrate them into safety-critical cyber-physical systems and complex applications. The challenges are twofold: establishing trust in safety-critical cyber-physical platforms and enabling sustainable real-time anomaly detection across a range of industries. WP5 will address this interplay with a structured, interdisciplinary methodology.
Tasks:
T1 Trust Reinforcement in Safety-Critical AI Ecosystems
This task addresses the need for trust in critical systems at the level of concrete AI implementation. The emphasis is on designing AI architectures that make transparency a first-class property and on rigorously vetting algorithmic robustness under diverse operating conditions. Dedicated workshops, run in partnership with industry, will keep the resulting AI frameworks aligned with evolving safety regulations. The primary beneficiaries are safety-critical industries such as automotive and aerospace.
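As a minimal sketch of what robustness vetting can look like in practice, the example below compares a classifier's accuracy on clean test inputs against inputs perturbed with Gaussian noise of increasing strength. The model, dataset and noise levels are illustrative assumptions, not the evaluation protocol of T1 itself.

```python
# Illustrative robustness check: accuracy of a classifier under input noise.
# Model choice, synthetic data and noise levels are assumptions for the sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.1, 0.5, 1.0):
    # Perturb the test inputs and record the accuracy drop as a robustness signal.
    X_noisy = X_test + rng.normal(0.0, sigma, X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")
```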
T2 Pioneering Sustainable Anomaly Detection Mechanisms
This task shifts the focus from theoretical anomaly detection design to practical, deployable implementations. Contemporary anomaly detection approaches will be evaluated against sustainability criteria including energy efficiency, scalability, data privacy, resilience, interpretability and adaptability, and the same criteria will guide the development of new solutions. Collaboration across sectors will provide the operational insight needed to advance detection methods. The goal is detection mechanisms that are both sustainable and applicable across diverse domains.
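The sketch below illustrates, under stated assumptions, how an anomaly detector can be scored on detection quality and on simple sustainability proxies at the same time. It uses an Isolation Forest on synthetic data and reports ROC AUC alongside training and scoring time; the detector, data and proxies are placeholders, not the WP5 benchmark.

```python
# Illustrative joint evaluation: detection quality plus simple cost proxies.
# IsolationForest, synthetic data and wall-clock time are assumptions for the sketch.
import time
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(5000, 10))      # nominal operation
anomalies = rng.normal(4.0, 1.0, size=(50, 10))     # injected faults
X = np.vstack([normal, anomalies])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(anomalies))])

detector = IsolationForest(random_state=0)

t0 = time.perf_counter()
detector.fit(normal)                                 # train on nominal data only
fit_time = time.perf_counter() - t0

t0 = time.perf_counter()
scores = -detector.score_samples(X)                  # higher score = more anomalous
score_time = time.perf_counter() - t0

print(f"ROC AUC        : {roc_auc_score(y, scores):.3f}")
print(f"fit time  [s]  : {fit_time:.3f}")
print(f"score time [s] : {score_time:.3f}")
```

In a real study, the timing proxies would be replaced or complemented by measured energy consumption and memory footprint on the target hardware.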
T3 Bridging Trustworthiness and Sustainable AI Dynamics
This task addresses the need to reconcile trustworthiness with sustainability. A multidisciplinary approach will bring together AI architects and real-time data analysts to chart a workable middle path. Central to this are pilot projects in which AI systems are tested for real-time responsiveness and simultaneously assessed against trustworthiness standards. Key stakeholders are sectors that combine the need for immediate AI insights with a strong emphasis on trust, such as intelligent urban planning and critical infrastructure oversight.
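As one possible shape for such a joint assessment, the sketch below measures per-sample inference latency as a real-time proxy and probability calibration (Brier score) as one trust-related proxy for the same model. The model, data and choice of proxies are illustrative assumptions.

```python
# Illustrative joint assessment: real-time proxy (latency) and trust proxy (calibration).
# Logistic regression on synthetic data is an assumption for the sketch.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

t0 = time.perf_counter()
proba = model.predict_proba(X_test)[:, 1]
latency_ms = (time.perf_counter() - t0) / len(X_test) * 1e3

print(f"mean latency per sample : {latency_ms:.4f} ms")                       # real-time proxy
print(f"Brier score (calibration): {brier_score_loss(y_test, proba):.3f}")    # trust proxy
```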