The aim of WORK PACKAGE 1 is to design the architecture of a modular Trustworthy AI-based system for voice monitoring. Work is split into the following tasks:

Task 1.1. Design and Trustworthy assessment of the ExplainMe system

Task 1.2. Development of the ExplainMe system architecture

Although the way a person speaks carries important information about many disorders [see e.g. Low et al. (2020). "Automated assessment of psychiatric disorders using speech: A systematic review." Laryngoscope Investigative Otolaryngology. https://doi.org/10.1002/lio2.354], there is currently no trustworthy system for generating summary information about speech and state of health.

These tasks include a systematic review of the literature and the development of socio-technical scenarios covering voice analysis in medical and healthcare applications. The trustworthiness of the AI system for voice monitoring will be assessed, in accordance with ethical and legal principles and the concept of Trustworthy Artificial Intelligence (TAI), using the Z-Inspection methodology [R.V. Zicari et al., "Z-Inspection®: A process to assess trustworthy AI," IEEE Transactions on Technology and Society 2 (2021)]. This methodology follows the European Commission High-Level Expert Group's (EU HLEG) "Ethics Guidelines for Trustworthy AI," which call for trustworthy and ethical standards in the development, deployment and use of artificial intelligence systems, and it also takes into account the socio-technical context in which the system will be used.