The project is carried out in cooperation with Britenet Med sp. z o.o., scientists from the Institute of Psychiatry and Neurology in Warsaw and with scientific partners from the following centers:

Prof. José María Alonso Moral

Research Centre on Intelligent Technologies (CiTIUS), University of Santiago de Compostela (USC), Spain

Prof. Jose Maria Alonso Moral is an outstanding scientist and a world-leading expert on trustworthy artificial intelligence.
Prof. Jose Maria Alonso Moral will bring significant added value to the project in the form of expert knowledge of the latest achievements in computational intelligence, as well as experience in managing research and development teams. He has been a visiting professor at several universities in Europe.
Prof. Alonso has led teams that developed, among others, systems supporting the prevention and monitoring of neurodegenerative diseases, as well as decision-support systems for elderly care and cardiac rehabilitation. In addition, he is a member of numerous committees and international initiatives on artificial intelligence, including the leading European Conference on Artificial Intelligence (ECAI) and the IEEE Computational Intelligence Society. He has published over 170 articles in international journals, edited volumes and conference proceedings. He has received numerous awards, including the Sobresaliente Cum Laude award and the Doctor Europeus distinction.
Prof. Alonso Moral will join the scientific work on the development of the explainable system architecture and will collaborate on the credibility and trustworthiness of artificial intelligence in medical applications.
Prof. Alonso will also join the international advisory board of the ExplainMe project.

Prof. Roberto V. Zicari

Trustworthy AI Lab, Arcada University of Applied Sciences, Helsinki, Finland

Prof. Roberto V. Zicari is a professor at Yrkeshögskolan Arcada in Helsinki, Finland, and at Seoul National University in South Korea.
Prof. Zicari leads a team of international experts who defined the Z-Inspection standard, which enables the assessment of trustworthy AI solutions. Z-Inspection is also an international research community that trains organizations and institutions in evaluating the use of artificial intelligence.
The lab connects academia and civil society, including AI developers, students, end users, researchers and interested parties.
Prof. Zicari is the editor of the ODBMS.org portal and the ODBMS Industry Watch blog. He was a visiting professor at the Entrepreneurship and Technology Center at the Faculty of Industrial Engineering and Operations Research of the University of California, Berkeley (USA). He is also an internationally recognized expert in the field of databases and Big Data. Previously, he was a professor of Databases and Information Systems (DBIS) at Goethe University Frankfurt, Germany, where he founded the Frankfurt Big Data Lab.
Prof. Zicari will contribute his experience in designing and evaluating trustworthy AI to the ExplainMe project. His expertise also covers ethics and artificial intelligence, innovation and entrepreneurship.

Prof. Jochen Leidner

Coburg University of Applied Sciences and Arts, Germany

Professor Leidner holds several patents in the areas of information retrieval, natural language processing and mobile computing, and is a two-time winner of the Thomson Reuters Inventor of the Year Award for the best patent application. His experience includes positions as Research Director at Thomson Reuters and Refinitiv in London, where he headed a research and development team. Before joining Thomson Reuters, he worked for SAP and founded or co-founded several start-ups. He is also a visiting professor at the Department of Computer Science of the University of Sheffield. His education includes a degree in computational linguistics (University of Erlangen-Nuremberg), a Master’s degree in Computer Speech, Text and Internet Technologies (University of Cambridge) and a PhD in Computer Science (University of Edinburgh), for which he won the ACM SIGIR Doctoral Consortium Award. His scientific achievements include leading the teams that developed the open-domain question answering systems QED and ALYSSA (evaluated in the US NIST/DARPA TREC campaigns).

Dr. Martina Daňková

IT4Innovations Center of Excellence, a branch of the University of Ostrava, Czech Republic

Dr. Martina Daňková works at the IT4Innovations Centre of Excellence, a branch of the University of Ostrava belonging to the IT4Innovations National Supercomputing Center (NSC IT4I), an inter-university research center established in 2011 under Priority Axis I (European Centres of Excellence) of the Operational Programme Research and Development for Innovation. The Centre participates in projects related to knowledge transfer and cross-sectoral cooperation, e.g. “AI-Met4Laser: Industrial research consortium and development of new laser applications and technologies using artificial intelligence methods”. Its activities in the field of medical sciences include the OP PIK CZ.01.1.02/0.0/0.0/21_374/0026654 project “Methods of artificial intelligence for automatic analysis of medical images”, which focuses on fast automatic diagnosis of neurodegenerative diseases (mainly Parkinson’s disease) from image sources (MRI, CT, ultrasound), with the analysis performed with high precision by an AI system.
Dr. Martina Daňková will provide valuable support for the developed computational intelligence system and its theoretical foundations, with particular focus on models of knowledge representation and linguistic terms based on fuzzy logic.

Dr. Jose Sousa

Sano-Centre for Computational Personalised Medicine

Dr. Jose Sousa of the Sano Centre for Computational Personalised Medicine has over 20 years of experience in managing IT systems and research and development projects of great social importance.
Dr. Sousa is interested in establishing cooperation on explainable artificial intelligence models that support doctors and enable more effective medical interventions, and in supporting the dissemination of ExplainMe results within the Sano network, an international research foundation in the field of computational medicine established under the IRAP programme of the Foundation for Polish Science.

Dr. Hab. Gabriella Casalino

Laboratory of Computational Intelligence at the Faculty of Computer Science, University of Bari, Italy

Gabriella Casalino is an Assistant Professor at the Computational Intelligence Laboratory of the Computer Science Department of the University of Bari. She participates in numerous research and development projects focused on explainable IT solutions in the fields of medicine and education.
Prof. Casalino is actively involved in the IT community: she is a senior member of the IEEE, a deputy editor of the Journal of Intelligent and Fuzzy Systems, and a guest co-editor for several journals.
Prof. Casalino’s research activity focuses on explainable computational intelligence algorithms, with particular emphasis on the exploration of time-ordered sensor data. In the light of the European Commission’s AI Act, the development of explainable methods is mandatory, especially in medicine, which is classified as a high-risk field.
Prof. Casalino will contribute to the scientific work on explainable artificial intelligence algorithms.
Prof. Casalino also has extensive experience with effective supervised and semi-supervised learning algorithms for uncertain data streams.