Trustworthiness is a crucial consideration in modern AI decision-making systems. As in human interaction, trustworthiness in robotic systems can be improved through transparency and explainability. This project aims to identify factors that may undermine the trustworthiness of the data and models used in human-robot interaction, and to investigate both common and application-specific trustworthiness issues related to fairness and safety in human-robot interaction. An AI trustworthiness model will be developed and validated to ensure that both the data and the models of human interaction are robust, particularly in the selected industry use cases.
The postdoc position is linked to the Deep Data Mining research group, which focuses on fusing data science and artificial intelligence and on developing AI trustworthiness models (e.g., fairness, privacy). The project is part of the EU project XSCAVE, whose ambition is the large-scale deployment of autonomous heavy mobile machines in the earthmoving, forestry and urban logistics industries. The XSCAVE consortium involves eleven partners from across Europe, offering excellent opportunities for international exchange and collaboration with leading research groups and companies in the fields of AI robotics, large language modelling, simulation, mobile robotics and off-road heavy equipment.