Responsible AI

Contact person: Miria Grisot    
Keywords: responsible AI, human-centred AI, accountability, human values    
Research group: Digital Innovation (DIN)
Department of Informatics
 

The overarching aim of Responsible AI is to ensure societal well-being by preventing loss of control for users and developers, as well as bias and discrimination against the human beings involved. Responsible AI is the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of trust and adheres to fundamental human values. Given the major impact that AI can have, it is important to reflect on, discuss and develop critical perspectives on Responsible AI, including research on issues of power, ideology and institutional change. A critical approach problematises and questions deep-seated assumptions about social issues, such as the freedom and social control associated with the impact of information technologies. This research is positioned in the field of Information Systems (IS) research. IS is an inherently sociotechnical discipline and is well placed to address the crossroads of humanistic, organisational and technical concerns while taking a critical perspective on Responsible AI. The aim of the research is to develop current insights from IS research further and to delineate Responsible AI in a way that balances efficiency-oriented instrumental outcomes with principle-oriented humanistic perspectives in a virtuous circle.

Research topics:

  • Research on the processes of incorporating the skills, interests and experiences of heterogeneous actors (e.g., developers, clerical workers, managers, policy makers, citizens) into the design and deployment of AI, ensuring benefits for all human beings, including future generations.
  • Research on the situated and contextual aspects of AI technology use, to better understand how to achieve synergies between humans and machines: seeking modalities that allow humans to maintain meaningful control while enjoying the benefits of trustworthy technologies (without viewing machines as moral agents).
  • Research on AI technology production processes and on the actual work of professionals with different roles in AI design, deployment and monitoring, especially studies investigating real-world tensions, conflicting demands and dilemmas, and how they are resolved.
  • Research on macrosocial and institutional mechanisms, to understand how power structures shape AI, how AI establishes or reinforces power structures, and who benefits and who may be harmed. Such value-related questions must be answered before technical solutions and human-friendly designs can be produced.

Mentoring and an internship will be offered by a relevant external partner.