The social context in which AI systems function becomes more important as those systems grow more autonomous and make more consequential decisions.
We investigate computational models of social concepts such as norms, practices, and organizations, so that AI systems can be aware of their social context and behave as people expect them to. For example, we expect a care robot to insist that a patient take his medicine, but not while he is on the phone with his partner. And when we interact with a chatbot to apply for a licence to hunt moose, it should be able to explain why a licence is not granted.
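The care-robot example can be read as a norm whose obligation is suspended in certain social contexts. A minimal sketch of that idea, assuming a hypothetical representation (the `Norm` class and context labels below are illustrative, not our actual model):

```python
from dataclasses import dataclass, field

@dataclass
class Norm:
    """An obligation that holds unless a blocking social context is active."""
    action: str                                   # what the agent ought to do
    blocking_contexts: set = field(default_factory=set)  # contexts that suspend it

    def applies(self, current_contexts: set) -> bool:
        # The obligation holds only if no blocking context is currently active.
        return not (self.blocking_contexts & current_contexts)

# The care-robot example: remind the patient, but not during a private call.
remind = Norm("remind patient to take medicine",
              blocking_contexts={"private phone call"})

print(remind.applies({"daytime"}))            # True: the robot should insist
print(remind.applies({"private phone call"})) # False: the norm is suspended
```

The point of such models is that the same norm yields different expected behaviour depending on the social situation the agent recognizes, rather than being a fixed rule.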
We apply the resulting theory in numerous applications, ranging from chatbots and social robotics (as in the examples above) to social simulations for policy makers and strategic organizational decisions.