My research interests revolve around the problem of combining multiple modalities, such as images and text, in machine learning from a language understanding point of view. During my Master's thesis I studied knowledge graphs, a topic I have recently returned to as a complement to deep learning approaches. Some of my research has focused on probing semantic language representations to better understand which concepts they capture.
Currently, I am dipping my toes into neuro-symbolic models, which draw on the strengths of both deep learning and more traditional learning methods. The use of knowledge graphs and logical inference is of special interest. Further down the road, the aim is to apply this in a multimodal language grounding setting.
Apart from a general interest in most areas of computing science, my other interests include ethical AI, music, and vintage vehicles.
I currently have no teaching duties while I focus on my research, but I have been a teaching assistant and lecturer for many courses at the department, most recently on AI and machine learning. Others include courses on distributed systems, data communications, interaction and design, and programming.
I have supervised several Master's and Bachelor's students in their thesis work, and I am open to more such collaborations.