Tutorials
Tutorial 1 – Neurosymbolic Visual Commonsense: On Integrated Reasoning and Learning about Space and Motion in Embodied Multimodal Interaction
Tutorial 2 – Designing Virtual Knowledge Graphs
Tutorial 3 – Coming soon
Tutorial 1
Neurosymbolic Visual Commonsense: On Integrated Reasoning and Learning about Space and Motion in Embodied Multimodal Interaction
We present recent and emerging advances in computational cognitive vision addressing artificial visual and spatial intelligence at the interface of (spatial) language, (spatial) logic, and (spatial) cognition research. With a primary focus on explainable sensemaking of dynamic visuospatial imagery, we highlight the (systematic and modular) integration of methods from knowledge representation and reasoning, computer vision, spatial informatics, and computational cognitive modelling. A key emphasis here is on generalised (declarative) neurosymbolic reasoning & learning about space, motion, actions, and events relevant to embodied multimodal interaction in ecologically valid, naturalistic everyday settings. Practically, this translates to general-purpose mechanisms for computational visual commonsense encompassing capabilities such as (neurosymbolic) semantic question-answering, relational spatio-temporal learning, and (non-monotonic) visual abduction.
The presented work is motivated by, and demonstrated against, the applied backdrop of areas as diverse as autonomous driving, cognitive robotics, the design of digital visuoauditory media, and behavioural visual perception research in cognitive psychology and neuroscience. More broadly, our emerging work is driven by an interdisciplinary research mindset addressing human-centred responsible AI through a methodological confluence of AI, Vision, Psychology, and (human-factors centred) Interaction Design.
Presented by:
Mehul Bhatt is a Professor within the School of Science and Technology at Ɩrebro University (Sweden). Broadly, his research stands at the interface of Artificial and Human Intelligence. His key basic research focusses on the formal, cognitive, and computational foundations for AI technologies, with a principal emphasis on knowledge representation, semantics, integration of reasoning & learning, explainability, spatial representation and reasoning, and computational cognitive modelling. Visuospatial cognition and computation has been an area of intense activity from the viewpoint of interdisciplinary research; here, his work in Spatial Cognition and AI particularly emphasises the study of human behaviour (i.e., embodied multimodal interaction) in naturalistic settings as a principal means of AI-technology-driven, human-centred cognitive assistance in planning, decision-making, and design situations requiring an interplay of commonsense, creative, and specialist visuospatial thinking.
Mehul Bhatt steers CoDesign Lab EU, an initiative aimed at addressing the confluence of Cognition, Artificial Intelligence, Interaction, and Design Science for the development of human-centred cognitive assistive technologies and interaction systems. Since 2014, he has directed the research and consulting group DesignSpace, and he pursues ongoing research in Cognitive Vision and Perception.
Mehul Bhatt obtained a bachelor's degree in economics (India), a master's in information technology (Australia), and a PhD in computer science (Australia). He has been a recipient of an Alexander von Humboldt Fellowship, a German Academic Exchange Service (DAAD) award, and an Australian Postgraduate Award (APA). He was the University of Bremen's nominee for the German Research Foundation (DFG) Heinz Maier-Leibnitz-Preis 2014. Previously, Mehul Bhatt was Professor at the University of Bremen (Germany).
Tutorial 2
Designing Virtual Knowledge Graphs
To be effective, complex data processing tasks, including data analytics and machine/deep learning pipelines, require access to large datasets in a coherent way. Knowledge graphs (KGs) provide a uniform data format that guarantees the required flexibility in processing and, moreover, is able to take domain knowledge into account. However, the actual data is often available only in legacy data sources, and one needs to overcome their inherent heterogeneity. The recently proposed Virtual Knowledge Graph (VKG) approach is well suited for this purpose: the KG is kept virtual, and the relevant content of the data sources is exposed by declaratively mapping it to the classes and properties of a domain ontology, which users can then query. In this talk we introduce the VKG paradigm for data access, present the challenges encountered when designing complex VKG scenarios, and discuss possible solutions, in particular the use of mapping patterns to deal with the complexity of the mapping layer and its relationship to domain ontologies.
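To illustrate the core idea, here is a minimal, hypothetical Python sketch (not part of the tutorial materials, and not Ontop's or R2RML's actual syntax): ontology terms are declaratively mapped to SQL queries over a toy relational source, and requests against the virtual graph are answered by unfolding the mapping at query time, so no data is ever materialised.

```python
import sqlite3

# Hypothetical legacy relational source (illustrative table and data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ada", "R&D"), (2, "Grace", "HR")])

# Declarative mapping layer: each ontology class/property is exposed by a SQL query.
# (A real VKG system would express this in a dedicated mapping language such as R2RML.)
MAPPING = {
    ":Employee": "SELECT 'emp/' || id AS subject FROM employee",
    ":worksIn":  "SELECT 'emp/' || id AS subject, dept AS object FROM employee",
}

def answer(term):
    """Answer a request over the virtual KG by unfolding the mapping into SQL."""
    sql = MAPPING[term]  # the KG stays virtual: nothing is materialised
    return conn.execute(sql).fetchall()

print(answer(":Employee"))  # [('emp/1',), ('emp/2',)]
print(answer(":worksIn"))   # [('emp/1', 'R&D'), ('emp/2', 'HR')]
```

The design point is that queries are posed in the vocabulary of the domain ontology while the data stays in the sources; a full-fledged VKG system such as Ontop performs this unfolding for entire SPARQL queries, translating them into SQL using the mappings and the ontology.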
Overview:
1. Motivation and VKG Solution
2. KG Components
3. Formal Semantics and Query Answering
4. Designing a VKG System
5. Conclusions
Presented by:
Diego Calvanese is a full professor at the Research Centre for Knowledge and Data (KRDB) of the Faculty of Engineering, Free University of Bozen-Bolzano (Italy), where he is the head of the Institute for Artificial Intelligence and Computer Science. From November 2019 to October 2024 he has also been Wallenberg Guest Professor in Artificial Intelligence for Data Management at UmeƄ University (Sweden). His research interests concern foundational and applied aspects of Artificial Intelligence and Databases, notably formalisms for knowledge representation and reasoning, Virtual Knowledge Graphs for data management and integration, Description Logics, the Semantic Web, and the modeling and verification of data-aware processes. He is a Fellow of the Association for Computing Machinery (ACM), of the European Association for AI (EurAI), and of the Asia-Pacific AI Association (AAIA). He is the originator and a co-founder of Ontopic, the first spin-off of the Free University of Bozen-Bolzano, founded in 2019, which develops AI-based technologies for data management and integration.