Invited speakers

Torsten Schaub received his diploma and dissertation in informatics in 1990 and 1992, respectively, from the Technical University of Darmstadt, Germany, and his habilitation in informatics in 1995 from the University of Rennes I, France. From 1990 to 1993 he was a research assistant at the Technical University of Darmstadt. From 1993 to 1995, he was a research associate at IRISA/INRIA in Rennes. In 1995 he became University Professor at the University of Angers. Since 1997, he has been University Professor for knowledge processing and information systems at the University of Potsdam. From 2014 to 2019, Torsten Schaub held an Inria International Chair at Inria Rennes – Bretagne Atlantique. He became a fellow of the European Association for Artificial Intelligence (EurAI) in 2012. From 2014 to 2019 he served as President of the Association for Logic Programming, and he was program (co-)chair of LPNMR’09, ICLP’10, ECAI’14, and the upcoming KR’25. The research interests of Torsten Schaub range from the theoretical foundations to the practical implementation of reasoning from incomplete, inconsistent, and evolving information. His particular research focus lies on Answer Set Programming and materializes at potassco.org, the home of the open source project Potassco, bundling software for Answer Set Programming developed at the University of Potsdam. Last but not least, Torsten Schaub is managing and scientific director at Potassco Solutions GmbH.

Nina Gierasimczuk is an associate professor at the Department of Applied Mathematics and Computer Science of the Technical University of Denmark (DTU Compute). Her main research interest lies in the logical aspects of learning in both single- and multi-agent contexts, and involves formal epistemology, formal learning theory, dynamic epistemic logic, computability theory, belief revision, and multi-agent systems. She is also studying the role of logic and logical modeling in cognitive science.

Magdalena Ortiz is a professor at the Faculty of Informatics at TU Wien and a member of the Institute of Logic and Computation (E192), Knowledge Based Systems Group. Her research is in the field of Knowledge Representation and Reasoning, with particular emphasis on Description Logics (DLs). She is interested in the use of Description Logics in data access and data management. One of the central topics of her research is studying combinations of database-inspired query languages and DLs, with emphasis on identifying combinations with favourable computational properties.

Anni-Yasmin Turhan is a full professor for knowledge representation at the Computer Science Institute of Paderborn University. She studied Informatics in Hamburg and attained her PhD and second doctorate (habilitation) from Technische Universität Dresden, where she held a tenured position as senior research and teaching fellow. Anni-Yasmin Turhan has been a (senior) PC member for IJCAI, AAAI, ECAI, and KR, and (co-)PC chair for the German AI conference as well as for RuleML+RR. She has been appointed co-General Chair of the upcoming edition of Declarative AI’25.

Her research is dedicated to reasoning in Description Logics, with a focus on reasoning under inconsistency-tolerant semantics, i.e., under defeasible or repair semantics. Furthermore, she investigates forms of reasoning that admit controlled vagueness, such as rough description logics or reasoning under approximate semantics. The methods that she develops make reasoning robust against inaccuracies in the data.

Nina Narodytska is a researcher at VMware Research by Broadcom. Prior to VMware, she was a researcher at Samsung Research America. She completed postdoctoral studies in the Carnegie Mellon University School of Computer Science and the University of Toronto. She received her PhD from the University of New South Wales. She was named one of “AI’s 10 to Watch” researchers in the field of AI in 2013. Her primary research interests lie at the intersection of formal methods and machine learning. She has worked on verification and explainability of ML models. Recently, she has focused on using large language models to enhance program verification and leveraging automated reasoning to improve the reasoning abilities of large language models.