Publications (10 of 160)
Bocklandt, S., Derkinderen, V., Kimmig, A. & De Raedt, L. (2025). Approximate Compression of CNF Concepts. In: Dino Pedreschi; Anna Monreale; Riccardo Guidotti; Roberto Pellungrini; Francesca Naretto (Ed.), Discovery Science: 27th International Conference, DS 2024, Pisa, Italy, October 14–16, 2024, Proceedings, Part II. Paper presented at 27th International Conference on Discovery Science, Pisa, Italy, October 14-16, 2024 (pp. 149-164). Springer, 15244
Approximate Compression of CNF Concepts
2025 (English) In: Discovery Science: 27th International Conference, DS 2024, Pisa, Italy, October 14–16, 2024, Proceedings, Part II / [ed] Dino Pedreschi; Anna Monreale; Riccardo Guidotti; Roberto Pellungrini; Francesca Naretto, Springer, 2025, Vol. 15244, pp. 149-164. Conference paper, Published paper (Refereed)
Abstract [en]

We consider a novel concept-learning and merging task, motivated by two use-cases. The first is about merging and compressing music playlists, and the second about federated learning with data privacy constraints. Both settings involve multiple learned concepts that must be merged and compressed into a single interpretable and accurate concept description. Our concept descriptions are logical formulae in CNF, for which merging, i.e. disjoining, multiple CNFs may lead to very large concept descriptions. To make the concepts interpretable, we compress them relative to a dataset. We propose a new method named CoWC (Compression Of Weighted Cnf) that approximates a CNF by exploiting techniques of itemset mining and inverse resolution. CoWC compresses the CNF size while also considering the F1-score w.r.t. the dataset. Our empirical evaluation shows that CoWC outperforms alternative compression approaches.
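
The trade-off the abstract describes can be made concrete with a small Python sketch of the two quantities CoWC balances: the size of a CNF (here counted in literals, an illustrative choice) and its F1-score against a labelled dataset. This is only the evaluation side under assumed representations (clauses as sets of signed atoms, examples as sets of true atoms); it is not the CoWC algorithm itself.

    # Illustrative only: a CNF is a list of clauses; a clause is a set of signed
    # atoms ("a" or "-a"); an example is the set of atoms that are true in it.
    def satisfies(cnf, example):
        return all(
            any((lit.lstrip("-") in example) != lit.startswith("-") for lit in clause)
            for clause in cnf
        )

    def cnf_size(cnf):
        return sum(len(clause) for clause in cnf)   # one possible size measure

    def f1(cnf, examples, labels):
        preds = [satisfies(cnf, ex) for ex in examples]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

A compressor in the spirit of the abstract would search for a smaller CNF whose F1 on the dataset stays close to that of the disjoined original.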

Place, publisher, year, edition, pages
Springer, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Concept learning, Formula compression
HSV category
Identifiers
urn:nbn:se:oru:diva-120608 (URN); 10.1007/978-3-031-78980-9_10 (DOI); 001447234300010 (); 2-s2.0-85219193083 (Scopus ID); 9783031789793 (ISBN); 9783031789809 (ISBN)
Conference
27th International Conference on Discovery Science, Pisa, Italy, October 14-16, 2024
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

D was supported by the EU H2020 ICT48 project “TAILOR” under contract #952215. This research received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. LDR is also supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2025-04-15 Created: 2025-04-15 Last updated: 2025-04-15 Bibliographically approved
Hazra, R., Venturato, G., Zuidberg dos Martires, P. & De Raedt, L. (2025). Can Large Language Models Reason? A Characterization via 3-SAT. In: : . Paper presented at 13th International Conference on Learning Representations (ICLR 2025), Singapore, April 24-28, 2025.
Can Large Language Models Reason? A Characterization via 3-SAT
2025 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Large Language Models (LLMs) have been touted as AI models possessing advanced reasoning abilities. However, recent works have shown that LLMs often bypass true reasoning using shortcuts, sparking skepticism. To study the reasoning capabilities in a principled fashion, we adopt a computational theory perspective and propose an experimental protocol centered on 3-SAT – the prototypical NP-complete problem lying at the core of logical reasoning and constraint satisfaction tasks. Specifically, we examine the phase transitions in random 3-SAT and characterize the reasoning abilities of LLMs by varying the inherent hardness of the problem instances. Our experimental evidence shows that LLMs are incapable of performing true reasoning, as required for solving 3-SAT problems. Moreover, we observe significant performance variation based on the inherent hardness of the problems – performing poorly on harder instances and vice versa. Importantly, we show that integrating external reasoners can considerably enhance LLM performance. By following a principled experimental protocol, our study draws concrete conclusions and moves beyond the anecdotal evidence often found in LLM reasoning research.
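
As a rough illustration of the kind of instances such a protocol varies (an assumed setup, not the paper's actual harness), the sketch below generates random 3-SAT formulas at a chosen clause-to-variable ratio and checks satisfiability by brute force; for random 3-SAT the hardness peak sits near a ratio of about 4.27, which is where the phase transition occurs.

    import itertools
    import random

    def random_3sat(n_vars, ratio=4.27, seed=0):
        """Random 3-CNF with round(ratio * n_vars) clauses; literals are signed ints."""
        rng = random.Random(seed)
        clauses = []
        for _ in range(round(ratio * n_vars)):
            vs = rng.sample(range(1, n_vars + 1), 3)
            clauses.append([v if rng.random() < 0.5 else -v for v in vs])
        return clauses

    def brute_force_sat(clauses, n_vars):
        """Return a satisfying assignment or None (exponential; fine for tiny n)."""
        for bits in itertools.product([False, True], repeat=n_vars):
            assign = {i + 1: b for i, b in enumerate(bits)}
            if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
                return assign
        return None

    clauses = random_3sat(n_vars=10)
    print(brute_force_sat(clauses, 10) is not None)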

HSV category
Identifiers
urn:nbn:se:oru:diva-123280 (URN); 10.48550/arXiv.2408.07215 (DOI)
Conference
13th International Conference on Learning Representations (ICLR 2025), Singapore, April 24-28, 2025
Note

Published at ICLR 2025 Workshop on Reasoning and Planning for LLMs

Available from: 2025-09-01 Created: 2025-09-01 Last updated: 2025-09-01 Bibliographically approved
Debot, D., Venturato, G., Marra, G. & De Raedt, L. (2025). Neurosymbolic Reinforcement Learning: Playing MiniHack With Probabilistic Logic Shields. In: Walsh, T; Shah, J; Kolter, Z (Ed.), Proceedings of the AAAI Conference on Artificial Intelligence: . Paper presented at 39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025 (pp. 29631-29633). AAAI Press
Neurosymbolic Reinforcement Learning: Playing MiniHack With Probabilistic Logic Shields
2025 (English) In: Proceedings of the AAAI Conference on Artificial Intelligence / [ed] Walsh, T; Shah, J; Kolter, Z, AAAI Press, 2025, pp. 29631-29633. Conference paper, Published paper (Refereed)
Abstract [en]

Probabilistic logic shields integrate deep reinforcement learning (RL) with probabilistic logic reasoning to train agents that operate in uncertain environments while giving strong guarantees with respect to logical constraints, such as safety properties. In this demo paper, we introduce a codebase that streamlines the design of custom MiniHack environments where neurosymbolic RL agents leverage probabilistic logic shields to learn safe and interpretable policies with strong guarantees. Our framework allows expert users to easily define and train agents that integrate deep neural policies with probabilistic logic in arbitrarily complex games: from simple exploration to planning and interacting with enemies. Additionally, we provide a web-based platform that showcases our application, offering an interactive interface for the broader community to experiment with and explore the capabilities of neurosymbolic reinforcement learning. This lowers the barrier for researchers and developers, making it accessible for a wider audience to engage with safety-critical RL scenarios.
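
The shielding step itself can be pictured as reweighting the neural policy's action distribution by per-action safety probabilities and renormalising. In the actual system those probabilities come from probabilistic logic inference over the game state; the sketch below simply takes them as given numbers, so it is a simplification rather than the paper's implementation.

    import numpy as np

    def shield(policy_probs, safety_probs):
        """Shielded policy: weight each action by P(safe | state, action) and
        renormalise, so probably-unsafe actions lose probability mass."""
        weighted = np.asarray(policy_probs) * np.asarray(safety_probs)
        return weighted / weighted.sum()

    # The raw policy prefers action 2, but the shield deems it likely unsafe.
    print(shield([0.2, 0.3, 0.5], [0.9, 0.8, 0.1]))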

Place, publisher, year, edition, pages
AAAI Press, 2025
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; Vol. 39, no 28
HSV category
Identifiers
urn:nbn:se:oru:diva-122622 (URN); 10.1609/aaai.v39i28.35349 (DOI); 001477477000212 (); 9781577358978 (ISBN)
Conference
39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon Europe, 101142702
Note

DD is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen, 1185125N). This research has also received funding from the KU Leuven Research Funds (STG/22/021, CELSA/24/008, C14/24/092), from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme, from the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and from the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme (grant agreement no. 101142702).

Available from: 2025-08-01 Created: 2025-08-01 Last updated: 2025-08-01 Bibliographically approved
De Smet, L., Venturato, G., De Raedt, L. & Marra, G. (2025). Relational Neurosymbolic Markov Models. In: Walsh, T; Shah, J; Kolter, Z (Ed.), Proceedings of the AAAI Conference on Artificial Intelligence: . Paper presented at 39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025 (pp. 16181-16189). AAAI Press, 39:15
Relational Neurosymbolic Markov Models
2025 (English) In: Proceedings of the AAAI Conference on Artificial Intelligence / [ed] Walsh, T; Shah, J; Kolter, Z, AAAI Press, 2025, Vol. 39:15, pp. 16181-16189. Conference paper, Published paper (Refereed)
Abstract [en]

Sequential problems are ubiquitous in AI, such as in reinforcement learning or natural language processing. State-of-the-art deep sequential models, like transformers, excel in these settings but fail to guarantee the satisfaction of constraints necessary for trustworthy deployment. In contrast, neurosymbolic AI (NeSy) provides a sound formalism to enforce constraints in deep probabilistic models but scales exponentially on sequential problems. To overcome these limitations, we introduce relational neurosymbolic Markov models (NeSy-MMs), a new class of end-to-end differentiable sequential models that integrate and provably satisfy relational logical constraints. We propose a strategy for inference and learning that scales on sequential settings, and that combines approximate Bayesian inference, automated reasoning, and gradient estimation. Our experiments show that NeSy-MMs can solve problems beyond the current state-of-the-art in neurosymbolic AI and still provide strong guarantees with respect to desired properties. Moreover, we show that our models are more interpretable and that constraints can be adapted at test time to out-of-distribution scenarios.

Code - https://github.com/ML-KULeuven/nesy-mm

Extended version - https://arxiv.org/abs/2412.13023

Place, publisher, year, edition, pages
AAAI Press, 2025
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; Vol. 39, no 15
HSV category
Identifiers
urn:nbn:se:oru:diva-122560 (URN); 10.1609/aaai.v39i15.33777 (DOI); 001477532300102 (); 9781577358978 (ISBN)
Conference
39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon Europe, 101142702
Note

This research has also received funding from the KU Leuven Research Funds (C14/24/092, STG/22/021), from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme, from the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement no. 101142702).

Available from: 2025-07-30 Created: 2025-07-30 Last updated: 2025-07-30 Bibliographically approved
Maene, J. & De Raedt, L. (2025). The Gradient of Algebraic Model Counting. In: Toby Walsh; Julie Shah; Zico Kolter (Ed.), Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence: AAAI-25 Technical Tracks 18. Paper presented at 39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025 (pp. 19367-19377). AAAI Press, 39
The Gradient of Algebraic Model Counting
2025 (English) In: Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence: AAAI-25 Technical Tracks 18 / [ed] Toby Walsh; Julie Shah; Zico Kolter, AAAI Press, 2025, Vol. 39, pp. 19367-19377. Conference paper, Published paper (Refereed)
Abstract [en]

Algebraic model counting unifies many inference tasks on logic formulas by exploiting semirings. Rather than focusing on inference, we consider learning, especially in statistical-relational and neurosymbolic AI, which combine logical, probabilistic and neural representations. Concretely, we show that the very same semiring perspective of algebraic model counting also applies to learning. This allows us to unify various learning algorithms by generalizing gradients and backpropagation to different semirings. Furthermore, we show how cancellation and ordering properties of a semiring can be exploited for more memory-efficient backpropagation. This allows us to obtain some interesting variations of state-of-the-art gradient-based optimisation methods for probabilistic logical models. We also discuss why algebraic model counting on tractable circuits does not lead to more efficient second-order optimization. Empirically, our algebraic backpropagation exhibits considerable speed-ups as compared to existing approaches.

Code - https://github.com/ML-KULeuven/amc-grad
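
For intuition about how "the very same semiring perspective also applies to learning", the classic gradient semiring (essentially forward-mode dual numbers) evaluates a circuit over pairs (value, derivative) and returns the weighted model count together with its gradient. The sketch below is this standard construction, not the paper's backpropagation algorithm or its memory-efficient variants.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grad:
        """Semiring element (value, d value / d theta)."""
        val: float
        der: float

        def __add__(self, other):   # sum node (logical OR)
            return Grad(self.val + other.val, self.der + other.der)

        def __mul__(self, other):   # product node (logical AND), product rule
            return Grad(self.val * other.val,
                        self.val * other.der + self.der * other.val)

    theta = 0.3
    a = Grad(theta, 1.0)           # literal whose weight is the parameter theta
    not_a = Grad(1 - theta, -1.0)
    b = Grad(0.6, 0.0)             # literal independent of theta

    # Weighted model count of (a AND b) OR (NOT a), plus its derivative w.r.t. theta.
    out = a * b + not_a
    print(out.val, out.der)        # 0.88 -0.4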

Place, publisher, year, edition, pages
AAAI Press, 2025
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; Vol. 39, no 18
HSV category
Identifiers
urn:nbn:se:oru:diva-122715 (URN); 10.1609/aaai.v39i18.34132 (DOI); 001477525800081 (); 9781577358978 (ISBN)
Conference
39th AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, February 25 - March 4, 2025
Research funder
EU, Horizon 2020, 101142702; Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

This research received funding from the Flemish Government (AI Research Program), the Flanders Research Foundation (FWO) under project G097720N, KUL Research Fund iBOF/21/075, and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 101142702). Luc De Raedt is also supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2025-08-13 Created: 2025-08-13 Last updated: 2025-08-13 Bibliographically approved
Zuidberg dos Martires, P., Derkinderen, V., De Raedt, L. & Krantz, M. (2024). Automated Reasoning in Systems Biology: A Necessity for Precision Medicine. In: Pierre Marquis; Magdalena Ortiz; Maurice Pagnucco (Ed.), Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning: . Paper presented at 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), Hanoi, Vietnam, November 2-8, 2024 (pp. 974-980). AAAI Press
Automated Reasoning in Systems Biology: A Necessity for Precision Medicine
2024 (English) In: Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning / [ed] Pierre Marquis; Magdalena Ortiz; Maurice Pagnucco, AAAI Press, 2024, pp. 974-980. Conference paper, Published paper (Refereed)
Abstract [en]

Recent developments in AI have reinvigorated pursuits to advance the (life) sciences using AI techniques, thereby creating a renewed opportunity to bridge different fields and find synergies. Headlines for AI and the life sciences have been dominated by data-driven techniques, for instance, to solve protein folding with next to no expert knowledge. In contrast to this, we argue for the necessity of a formal representation of expert knowledge -- either to develop explicit scientific theories or to compensate for the lack of data. Specifically, we argue that the fields of knowledge representation (KR) and systems biology (SysBio) exhibit important overlaps that have been largely ignored so far. This, in turn, means that relevant scientific questions are ready to be answered using the right domain knowledge (SysBio), encoded in the right way (SysBio/KR), and by combining it with modern automated reasoning tools (KR). Hence, the formal representation of domain knowledge is a natural meeting place for SysBio and KR. On the one hand, we argue that such an interdisciplinary approach will advance the field SysBio by exposing it to industrial-grade reasoning tools and thereby allowing novel scientific questions to be tackled. On the other hand, we see ample opportunities to move the state-of-the-art in KR by tailoring KR methods to the field of SysBio, which comes with challenging problem characteristics, e.g., scale, partial knowledge, noise, or sub-symbolic data. We stipulate that this proposed interdisciplinary research is necessary to attain a prominent long-term goal in the health sciences: precision medicine.

Place, publisher, year, edition, pages
AAAI Press, 2024
Series
Proceedings of the Conference on Principles of Knowledge Representation and Reasoning (KR), ISSN 2334-1025, E-ISSN 2334-1033
HSV category
Identifiers
urn:nbn:se:oru:diva-117556 (URN); 10.24963/kr.2024/91 (DOI); 2-s2.0-85213784341 (Scopus ID); 9781956792058 (ISBN)
Conference
21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), Hanoi, Vietnam, November 2-8, 2024
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation; EU, Horizon 2020, #952215; Knowledge Foundation, 20200017; Örebro University
Note

This work was supported by the Wallenberg AI Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, by the EU H2020 ICT48 project “TAILOR” under contract #952215, and by the Exploring Inflammation in Health and Disease (X-HiDE) Consortium, which is a strategic research profile at Örebro University supported by the Knowledge Foundation (20200017), and by strategic grants from Örebro University.

Available from: 2024-12-03 Created: 2024-12-03 Last updated: 2025-02-04 Bibliographically approved
Abraham, S. S., Alirezaie, M. & De Raedt, L. (2024). CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments. In: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings: . Paper presented at Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024, Torino, Italy, May 20-25, 2024 (pp. 3297-3313). European Language Resources Association (ELRA)
CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments
2024 (English) In: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings, European Language Resources Association (ELRA), 2024, pp. 3297-3313. Conference paper, Published paper (Refereed)
Abstract [en]

The integration of learning and reasoning is high on the research agenda in AI. Nevertheless, little attention has been paid to using existing background knowledge for reasoning about partially observed scenes in order to answer questions about them. Yet, we as humans use such knowledge frequently to infer plausible answers to visual questions (by eliminating all inconsistent ones). Such knowledge often comes in the form of constraints about objects and it tends to be highly domain or environment-specific. We contribute a novel benchmark called CLEVR-POC for reasoning-intensive visual question answering (VQA) in partially observable environments under constraints. In CLEVR-POC, knowledge in the form of logical constraints needs to be leveraged to generate plausible answers to questions about a hidden object in a given partial scene. For instance, if one has the knowledge that all cups are colored either red, green or blue and that there is only one green cup, it becomes possible to deduce the color of an occluded cup as either red or blue, provided that all other cups, including the green one, are observed. Through experiments, we observe that the low performance of pre-trained vision language models like CLIP (≈ 22%) and a large language model (LLM) like GPT-4 (≈ 46%) on CLEVR-POC ascertains the necessity for frameworks that can handle reasoning-intensive tasks where environment-specific background knowledge is available and crucial. Furthermore, our demonstration illustrates that a neuro-symbolic model, which integrates an LLM like GPT-4 with a visual perception network and a formal logical reasoner, exhibits exceptional performance on CLEVR-POC.
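
The cup example in the abstract boils down to a few lines of constraint elimination. CLEVR-POC itself states such constraints logically and delegates the inference to a formal reasoner; the toy enumeration below (plain Python, my own simplification) only mimics that step.

    COLORS = {"red", "green", "blue"}   # constraint: every cup has one of these colors

    def plausible_colors(observed_cup_colors):
        """Colors still consistent for the single occluded cup, assuming at most
        one cup in the scene may be green."""
        candidates = set(COLORS)
        if "green" in observed_cup_colors:   # the one permitted green cup is visible
            candidates.discard("green")
        return candidates

    # All other cups, including the green one, are observed -> hidden cup is red or blue.
    print(plausible_colors(["red", "green", "blue", "red"]))   # {'red', 'blue'}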

Place, publisher, year, edition, pages
European Language Resources Association (ELRA), 2024
Keywords
LLM and Reasoning, logical constraints, partial observability, visual question answering, Computational linguistics, Visual languages, Background knowledge, Language model, Large language model and reasoning, Partially observable environments, Performance, Question Answering, Research agenda, Knowledge management
HSV category
Identifiers
urn:nbn:se:oru:diva-118582 (URN); 2-s2.0-85195916891 (Scopus ID); 9782493814104 (ISBN)
Conference
Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024, Torino, Italy, May 20-25, 2024
Available from: 2025-01-16 Created: 2025-01-16 Last updated: 2025-01-16 Bibliographically approved
Zuidberg dos Martires, P., De Raedt, L. & Kimmig, A. (2024). Declarative probabilistic logic programming in discrete-continuous domains. Artificial Intelligence, 337, Article ID 104227.
Declarative probabilistic logic programming in discrete-continuous domains
2024 (English) In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 337, article id 104227. Article in journal (Refereed) Published
Abstract [en]

Over the past three decades, the logic programming paradigm has been successfully expanded to support probabilistic modeling, inference and learning. The resulting paradigm of probabilistic logic programming (PLP) and its programming languages owes much of its success to a declarative semantics, the so-called distribution semantics. However, the distribution semantics is limited to discrete random variables only. While PLP has been extended in various ways for supporting hybrid, that is, mixed discrete and continuous random variables, we are still lacking a declarative semantics for hybrid PLP that not only generalizes the distribution semantics and the modeling language but also the standard inference algorithm that is based on knowledge compilation. We contribute the measure semantics together with the hybrid PLP language DC-ProbLog (where DC stands for distributional clauses) and its inference engine infinitesimal algebraic likelihood weighting (IALW). These have the original distribution semantics, standard PLP languages such as ProbLog, and standard inference engines for PLP based on knowledge compilation as special cases. Thus, we generalize the state of the art of PLP towards hybrid PLP in three different aspects: semantics, language and inference. Furthermore, IALW is the first inference algorithm for hybrid probabilistic programming based on knowledge compilation.
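
To ground the terminology, here is plain likelihood weighting on a toy discrete-continuous model (a hypothetical temperature/alarm example of my own, not taken from the paper): sample the continuous variable from its prior and weight each sample by the likelihood of the discrete evidence. DC-ProbLog's IALW engine builds on this weighting idea but operates declaratively over compiled representations; the sketch shows only the basic sampling scheme.

    import math
    import random

    def lw_posterior(n_samples=100_000, seed=0):
        """Estimate P(temp > 25 | alarm) where temp ~ Normal(20, 5) and
        P(alarm | temp) = sigmoid(temp - 25), using likelihood weighting."""
        rng = random.Random(seed)
        num = den = 0.0
        for _ in range(n_samples):
            temp = rng.gauss(20.0, 5.0)                  # sample the continuous prior
            w = 1.0 / (1.0 + math.exp(-(temp - 25.0)))   # weight by evidence likelihood
            den += w
            if temp > 25.0:
                num += w
        return num / den

    print(lw_posterior())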

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Probabilistic programming, Declarative semantics, Discrete-continuous distributions, Likelihood weighting, Logic programming, Knowledge compilation, Algebraic model counting
HSV category
Identifiers
urn:nbn:se:oru:diva-116987 (URN); 10.1016/j.artint.2024.104227 (DOI); 001331460000001 (); 2-s2.0-85205363740 (Scopus ID)
Available from: 2024-10-24 Created: 2024-10-24 Last updated: 2024-10-24 Bibliographically approved
Marra, G., Dumancic, S., Manhaeve, R. & De Raedt, L. (2024). From statistical relational to neurosymbolic artificial intelligence: A survey. Artificial Intelligence, 328, Article ID 104062.
From statistical relational to neurosymbolic artificial intelligence: A survey
2024 (English) In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 328, article id 104062. Article in journal (Refereed) Published
Abstract [en]

This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model or proofbased; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Neurosymbolic AI, Statistical relational AI, Learning and reasoning, Probabilistic logics
HSV category
Identifiers
urn:nbn:se:oru:diva-112584 (URN); 10.1016/j.artint.2023.104062 (DOI); 001173882500001 (); 2-s2.0-85183330773 (Scopus ID)
Research funder
EU, Horizon 2020; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

This work has received funding from the Research Foundation-Flanders (FWO) (G. Marra: 1239422N, S. Dumančić: 12ZE520N, R. Manhaeve: 1S61718N). Luc De Raedt has received funding from the Flemish Government (AI Research Program), from the FWO, from the KU Leuven Research Fund (C1418062), from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 694980 SYNTH: Synthesising Inductive Data Models) and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This work was also supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215.

Available from: 2024-03-25 Created: 2024-03-25 Last updated: 2024-03-25 Bibliographically approved
Venturato, G., Derkinderen, V., Zuidberg dos Martires, P. & De Raedt, L. (2024). Inference and Learning in Dynamic Decision Networks Using Knowledge Compilation. In: Michael Wooldridge; Jennifer Dy; Sriraam Natarajan (Ed.), Proceedings of the 38th AAAI Conference on Artificial Intelligence: . Paper presented at 38th AAAI Conference on Artificial Intelligence (AAAI) / 36th Conference on Innovative Applications of Artificial Intelligence / 14th Symposium on Educational Advances in Artificial Intelligence, Vancouver, Canada, February 20-27, 2024 (pp. 20567-20576). AAAI Press, 38
Inference and Learning in Dynamic Decision Networks Using Knowledge Compilation
2024 (English) In: Proceedings of the 38th AAAI Conference on Artificial Intelligence / [ed] Michael Wooldridge; Jennifer Dy; Sriraam Natarajan, AAAI Press, 2024, Vol. 38, pp. 20567-20576. Conference paper, Published paper (Refereed)
Abstract [en]

Decision making under uncertainty in dynamic environments is a fundamental AI problem in which agents need to determine which decisions (or actions) to make at each time step to maximise their expected utility. Dynamic decision networks (DDNs) are an extension of dynamic Bayesian networks with decisions and utilities, and can be used to compactly represent Markov decision processes (MDPs). We propose a novel algorithm called mapl-cirup that leverages knowledge compilation techniques developed for (dynamic) Bayesian networks to perform inference and gradient-based learning in DDNs. Specifically, we knowledge-compile the Bellman update present in DDNs into dynamic decision circuits and evaluate them within an (algebraic) model counting framework. In contrast to other exact symbolic MDP approaches, we obtain differentiable circuits that enable gradient-based parameter learning.
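
For reference, the Bellman update that gets compiled is the standard one, V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) * V(s') ]. The dense tabular version below (with made-up numbers) is only the textbook baseline; mapl-cirup instead evaluates this update on knowledge-compiled dynamic decision circuits, which is what makes it differentiable.

    import numpy as np

    def value_iteration(P, R, gamma=0.9, iters=100):
        """Tabular Bellman updates over an explicit MDP with transition tensor
        P[s, a, s'] and reward matrix R[s, a]."""
        V = np.zeros(P.shape[0])
        for _ in range(iters):
            Q = R + gamma * (P @ V)    # Q[s, a] = R[s, a] + gamma * E[V(next state)]
            V = Q.max(axis=1)
        return V

    # Tiny 2-state, 2-action MDP with hypothetical numbers, just to run the update.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.0, 1.0], [0.5, 0.5]]])
    R = np.array([[0.0, 1.0],
                  [0.5, 0.0]])
    print(value_iteration(P, R))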

Place, publisher, year, edition, pages
AAAI Press, 2024
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; 38:18
HSV category
Identifiers
urn:nbn:se:oru:diva-115497 (URN); 10.1609/aaai.v38i18.30042 (DOI); 001241509500088 (); 2-s2.0-85189535865 (Scopus ID); 9781577358879 (ISBN)
Conference
38th AAAI Conference on Artificial Intelligence (AAAI) / 36th Conference on Innovative Applications of Artificial Intelligence / 14th Symposium on Educational Advances in Artificial Intelligence, Vancouver, Canada, February 20-27, 2024
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

This work was supported by the KU Leuven Research Fund (C14/18/062), the Research Foundation-Flanders (FWO, 1SA5520N), the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme, the EU H2020 ICT48 project “TAILOR” under contract #952215, and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-08-21 Created: 2024-08-21 Last updated: 2024-08-21 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-6860-6303