2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
Reasoning and decision-making are foundational challenges in artificial intelligence (AI). These processes are closely linked – an intelligent agent must reason about its environment and goals in order to make decisions and select actions. Two principal frameworks for sequential decision-making are AI planning and reinforcement learning (RL). Planning assumes access to a known model of the environment and uses symbolic representations to compute a sequence of actions that leads from an initial state to a desired goal. In contrast, RL focuses on learning behavior through interaction, enabling agents to develop policies that maximize long-term reward under uncertainty. Despite methodological differences, both approaches aim to generate intelligent, goal-directed action sequences.
The rise of Large Language Models (LLMs) has sparked significant interest in their potential to perform reasoning, planning, and decision-making tasks. Despite their impressive performance in natural language understanding and generalization, there is growing skepticism about whether LLMs genuinely reason or merely leverage statistical correlations. This dissertation investigates this question through a principled evaluation grounded in computational theory, using 3-SAT – the canonical NP-complete problem – as a testbed. The findings demonstrate that LLMs fail to exhibit sound and complete reasoning, especially on complex instances where shallow heuristics fail, and that their apparent reasoning abilities often stem from overfitting to statistical patterns.
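To make this evaluation setup concrete, the sketch below is an illustration based on the abstract, not code from the thesis: it samples random 3-SAT formulas near the hard clause-to-variable ratio and decides them exactly by brute force. An exact oracle of this kind is what allows a model's answers to be scored for soundness and completeness. The function names, the ratio 4.26, and the instance sizes are assumptions chosen for illustration.

```python
# Illustrative sketch (not from the thesis): random 3-SAT instances near
# the satisfiability phase transition, decided exactly by brute force as
# a ground-truth oracle for scoring an LLM's answers.
import itertools
import random

def random_3sat(num_vars: int, num_clauses: int, seed: int = 0):
    """Sample a random 3-SAT formula as a list of 3-literal clauses.

    A literal is +v or -v for variable v in 1..num_vars.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in variables))
    return clauses

def brute_force_sat(num_vars: int, clauses) -> bool:
    """Exact satisfiability check; exponential, fine for small num_vars."""
    for bits in itertools.product([False, True], repeat=num_vars):
        # A clause is satisfied if at least one of its literals is true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

if __name__ == "__main__":
    n = 12
    # A clause-to-variable ratio around 4.26 sits near the hard phase
    # transition, where shallow heuristics tend to break down.
    m = round(4.26 * n)
    formula = random_3sat(n, m, seed=42)
    print(f"{n} vars, {m} clauses, satisfiable: {brute_force_sat(n, formula)}")
```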
To address these limitations, this dissertation proposes a range of neurosymbolic architectures that combine the generative flexibility of LLMs with the rigor and reliability of symbolic methods. Empirical evaluations across planning, reward design, and plan verification tasks show that such integration yields systems that are more robust and accurate. This work advances our theoretical and practical understanding of LLM-based reasoning, provides concrete design principles for neurosymbolic systems, and charts a path toward AI agents that integrate world knowledge with logical precision.
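The abstract does not specify the architectures, but one common neurosymbolic pattern consistent with its description is generate-and-verify: the LLM proposes a candidate plan and a symbolic component either accepts it or returns feedback for another round, so correctness rests on the verifier rather than the model. The sketch below is a hypothetical minimal loop; `llm_propose_plan` and `verify_plan` are placeholder stand-ins, not the dissertation's components.

```python
# Minimal generate-and-verify sketch (an assumption, not the thesis's
# implementation): an LLM proposes plans, a symbolic verifier checks them.
from typing import Optional

def llm_propose_plan(task: str, feedback: Optional[str]) -> list[str]:
    # Placeholder: a real system would prompt an LLM here, including any
    # verifier feedback from the previous round.
    return ["pick(block_a)", "place(block_a, table)"]

def verify_plan(plan: list[str]) -> Optional[str]:
    # Placeholder symbolic check: a real system would simulate the plan
    # against a formal domain model and report the first violated
    # precondition; None means the plan is valid.
    return None if plan else "empty plan"

def solve(task: str, max_rounds: int = 3) -> Optional[list[str]]:
    feedback = None
    for _ in range(max_rounds):
        plan = llm_propose_plan(task, feedback)
        feedback = verify_plan(plan)
        if feedback is None:
            return plan  # accepted: soundness comes from the verifier
    return None  # no verified plan within the round budget

print(solve("stack block_a on the table"))
```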
Place, publisher, year, edition, pages
Örebro: Örebro University, 2025. p. 67
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 106
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-122456 (URN)
9789175296869 (ISBN)
Public defence
2025-10-17, Örebro universitet, Långhuset, Hörsal L2, Fakultetsgatan 1, Örebro, 13:00 (English)
Available from: 2025-07-22 Created: 2025-07-22 Last updated: 2025-09-04 Bibliographically approved