On the Hardness of Probabilistic Neurosymbolic Learning
2024 (English). In: Proceedings of Machine Learning Research, ML Research Press, 2024, p. 34203-34218. Conference paper, Published paper (Refereed)
Abstract [en]
The limitations of purely neural learning have sparked an interest in probabilistic neurosymbolic models, which combine neural networks with probabilistic logical reasoning. Because these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that, although approximating these gradients is intractable in general, it becomes tractable during training. Furthermore, we introduce WeightME, an unbiased gradient estimator based on model sampling. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Lastly, we evaluate whether such guarantees on the gradient are actually necessary. Our experiments indicate that existing biased approximations indeed struggle to optimize, even when exact solving is still feasible.
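The quantity differentiated in this setting is a weighted model count (WMC): the probability that a logical formula holds under independent Bernoulli variables whose probabilities are produced by the neural network. The Python sketch below illustrates only the underlying idea of unbiased gradient estimation from weighted model samples; it is not the paper's WeightME implementation. It enumerates the models of a toy formula by brute force, whereas the paper obtains such samples from SAT-based samplers with probabilistic guarantees; the example formula and all names here are hypothetical.

import itertools
import random

def satisfies(m):
    """Toy formula for illustration: (x0 or x1) and (not x1 or x2)."""
    x0, x1, x2 = m
    return (x0 or x1) and (not x1 or x2)

def weight(m, p):
    """Weight of one assignment: product of independent Bernoulli factors.
    Assumes every p_i lies strictly in (0, 1)."""
    w = 1.0
    for xi, pi in zip(m, p):
        w *= pi if xi else 1.0 - pi
    return w

# All models of the formula, found by brute force (the paper uses SAT solvers).
MODELS = [m for m in itertools.product((0, 1), repeat=3) if satisfies(m)]

def wmc(p):
    """Weighted model count: total probability mass of satisfying assignments."""
    return sum(weight(m, p) for m in MODELS)

def exact_grad(p, i):
    """Exact dWMC/dp_i, using d w(m)/d p_i = w(m)/p_i if x_i is true in m,
    and -w(m)/(1 - p_i) otherwise."""
    return sum(weight(m, p) * (1 / p[i] if m[i] else -1 / (1 - p[i]))
               for m in MODELS)

def sampled_grad(p, i, k, rng):
    """Unbiased estimate of dWMC/dp_i from k models drawn proportionally to
    their weight. Sampling here is exact because we enumerated the models;
    WeightME instead obtains (approximate) weighted samples via SAT calls."""
    z = wmc(p)
    ws = [weight(m, p) / z for m in MODELS]
    draws = rng.choices(MODELS, weights=ws, k=k)
    score = sum(1 / p[i] if m[i] else -1 / (1 - p[i]) for m in draws) / k
    return z * score

if __name__ == "__main__":
    rng = random.Random(0)
    p = [0.3, 0.6, 0.8]
    print("exact  :", exact_grad(p, 0))
    print("sampled:", sampled_grad(p, 0, 10_000, rng))

Because each sampled score is, in expectation, the derivative of a model's weight divided by the total mass, averaging over weighted model samples recovers the exact gradient in expectation, which is what makes the estimator unbiased. The paper's contribution is obtaining such samples with probabilistic guarantees using only a logarithmic number of SAT solver calls.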
Place, publisher, year, edition, pages
ML Research Press, 2024. p. 34203-34218
Keywords [en]
Adversarial machine learning, Contrastive Learning, Probabilistic logics, Biased approximation, Gradient estimator, Gradient-descent, Logical reasoning, Neural learning, Neural-networks, Probabilistic guarantees, Probabilistic reasoning, Probabilistics, SAT solvers, Neural network models
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:oru:diva-118586
Scopus ID: 2-s2.0-85203816760
OAI: oai:DiVA.org:oru-118586
DiVA, id: diva2:1928205
Conference
41st International Conference on Machine Learning, ICML 2024, Vienna, July 21-24, 2024
Available from: 2025-01-16 Created: 2025-01-16 Last updated: 2025-01-16 Bibliographically approved