Probabilistic graphical models (PGMs) have been popular in artificial intelligence and machine learning ever since they were introduced by Judea Pearl in the late 1980s. Today, they enable numerous applications in domains ranging from robotics to natural language processing and bioinformatics. However, PGMs essentially define a joint probability distribution over a fixed and finite set of variables, and in this way they are akin to propositional logic. Since the early 1990s, researchers have contributed many formalisms that make PGMs more expressive and able to deal naturally with a variable number of objects as well as the relationships among them, just as first-order logic does. These researchers gathered in a field that became known as statistical relational learning (SRL) [1], [2] and pursued the quest for the ultimate "universal" representation and accompanying inference and learning procedures. A good overview of this productive research period is given in [3]. In that overview, Russell distinguishes two types of probabilistic representations: the first defines a "possible world" semantics, the second a semantics based on "probabilistic execution traces". The possible-world semantics provides the underlying intuition for probabilistic logics such as Markov Logic [4], BLOG [5], probabilistic logic programming under Sato's distribution semantics [6], and probabilistic databases [7], while the trace semantics underlies probabilistic programming (PP) languages such as IBAL [8], Church [9], and Anglican [10].
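To make the distinction concrete, the following Python sketch contrasts the two views on a toy alarm example. The example itself, the probabilities 0.1 and 0.2, and all function names are illustrative assumptions, not taken from any of the cited systems: the possible-world view enumerates the worlds induced by independent probabilistic facts, loosely in the spirit of Sato's distribution semantics, while the trace view runs a generative program and conditions on its execution traces, loosely in the spirit of languages such as Church or Anglican.

```python
import random
from itertools import product

# Illustrative probabilistic facts (hypothetical alarm example).
FACTS = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # The alarm rings if there is a burglary or an earthquake.
    return world["burglary"] or world["earthquake"]

# Possible-world view: the independent probabilistic facts induce a
# distribution over worlds; the probability of a query is the total
# weight of the worlds in which the query holds.
def possible_world_prob(query):
    total = 0.0
    for values in product([True, False], repeat=len(FACTS)):
        world = dict(zip(FACTS, values))
        weight = 1.0
        for fact, p in FACTS.items():
            weight *= p if world[fact] else 1.0 - p
        if query(world):
            total += weight
    return total

# Trace view: the model is a program whose random choices define an
# execution trace; conditioning keeps only the traces consistent with
# the evidence (here via naive rejection sampling).
def trace_prob_burglary_given_alarm(num_samples=100_000):
    accepted = hits = 0
    for _ in range(num_samples):
        world = {fact: random.random() < p for fact, p in FACTS.items()}
        if alarm(world):            # observe: the alarm rang
            accepted += 1
            hits += world["burglary"]
    return hits / accepted

print(possible_world_prob(alarm))                 # 0.28
print(possible_world_prob(lambda w: w["burglary"]))  # 0.1 (prior)
print(trace_prob_burglary_given_alarm())          # approx. 0.357
```

Both computations answer the same kind of query (here, the probability of a burglary given that the alarm rang, about 0.36), but they arrive at it from different semantic starting points: a weighted sum over possible worlds versus a distribution over the execution traces of a program.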