Future robots should follow human social norms in order to be useful and accepted in society. In this paper, we show how prior knowledge about social norms, represented using an existing normative framework, can be used to (1) guide reinforcement learning agents towards normative policies and (2) transfer (re-use) learned policies to novel domains. The proposed method does not depend on any particular reinforcement learning algorithm and can be seen as a means of learning abstract procedural knowledge from declarative, domain-independent semantic specifications.
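The general idea can be illustrated with a minimal sketch (not the paper's actual framework): a hypothetical declarative norm, expressed as a predicate over states with an associated penalty, shapes the reward seen by an otherwise unmodified learner, so any reinforcement learning algorithm (here, tabular Q-learning on a toy grid world) is steered towards norm-conforming policies. All names, the norm encoding, and the domain below are illustrative assumptions.

```python
# Illustrative sketch only: norm-guided reward shaping around a generic RL
# algorithm (tabular Q-learning). The norm representation, domain, and all
# names are hypothetical assumptions, not the framework used in the paper.
import random

# A declarative "norm": a predicate over states plus a penalty on violation.
NORMS = [
    {"name": "keep-off-the-grass",
     "violated": lambda s: s in {(1, 1), (2, 1)},
     "penalty": -5.0},
]

def shaped_reward(state, base_reward):
    """Add norm penalties to the task reward without touching the learner."""
    return base_reward + sum(n["penalty"] for n in NORMS if n["violated"](state))

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connected grid moves
GOAL, SIZE = (3, 3), 4

def step(state, action):
    """Toy grid world: move, clip to the grid, +10 reward at the goal."""
    nxt = (min(max(state[0] + action[0], 0), SIZE - 1),
           min(max(state[1] + action[1], 0), SIZE - 1))
    return nxt, (10.0 if nxt == GOAL else -0.1), nxt == GOAL

def q_learning(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1):
    """Standard epsilon-greedy Q-learning; norms enter only via shaped_reward."""
    Q = {}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            qs = [Q.get((state, a), 0.0) for a in ACTIONS]
            a = random.choice(ACTIONS) if random.random() < eps else ACTIONS[qs.index(max(qs))]
            nxt, r, done = step(state, a)
            r = shaped_reward(nxt, r)  # norm knowledge is injected here
            best_next = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (r + gamma * best_next - Q.get((state, a), 0.0))
            state = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print("Learned", len(Q), "state-action values; norm-violating cells are penalised.")
```

Because the norm specification is separate from both the learner and the domain dynamics, the same NORMS list could in principle be reapplied to a different environment, which is the intuition behind re-using learned normative knowledge in novel domains.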