In this article we propose a framework for programming by demonstration of robotic grasping based on principles of the Mirror Neuron System (MNS) model. The approach uses a hand-state representation inspired by neurophysiological models of human grasping. We show that such a representation not only simplifies grasp recognition but also preserves the essential part of the reaching motion associated with the grasp. We further show that if the hand-state trajectory of a demonstration can be reconstructed, the robot is able to replicate the grasp. This is done using motion primitives derived by fuzzy time-clustering from the demonstrated reach-and-grasp motions. To illustrate the approach, we show how human demonstrations of cylindrical grasps can be modeled, interpreted, and replicated by a robot in this framework.
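
To make the fuzzy time-clustering step concrete, the following Python sketch illustrates one common variant of the technique: a demonstrated trajectory is clustered in the joint space of time and signal, and the trajectory is then reproduced as a membership-weighted blend of the cluster centers, with memberships projected onto the time axis. This is a minimal sketch of the general method, not the article's implementation; all names and parameters here (fuzzy_c_means, n_clusters, the fuzziness exponent m, the synthetic signal) are illustrative assumptions.

    import numpy as np

    def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=100, seed=0):
        # Standard fuzzy c-means on the rows of `data` (n_samples x n_features).
        rng = np.random.default_rng(seed)
        u = rng.random((len(data), n_clusters))
        u /= u.sum(axis=1, keepdims=True)                    # random initial memberships
        for _ in range(n_iter):
            w = u ** m
            centers = (w.T @ data) / w.sum(axis=0)[:, None]  # membership-weighted means
            d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
            d = np.fmax(d, 1e-12)                            # avoid division by zero
            ratio = d[:, :, None] / d[:, None, :]            # d_ik / d_ij
            u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
        return centers

    def memberships_in_time(t_query, t_centers, m=2.0):
        # Project memberships onto the time axis: distance to each center's time coordinate.
        d = np.fmax(np.abs(np.asarray(t_query)[:, None] - t_centers[None, :]), 1e-12)
        ratio = d[:, :, None] / d[:, None, :]
        return 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)

    # Synthetic stand-in for one component of the hand state (e.g. grasp aperture over time).
    t = np.linspace(0.0, 1.0, 200)
    x = np.sin(2.0 * np.pi * t) * np.exp(-t)
    centers = fuzzy_c_means(np.column_stack([t, x]), n_clusters=8)

    # Reproduce the trajectory as a membership-weighted blend of the cluster centers.
    u = memberships_in_time(t, centers[:, 0])
    x_hat = u @ centers[:, 1]
    print("reconstruction RMS error:", np.sqrt(np.mean((x - x_hat) ** 2)))

In the framework described above, x would be the multi-dimensional hand-state vector rather than a single synthetic signal, and the resulting time-indexed cluster centers would play the role of the motion primitives from which the robot replicates the demonstrated reach-and-grasp motion.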