In this paper we present an approach to reproducing human demonstrations in a reach-to-grasp context. The demonstration is represented in hand state space. Using the distance to the target object as a scheduling variable, we control the way in which the robot approaches the object. The controller that executes the motion is formulated as a next-state planner: it produces an action from the current state rather than planning the whole trajectory in advance, which can be error-prone in non-static environments. The results have a direct application in Programming-by-Demonstration. They also contribute to cognitive systems research, since the ability to reach-to-grasp supports the development of cognitive abilities.
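To make the next-state planning idea concrete, the following is a minimal Python sketch, not the paper's actual formulation: it assumes the hand state is a 3-D position, encodes the demonstration as (distance-to-target, hand state) pairs, uses the current distance as the scheduling variable to look up a desired state, and returns a single proportional action from the current state. All names, the `gain` parameter, and the demonstration format are illustrative assumptions.

```python
import numpy as np

def next_state_action(hand_state, target_pos, demo, gain=1.0):
    """Compute one action from the current hand state (next-state planning).

    hand_state : current hand position, shape (3,)  [assumed encoding]
    target_pos : target object position, shape (3,)
    demo       : (distance_to_target, hand_state) pairs from the human
                 demonstration, sorted by decreasing distance [assumed format]
    gain       : proportional gain toward the scheduled demonstration state
    """
    # Scheduling variable: current distance to the target object.
    d = np.linalg.norm(target_pos - hand_state)

    # Look up the demonstrated hand state associated with this distance
    # (per-coordinate linear interpolation over the recorded schedule).
    dists = np.array([p[0] for p in demo])
    states = np.array([p[1] for p in demo])
    desired = np.array([np.interp(d, dists[::-1], states[::-1, i])
                        for i in range(states.shape[1])])

    # Action: one step from the current state toward the scheduled state;
    # replanned at every control cycle instead of following a trajectory
    # computed in advance, so the target may move between steps.
    return gain * (desired - hand_state)
```

Because only the next action is computed, the loop `hand_state += dt * next_state_action(...)` can be rerun with an updated `target_pos` at every cycle, which is what makes the scheme robust in non-static environments.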