This paper presents a novel approach to modeling skills acquired from human demonstration. The approach is based on fuzzy modeling and uses a planner to generate corresponding robot trajectories. One of the main challenges stems from the morphological differences between the human and robot hand/arm structures, which make direct copying of human motions impossible in the general case. The planner therefore works in hand-state space, which is defined to be perception-invariant and valid for both the human and the robot hand. We show that this representation simplifies task reconstruction and preserves the essential parts of the task, as well as the coordination between reaching and grasping motions. We also show how our approach generalizes observed trajectories from multiple demonstrations and that the robot can match a demonstrated behavior despite morphological differences. To validate our approach, we use a general-purpose robot manipulator equipped with an anthropomorphic three-fingered robot hand.