Having a robot that can learn from and improve upon a human demonstration is a challenge for robotics scientists, and useful for non-engineers who want a robotic assistant to perform a particular task. In this paper we address some of the difficulties that must be overcome when developing such a system for an articulated manipulator with more degrees of freedom (d.o.f.) than most wheeled mobile robots. Obtaining a good data capture of what is demonstrated to the robot is one such problem. Another key scientific challenge is the curse of dimensionality arising from the high-dimensional state and action spaces in this application, which we propose to address by combining supervised and reinforcement learning to gain the benefits of both paradigms. We also point out that one must be careful when trying to obtain an agent that learns a task in as few trials as possible, since this may require much more computational time.