Robotic manipulation requires a highly flexible and compliant system. Task-specific heuristics usually cannot cope with the diversity of the world outside dedicated assembly lines and generalize poorly. The goal of this project is to explore advances in robotic perception together with novel learning methods, especially deep reinforcement learning, which offer a flexible way to handle uncertainty and allow robots to explore their action space to solve a variety of tasks. Additionally, we believe that, compared to passive sensing, physical interaction with objects can enhance perception and ultimately ease manipulation in difficult surroundings. In particular, we aim to develop improved environment representations suitable for object manipulation. These will serve as input to learning algorithms that enable robots to autonomously acquire new skills through trial and error. We will make the necessary algorithmic adjustments and compare the resulting method against existing approaches that rely on simpler observations. Scientific findings from these experiments will be used to formulate novel interactive perception mechanisms. The developed methods will be evaluated on simulated and real-world robots.
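To illustrate the trial-and-error learning principle underlying this project, the following is a minimal sketch of tabular Q-learning on a hypothetical toy environment (a 1-D grid in which an agent must reach a goal cell). The environment, its parameters, and all function names are illustrative assumptions, not part of the project's actual method; a deep reinforcement learning agent would replace the table with a neural network and the toy grid with rich environment representations.

```python
import random

def step(state, action, goal=4):
    """Hypothetical toy environment: a 1-D grid with 5 cells.

    Action 0 moves left, action 1 moves right; reaching the goal
    cell yields reward 1 and ends the episode.
    """
    next_state = max(0, min(goal, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == goal else 0.0
    return next_state, reward, next_state == goal

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Learn action values purely by trial and error."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy exploration: occasionally try a random action.
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Temporal-difference update toward the bootstrapped target.
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

q = q_learning()
```

After training, the greedy policy derived from `q` moves right in every non-goal state, i.e. the robot has acquired the skill without any task-specific heuristic, purely from interaction.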