Abstract
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions can enable the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poking, pushing, and picking up, with a humanoid robot. The improvement can be measured, allowing the robot to select and perform the right action, i.e. the action that yields the greatest improvement of the detector.
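The two core ideas of the abstract can be sketched in a few lines: a CGP individual is a feed-forward graph of primitive functions evaluated over its inputs, and the robot picks the manipulation action whose execution produced the largest measured improvement of the learned detector. The function set, genome encoding, and improvement scores below are illustrative assumptions, not the paper's actual implementation.

```python
import operator

# Hypothetical primitive-function set; the paper's CGP detectors operate on
# image data with a richer set of image-processing primitives.
FUNCS = [operator.add, operator.sub, operator.mul]

def evaluate_cgp(genome, inputs, output_index):
    """Evaluate a minimal CGP-style genome.

    Each node is a tuple (function_index, input_a, input_b), where the
    input indices refer to program inputs or earlier nodes (feed-forward).
    """
    values = list(inputs)
    for f_idx, a, b in genome:
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[output_index]

def select_best_action(improvements):
    """Given the measured detector improvement per action, pick the best.

    `improvements` maps an action name to the change in detector score
    observed after performing that action.
    """
    return max(improvements, key=improvements.get)

# Example: genome with two nodes over inputs [3, 4]:
# node 2 = inputs[0] + inputs[1], node 3 = node2 * inputs[0]
genome = [(0, 0, 1), (2, 2, 0)]
result = evaluate_cgp(genome, [3, 4], output_index=3)  # (3 + 4) * 3 = 21

# Illustrative improvement scores only (not results from the paper):
measured = {"poke": 0.02, "push": 0.05, "pick-up": 0.11}
best = select_best_action(measured)  # "pick-up"
```

The action-selection step is deliberately simple: once improvement per action is measurable, choosing the next manipulation reduces to an argmax over the candidate actions.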
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 3355-3362 |
| Number of pages | 8 |
| ISBN (Print) | 9781479914845 |
| DOIs | |
| State | Published - Sep 3 2014 |
| Externally published | Yes |