Mohd Faudzi, Ahmad Afif and Takano, Hirotaka and Murata, Junichi (2013) A Study on Visual Abstraction for Reinforcement Learning Problem Using Learning Vector Quantization. In: Proceedings of SICE Annual Conference (SICE), 14-17 Sept. 2013, Nagoya, Japan. pp. 1326-1331.
Abstract
When applying learning systems to real-world problems, which involve many unknown or uncertain factors, several issues need to be solved. One of them is the ability to abstract. In reinforcement learning, the agent learns the best policy for completing each task. Nevertheless, if a different task is given, we cannot know for sure whether the acquired policy is still valid. However, if we can make an abstraction by extracting some rules from the policy, the policy becomes easier to understand and possible to apply to different tasks. In this paper, we apply abstraction at the perceptual level. In the first phase, an action policy is learned using Q-learning; in the second phase, Learning Vector Quantization is used to extract information from the learned policy. It is verified that applying the proposed abstraction method yields a simpler and more useful representation of the learned policy.
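The two-phase procedure described in the abstract can be sketched roughly as follows. The 1-D corridor environment, the hyperparameters, and the choice of the LVQ1 update rule are all illustrative assumptions made here, not details taken from the paper:

```python
import numpy as np

# Illustrative sketch of the two-phase method on a toy task. The corridor
# environment, hyperparameters, and LVQ1 variant are our own assumptions,
# not the authors' actual setup.

def q_learning(n_states=10, goal=5, episodes=400, max_steps=200,
               alpha=0.5, gamma=0.9, seed=0):
    """Phase 1: learn an action policy with tabular Q-learning.
    Actions: 0 = step left, 1 = step right; reward 1 on reaching the goal.
    A purely random behaviour policy suffices since Q-learning is off-policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = int(rng.integers(n_states))          # random start state
        for _ in range(max_steps):
            if s == goal:
                break
            a = int(rng.integers(2))             # random exploration
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

def lvq1_fit(X, y, lr=0.1, epochs=50, seed=0):
    """Phase 2: LVQ1 compresses (state, action) pairs into one prototype
    per action class, abstracting the learned policy into a few rules."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        i = rng.choice(np.flatnonzero(y == c))   # init from a class member
        protos.append(X[i].astype(float))
        labels.append(c)
    W, L = np.vstack(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(W - X[i], axis=1)))
            step = lr * (X[i] - W[j])
            W[j] += step if L[j] == y[i] else -step  # attract / repel
    return W, L

Q = q_learning()
policy = np.argmax(Q, axis=1)                    # greedy action per state
mask = np.arange(10) != 5                        # drop the goal state
X = np.arange(10)[mask].reshape(-1, 1).astype(float)
W, L = lvq1_fit(X, policy[mask])
# Two prototypes now summarise the ten-state policy: roughly
# "left of the goal -> go right" and "right of the goal -> go left".
```

The point of the second phase is compression: the full Q-table has one entry per state-action pair, while the LVQ prototypes summarise the same policy with a handful of representative points, which is the kind of simpler, transferable representation the abstract refers to.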
Item Type: | Conference or Workshop Item (Speech)
---|---
Uncontrolled Keywords: | Abstraction; Learning vector quantization; Reinforcement learning
Subjects: | T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Faculty/Division: | Faculty of Electrical & Electronic Engineering
Depositing User: | Mrs. Neng Sury Sulaiman
Date Deposited: | 28 Oct 2014 07:24
Last Modified: | 03 Mar 2015 09:33
URI: | http://umpir.ump.edu.my/id/eprint/6961