A Study on Abstract Policy for Acceleration of Reinforcement Learning

Ahmad Afif Mohd Faudzi, Hirotaka Takano and Junichi Murata (2014) A Study on Abstract Policy for Acceleration of Reinforcement Learning. In: Proceedings of the SICE Annual Conference (SICE), 9-12 Sept. 2014, Sapporo, Japan, pp. 1793-1798.


Abstract

Reinforcement learning (RL) is well known as a method that can be applied to unknown problems. However, because optimization at every state requires trial and error, the learning time becomes large when the environment has many states. If solutions to similar problems exist and are used during exploration, some of the trial and error can be spared and the learning can take a shorter time. In this paper, the authors propose to reuse an abstract policy, a representative of a solution constructed by a learning vector quantization (LVQ) algorithm, to improve the initial performance of an RL learner in a similar but different problem. Furthermore, it is investigated whether the policy can adapt to a new environment while preserving its performance in the old environments. Simulations show good results in terms of learning acceleration and adaptation of the abstract policy.
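Since no implementation accompanies this record, the following is a minimal sketch of the general idea described in the abstract: Q-learning whose exploratory actions are drawn from an LVQ-style abstract policy (a nearest-prototype lookup) rather than uniformly at random. The grid world, the hand-set prototypes, and this particular reuse scheme are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5x5 grid world (assumption): states are cells, features are (row, col).
n_states, n_actions = 25, 4  # actions: 0 up, 1 down, 2 left, 3 right
features = np.array([[s // 5, s % 5] for s in range(n_states)], dtype=float)

# Hypothetical abstract policy: LVQ prototypes in feature space, each
# labelled with a suggested action. In the paper these would be learned
# from solutions to similar source tasks; here they are hand-set.
prototypes = np.array([[1.0, 1.0], [3.0, 3.0]])
proto_actions = np.array([1, 3])  # "move down" near top-left, "move right" later

def abstract_action(state):
    """Action suggested by the nearest prototype (1-NN over the LVQ codebook)."""
    d = np.linalg.norm(prototypes - features[state], axis=1)
    return int(proto_actions[np.argmin(d)])

def step(state, action):
    """Toy deterministic transition; stands in for the real environment."""
    r, c = divmod(state, 5)
    r = min(max(r + [-1, 1, 0, 0][action], 0), 4)
    c = min(max(c + [0, 0, -1, 1][action], 0), 4)
    s2 = r * 5 + c
    return s2, (1.0 if s2 == n_states - 1 else -0.01), s2 == n_states - 1

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(200):
    s = 0
    for _ in range(100):
        if rng.random() < eps:
            a = abstract_action(s)   # explore via the abstract policy
        else:
            a = int(np.argmax(Q[s])) # exploit current value estimates
        s2, r, done = step(s, a)
        # Standard Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break
```

The point of this reuse scheme is that early exploratory moves follow a plausible prior policy instead of a uniform random walk, which is one way prior solutions could shorten the initial trial-and-error phase the abstract describes.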

Item Type: Conference or Workshop Item (Other)
Uncontrolled Keywords: Abstraction; Prior information; Learning vector quantization; Q-learning
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Faculty/Division: Faculty of Electrical & Electronic Engineering
Depositing User: Mrs. Neng Sury Sulaiman
Date Deposited: 01 Dec 2014 06:49
Last Modified: 19 Apr 2016 07:31
URI: http://umpir.ump.edu.my/id/eprint/7452
