Abstract
HQ-learning is a hierarchical extension of Q(λ)-learning designed to solve certain types of partially observable Markov decision problems (POMDPs). HQ automatically decomposes POMDPs into sequences of simpler subtasks that can be solved by memoryless policies learnable by reactive subagents. HQ can solve partially observable mazes with more states than those used in most previous POMDP work.
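The abstract builds on Q(λ)-learning with reactive (memoryless) policies. As orientation only, here is a minimal illustrative sketch of tabular Watkins-style Q(λ) with replacing eligibility traces on a hypothetical 5-state chain world; the environment, hyperparameters, and function names (`step`, `train`, etc.) are assumptions for illustration, not the paper's HQ architecture, which additionally decomposes the task across a sequence of such subagents via learned subgoals.

```python
import random

N_STATES = 5        # chain 0..4; entering state 4 ends the episode with reward 1
ACTIONS = (-1, +1)  # move left / move right (clipped at the chain ends)

def step(s, a):
    """Toy chain dynamics: deterministic step, reward only at the goal."""
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def select(Q, s, eps, rng):
    """Epsilon-greedy action choice over a tabular Q-function."""
    if rng.random() < eps:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[s][a])

def train(episodes=300, alpha=0.2, gamma=0.95, lam=0.8, eps=0.2, seed=0):
    """Watkins's Q(lambda) with replacing traces (all parameters illustrative)."""
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        E = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]  # eligibility traces
        s, done = 0, False
        a = select(Q, s, eps, rng)
        while not done:
            s2, r, done = step(s, a)
            greedy2 = max(range(len(ACTIONS)), key=lambda x: Q[s2][x])
            a2 = select(Q, s2, eps, rng)
            # One-step TD error toward the greedy successor value
            delta = r + (0.0 if done else gamma * Q[s2][greedy2]) - Q[s][a]
            E[s][a] = 1.0  # replacing trace for the visited pair
            # Watkins's variant: traces are cut after an exploratory action
            decay = gamma * lam if a2 == greedy2 else 0.0
            for si in range(N_STATES):
                for ai in range(len(ACTIONS)):
                    Q[si][ai] += alpha * delta * E[si][ai]
                    E[si][ai] *= decay
            s, a = s2, a2
    return Q
```

The eligibility traces propagate delayed reward back along the visited state-action sequence in a single sweep, which is why a Q(λ)-style learner is a plausible building block for the long subtask sequences the abstract describes.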
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 219-246 |
| Number of pages | 28 |
| Journal | Adaptive Behavior |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jan 1 1997 |
| Externally published | Yes |
ASJC Scopus subject areas
- Behavioral Neuroscience
- Experimental and Cognitive Psychology