Holographic Reduced Representations for Dimensional Attention Learning
Date
2020-02-03
Authors
Wang, Huizhi
Publisher
Middle Tennessee State University
Abstract
One goal of machine learning has been to mimic how the brain learns. In 1992, Kruschke published a computer model called ALCOVE that sought to model human category learning. In 2004, Phillips and Noelle made this model more realistic and closer to human brain activity by incorporating Temporal Difference (TD) learning. However, the model is currently incompatible with readily available encoding techniques, such as the Holographic Reduced Representations (HRRs) used in related cognitive architectures. In this study, the standard ALCOVE mapping function is replaced with the convolution method employed by HRRs. The original function learned in the classic six-type category learning tasks uses only three stimulus dimensions, but other tasks may require more. This study empirically demonstrates that convolution can compress more features into the learning procedure, and that the method remains valid for dimensional attention learning, replicating ALCOVE performance on category learning tasks with binary, separable, and integral features.
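For readers unfamiliar with the encoding scheme the abstract refers to, the core HRR operation is binding by circular convolution (Plate's Holographic Reduced Representations), which combines two vectors into a single vector of the same dimensionality. The sketch below is illustrative only and is not drawn from the thesis; the function names, vector dimensionality, and random-vector initialization are assumptions made for the example.

```python
import numpy as np

def hrr_bind(x, y):
    """Bind two vectors with circular convolution (the HRR binding operation).

    The result has the same dimensionality as the inputs, which is how HRRs
    compress additional features into a fixed-size representation.
    """
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def hrr_unbind(trace, y):
    """Approximately recover x from a bound trace by convolving with the
    involution of y (circular correlation)."""
    y_inv = np.concatenate(([y[0]], y[:0:-1]))  # involution: y[0], y[n-1], ..., y[1]
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.fft.fft(y_inv)))

# Example: bind two random HRR vectors and check approximate recovery.
rng = np.random.default_rng(0)
n = 512                                   # assumed dimensionality for the demo
x = rng.normal(0.0, 1.0 / np.sqrt(n), n)  # HRR vectors use i.i.d. N(0, 1/n) elements
y = rng.normal(0.0, 1.0 / np.sqrt(n), n)

trace = hrr_bind(x, y)
x_hat = hrr_unbind(trace, y)
cosine = x @ x_hat / (np.linalg.norm(x) * np.linalg.norm(x_hat))
print(f"cosine similarity between x and its reconstruction: {cosine:.2f}")
```

The reconstruction is noisy but highly similar to the original vector, which is the property that lets HRR-based architectures pack many features into one fixed-length representation.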