Hierarchical Temporal Memory


Hierarchical Temporal Memory (HTM) is a machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex using an approach somewhat similar to Bayesian networks. The HTM model is based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTMs are claimed to be biomimetic models of cause inference in intelligence.

Jeff Hawkins states that HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with the simplest design that provides the greatest range of capabilities. He compares this to the Palm Pilot, a device he designed that became popular because of its particular blend of existing features. Similarities to existing AI ideas are described in the December 2005 issue of the Artificial Intelligence journal; HTM is also similar to work by Tomaso Poggio and David Mumford.

Description of HTM

This section summarizes Numenta's documentation.[1]

An HTM network is a stack of layers, each composed of nodes, with fewer nodes in "higher" layers. In sensing mode, sensory data enters at the bottom layer; in generating mode, data flows out of the bottom layer. The top layer holds the most general concepts, which determine, or are determined by, smaller concepts in the lower layers that have more precision in time and space. In sensing mode, each layer categorizes information coming in from the layer below into probabilities of the concepts it has in memory. Several concepts are stored in each node.
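As a rough illustration of this layered arrangement, the following Python sketch (hypothetical; the class and function names are not Numenta's) builds a hierarchy in which each node is wired to a fixed set of child nodes and each layer contains fewer nodes than the one below:

    # Hypothetical sketch (names are not Numenta's): each node in a layer is
    # wired to a fixed set of child nodes in the layer below, and higher
    # layers contain fewer nodes than lower ones.

    class Node:
        def __init__(self, children=None):
            self.children = children or []  # fixed "wiring" to lower-layer nodes
            self.groups = []                # learned groups of patterns ("concepts")

    def build_hierarchy(bottom_width, fan_in=4):
        """Build layers bottom-up; each layer has fan_in times fewer nodes."""
        layers = [[Node() for _ in range(bottom_width)]]
        while len(layers[-1]) > 1:
            below = layers[-1]
            parents = [Node(children=below[i:i + fan_in])
                       for i in range(0, len(below), fan_in)]
            layers.append(parents)
        return layers  # layers[0] is the sensory layer, layers[-1] the top

    hierarchy = build_hierarchy(bottom_width=16)
    print([len(layer) for layer in hierarchy])  # [16, 4, 1]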

During training, a node receives an input of sequences of patterns, which are a sampling of the possible sequences it will see in the future. The patterns are separated and stored in several groups based on how likely the patterns are to be near each other in the training sequence(s). Each pattern is assigned to exactly one group, so that there are fewer groups than patterns. Each group represents a single "concept" or "idea" (a.k.a. "cause" in HTM language or "name" in On Intelligence). The probability that a pattern belongs to a particular group is the measure of "belief" that the pattern belongs to that group.
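One simple way to realize such a grouping, assuming the inputs have already been quantized into a finite set of pattern ids, is to count how often patterns occur in adjacent time steps and then greedily merge the most strongly linked groups. The sketch below is only an illustration of this idea; Numenta's actual grouping algorithm differs in its details:

    from collections import defaultdict

    def temporal_groups(sequences, num_groups):
        """Cluster pattern ids by temporal adjacency (greedy sketch).

        sequences: training sequences, each a list of pattern ids.
        Returns a dict mapping pattern id -> group index.
        """
        # Count how often each pair of patterns appears in adjacent time steps.
        adjacency = defaultdict(int)
        patterns = set()
        for seq in sequences:
            patterns.update(seq)
            for a, b in zip(seq, seq[1:]):
                adjacency[frozenset((a, b))] += 1

        # Start with each pattern in its own group, then repeatedly merge the
        # two groups with the strongest temporal link until few enough remain.
        groups = [{p} for p in patterns]
        while len(groups) > num_groups:
            i, j = max(
                ((i, j) for i in range(len(groups)) for j in range(i + 1, len(groups))),
                key=lambda ij: sum(adjacency[frozenset((a, b))]
                                   for a in groups[ij[0]] for b in groups[ij[1]]))
            groups[i] |= groups.pop(j)

        return {p: gi for gi, g in enumerate(groups) for p in g}

    assignment = temporal_groups([[0, 1, 2, 0, 1, 2], [3, 4, 3, 4]], num_groups=2)
    print(assignment)  # patterns 0, 1, 2 share one group; 3 and 4 share the other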

In sensing mode, a node reads an input pattern at each time step and measures its closeness to all the patterns stored in all of its groups. In one implementation, each group simply takes the highest closeness value measured over its patterns as the probability (belief) that the input pattern matches the group. Each belief value is passed to one or more nodes in the next higher layer, as determined by the fixed "wiring" of the group to the receiving nodes. In a more general scheme, a group's belief can be sent to any node in any layer. At a receiving node, the belief values arriving from several groups in several lower nodes are combined, and these combinations form the node's new input patterns. In this way each node outputs several beliefs (probabilities of its groups) to higher-level nodes, where they are compared and combined with beliefs from other nodes.
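The max-over-patterns rule described above can be made concrete with a short sketch. This is an illustrative assumption, not Numenta's code: the closeness measure (here a decaying function of Euclidean distance) and all names are hypothetical:

    import numpy as np

    def node_beliefs(input_pattern, groups):
        """Per-group belief: the best closeness between the input and any
        stored pattern in the group (the simple scheme described above)."""
        beliefs = []
        for stored in groups:  # stored: array of shape (n_patterns, dim)
            # Closeness as a decaying function of Euclidean distance
            # (the actual metric is an assumption of this sketch).
            distances = np.linalg.norm(stored - input_pattern, axis=1)
            beliefs.append(np.exp(-distances.min()))
        return np.array(beliefs)

    groups = [np.array([[0.0, 0.0], [0.1, 0.0]]),  # group 0: two stored patterns
              np.array([[1.0, 1.0]])]              # group 1: one stored pattern
    print(node_beliefs(np.array([0.05, 0.0]), groups))  # group 0 scores highest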

Since there are fewer stored patterns than possible inputs and fewer groups than patterns, resolution in space and time is lost at each node. As a result, higher-level nodes have fewer groups (concepts), each of which categorizes the data over a larger range of space and time.

This overall design is meant to reflect the way the external world is organized. Humans perceive larger concepts, causes, and objects as changing more slowly and as being composed of smaller concepts that change more quickly. Jeff Hawkins believes brains evolved this type of hierarchy to match, predict, and affect the external world's organization.

This section has not discussed feedback, which is a critical part of HTM.

Comparing HTM and the neocortex

HTM attempts to reproduce the overall method of operation of the neocortex, but comparing the two is difficult because the "wiring" of neurons is very complex and not well understood. A single HTM node may represent anywhere from six to millions of neurons, which in the neocortex are organized in columns spanning six layers. These six layers of the neocortex should not be confused with the "layers" of an HTM; rather, different areas of the neocortex correspond to different HTM layers, with the frontal lobe analogous to a higher HTM layer. In Jeff Hawkins' theory of the neocortex, some layer 1 neurons are analogous to the output of a node, and some layer 4, 5, and 6 neurons are analogous to its input. Some layer 2 and 3 neurons are believed to be where the "concepts" of the neocortex reside.

Similarity to other models

Bayesian networks

An HTM can be considered a form of Bayesian network in which the nodes are arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input through a process of finding common spatial patterns and then finding common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between nodes, inherently handle time-varying data, and afford mechanisms for covert attention.
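The two-stage process of finding spatial patterns and then temporal patterns can be sketched by composing a spatial quantization step with the temporal grouping shown earlier. Again, this is a hypothetical illustration, not the Numenta algorithm:

    import numpy as np

    def quantize(inputs, codebook):
        """Spatial step: map each raw input vector to the id of the nearest
        stored spatial pattern (a row of the codebook)."""
        return [int(np.argmin(np.linalg.norm(codebook - x, axis=1))) for x in inputs]

    # A raw sensory sequence is first reduced to a sequence of pattern ids...
    codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    ids = quantize(np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 1.1]]), codebook)
    print(ids)  # [0, 1, 2]
    # ...and such id sequences feed the temporal grouping step sketched earlier,
    # e.g. temporal_groups([ids], num_groups=2).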

Neural networks

Numenta's Director of Developer Services has addressed how HTMs differ from neural networks:

First of all, HTMs are a type of neural network. But in saying that, you should know that there are many different types of neural networks (single-layer feed-forward networks, multi-layer networks, recurrent networks, etc.). 99% of these types of networks tend to emulate the neurons, yet don't have the overall infrastructure of the actual cortex. Additionally, neural networks tend not to deal with temporal data very well, they ignore the hierarchy in the brain, and they use a different set of learning algorithms than our implementation. But, in a nutshell, HTMs are built according to biology: whereas neural networks ignore the structure and focus on the emulation of the neurons, HTMs tend to focus on the structure and ignore the emulation of the neurons.

Implementation

The HTM idea has been implemented in a research release of a software API called the "Numenta Platform for Intelligent Computing" (NuPIC). The software is currently available as a free download and can be licensed either for general research or for academic research.

The implementation is written in C++ and Python.[citation needed]

References

1. Numenta, "General Overview of HTM". http://www.numenta.com/for-developers/education/general-overview-htm.php
