Honors Research on Cortical Learning Algorithms

A repository for files related to my Honors Research Thesis at Westminster College.

Table of Contents



Slides from PhDEconomics:

Introduction to ARMA Models from Wharton

ARMA Lecture Notes from OSU


Numenta Research: Key Discoveries in Understanding How the Brain Works

This video gives an excellent overview of the neocortex and an intuitive understanding of the theory.

Numenta's HTM School with Matt Taylor

HTM School provides a multipart overview of the various components of Hierarchical Temporal Memory.


Particle Swarm Optimization (PSO) Visualized

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling.

PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles.

Each particle keeps track of its coordinates in the problem space associated with the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called pbest. Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle; this location is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest.

At each time step, particle swarm optimization changes the velocity of (accelerates) each particle toward its pbest and lbest locations (in the local version of PSO). Acceleration is weighted by a random term, with separate random numbers generated for the attraction toward the pbest and lbest locations.
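To make the update rule above concrete, here is a minimal Python sketch of the global-best (gbest) variant of PSO. The function name, parameter defaults, and the sphere-function example are illustrative choices for this sketch, not code from this repository.

```python
import random

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO sketch: minimizes `objective` over `dim` dimensions."""
    # Initialize particle positions randomly and velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # each particle's best position so far
    pbest_val = [objective(p) for p in pos]            # and its fitness value
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # best position seen by the whole swarm
    gbest_val = min(pbest_val)

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward pbest + pull toward gbest,
                # with separate random weights on the two attractions.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:                     # update personal best
                pbest_val[i] = val
                pbest[i] = pos[i][:]
                if val < gbest_val:                    # update global best
                    gbest_val = val
                    gbest = pos[i][:]
    return gbest

# Example: minimize the sphere function f(x) = sum(x_i^2) in 2 dimensions.
best = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the local (lbest) version described above, each particle would instead track the best position within its own neighborhood rather than the single swarm-wide gbest.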

In the past several years, PSO has been successfully applied in many research and application areas. It has been demonstrated that PSO can get good results in a faster, cheaper way compared with other methods.

Another reason PSO is attractive is that there are few parameters to adjust. One version, with slight variations, works well across a wide variety of applications, from general-purpose optimization to applications tuned to a specific requirement.

Read more on swarming

A great resource to learn about Cortical Learning Algorithms is, of course, Numenta. They strongly advocate for open science and post their research papers as well as conference posters online at their website. Numenta also has a YouTube channel with lots of helpful resources, and for a gentler introduction, Numenta's Matt Taylor has an excellent YouTube channel called HTM School.

For a comprehensive list of papers and presentations, check the References section of my Honors Research draft, but the resources listed above are a good place to get started.
