For my major project I am creating a Spiking Neural Network (SNN) to control a robot simulation. The goal is to make the
robot perform simple tasks, such as following a wall without crashing into it. The configuration of the SNN will be
evolved using a Genetic Algorithm (GA) that I have not written myself, since creating both an SNN and a GA would take
more time than I have.
SNNs are inspired by biology, and by how the brains of humans and other animals work. Our brains are built from billions
of neurons (nerve cells) that are connected to each other and send small electrical spikes around; in this respect the
neurons are somewhat similar to the logic gates found in computers.
The main parts of a neuron are the soma (cell body), the axon, and the dendrites. The soma is where you find the cell
nucleus and all the other parts neurons have in common with other cells. The axon is a long, cable-like part of the cell
over which electrical impulses are sent to other neurons, while the dendrites are the receiving end, where the neuron
receives spikes from other neurons. When a neuron receives a spike, the spike alters the neuron's membrane potential,
and when the membrane potential exceeds a certain threshold the neuron sends out a spike over its axon.
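To make this concrete, here is a minimal sketch of that mechanism in Python. The class name, threshold, and weights are illustrative assumptions, not taken from my actual implementation:

```python
# Hypothetical minimal neuron: incoming spikes alter the membrane
# potential, and the neuron fires once a threshold is reached.

class Neuron:
    def __init__(self, threshold):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, weight):
        """An incoming spike alters the membrane potential by the
        synaptic weight (positive raises it, negative lowers it)."""
        self.potential += weight

    def fires(self):
        """The neuron spikes once the membrane potential reaches the
        threshold, then resets its potential."""
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

n = Neuron(threshold=2.0)
n.receive(1.0)
print(n.fires())   # False: potential 1.0 is still below the threshold
n.receive(1.0)
print(n.fires())   # True: potential reached 2.0, neuron spikes and resets
```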
Neurons are connected via synapses, and synapses do more than just transmit spikes: a synapse can be excitatory or
inhibitory, meaning that a spike can either increase or decrease the membrane potential of the receiving neuron.
The main difference between an SNN and the more commonly used Artificial Neural Network (ANN) is that an SNN bears more
resemblance to a real neural network, and that it incorporates time into the model. Spikes alter the membrane potential
and may cause the neuron to fire, but the effect of a spike decays over time; this means that a neuron must receive a
number of spikes within the same time frame to make it spike.
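The time aspect can be sketched like this, assuming a simple exponential decay of the membrane potential (the decay factor, threshold, and spike weight below are illustrative, not my actual parameters):

```python
# The contribution of each spike decays over time, so only spikes
# arriving close together can push the potential over the threshold.

DECAY = 0.5        # fraction of the potential that survives each step
THRESHOLD = 1.5

def first_spike(spike_times, steps=10):
    """Return the first step at which the neuron fires, or None."""
    potential = 0.0
    for t in range(steps):
        potential *= DECAY          # the effect of old spikes fades
        if t in spike_times:
            potential += 1.0        # each incoming spike adds 1.0
        if potential >= THRESHOLD:
            return t
    return None

print(first_spike({0, 1}))   # spikes in consecutive steps: fires at step 1
print(first_spike({0, 5}))   # spikes far apart: the neuron never fires
```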
There are several different models of SNNs, offering different levels of realism in how closely they mimic real
neurons. For my implementation I have chosen the Spike Response Model (SRM) and the Integrate-and-Fire Model (IFM). So
far I have only implemented the SRM and have managed to use it to solve the XOR problem. Since I am not yet at the
stage where I can use the GA to evolve an SNN that solves the problem, I built one by hand that does.
The number inside each neuron represents the threshold value that must be reached for it to spike, and the number above
each connecting line represents the weight and sign of the synapse. Say you have the inputs 1 and 0 that you want to
XOR. This input causes Input A to spike, which sends a spike to Hidden A and Hidden B. The spike causes Hidden A to
reach its threshold, so it sends a spike to Output A, and you have your result. Hidden B does not spike because it has
a higher threshold; it would need a spike from both inputs to fire. Hidden B has an inhibitory synaptic connection to
Output A, so its spike lowers the membrane potential of that neuron. So if the input to the network is 1 and 1, Hidden
B spikes, which negates the spikes sent to Output A from Hidden A and Hidden C. If the input is 0 and 0, none of the
input neurons spike, and the result is 0.
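The logic of that network can be sketched as plain Python. The exact weights and thresholds are assumptions on my part (the figure's numbers may differ): Hidden A and Hidden C each fire on one input, Hidden B needs both, and Hidden B's inhibitory weight of -2 cancels the excitation reaching Output A:

```python
# A sketch of the hand-built XOR network described above, with
# assumed thresholds and weights.

THRESH = {"HA": 1, "HB": 2, "HC": 1, "OUT": 1}

def xor_net(a, b):
    # Hidden layer: an input spike is simply a 1
    ha = a >= THRESH["HA"]           # excited by Input A only
    hb = (a + b) >= THRESH["HB"]     # needs spikes from both inputs
    hc = b >= THRESH["HC"]           # excited by Input B only
    # Output: +1 from Hidden A and C, -2 (inhibitory) from Hidden B
    potential = ha + hc - 2 * hb
    return int(potential >= THRESH["OUT"])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, "XOR", b, "->", xor_net(a, b))
```

With inputs 1 and 1, the +1 contributions from Hidden A and Hidden C are exactly cancelled by Hidden B's -2, so Output A stays below its threshold.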
For SNNs time is quite important, so the network will not actually give an answer instantly. The SRM adds a delay to
spikes, to simulate the time it takes for a spike to travel. This is implemented so that the effect of a spike is zero
until the spike has reached a set age. If this age is set to 2 iterations, it takes the network 4 iterations before the
output neuron spikes, since the spike must cross two synaptic hops of 2 iterations each. This notion of time does not
really suit the XOR problem well, so XOR might not be the best way to test a network.
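The delay can be illustrated with a tiny input-hidden-output chain. This is only a sketch of the idea, not my SRM implementation; here a spike is simply delivered a fixed number of iterations after it is emitted:

```python
# With a delay of 2 iterations per synaptic hop, a spike entering at
# t = 0 reaches the hidden neuron at t = 2 and the output at t = 4.

DELAY = 2

def simulate(steps=8):
    """Return the iteration at which the output neuron first spikes,
    for a chain of three neurons with threshold 1 and weight-1 synapses."""
    input_times = {0}          # the input neuron spikes at t = 0
    hidden_times = set()
    output_times = set()
    for t in range(steps):
        # a spike emitted at t' takes effect DELAY iterations later
        if any(t - s == DELAY for s in input_times):
            hidden_times.add(t)
        if any(t - s == DELAY for s in hidden_times):
            output_times.add(t)
    return min(output_times) if output_times else None

print(simulate())   # 4: two hops, each delayed by 2 iterations
```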
That's all about SNNs for now; more to come when I have finished more of the implementation.