
In computational neuroscience, what is an integrate-and-fire neuron? How does it differ from a real neuron?

Jan 27, 2016


Answer

History

An integrate-and-fire neuron, also known as a "leaky integrate-and-fire (LIF) model", is the simplest reasonable neuron model. It's so simple, in fact, that a basic version of it appeared in a 1907 (!) paper by French physiologist Louis Lapicque. I highly recommend you check out Nicolas Brunel and Mark van Rossum's summary of the paper, published 100 years later, for some neat neuroscience trivia. They also trace the re-discovery and development of the LIF model beginning in the 1960s. Larry Abbott also wrote a piece on the Lapicque Model, wherein he takes heart from Lapicque's success. After all, if someone who had no idea that action potentials exist could make modestly correct predictions, then we need not fear the limitations of our still-primitive apparatuses as we engage in theoretical neuroscience!

The LIF model was so easily deducible because it only requires a few basic facts about nerve cells: they have membranes, they're semipermeable, and they're polarizable. This is, in fact, enough to deduce the equivalent circuit of the membrane: an RC, or resistor-capacitor, circuit. Such circuits charge up slowly when presented with a current, then slowly discharge once it's removed.

We can formalize this description of an RC circuit (and so arrive at a leaky integrator) with recourse to some basic electronics and calculus.

Make sure you're very familiar with the equivalent circuit for the neuron before proceeding!

Deriving the LIF from the Equivalent Circuit

The circuit in question has a resistor \(R\), corresponding to the leak ion channels, and a capacitor \(C\), corresponding to the cell membrane. We want to know what happens when a current \(I\) is injected into this circuit, whether by an experimenting neuroscientist or by synaptic transmission. Since our model only has two places for current to go, our first equation is super simple:

$$ I(t) = I_R + I_C $$

where \(t\) is time, \(I_R\) is current through the resistor, and \(I_C\) is current through the capacitor. We can find \(I_R\) using our trusty Ohm's Law, which describes currents across resistors:

$$ I_R = \frac{V}{R} $$

where \(V\) is the voltage across the membrane. Figuring out the capacitor current is a bit trickier: we don't use the capacitor equations as much as we do Ohm's law! But capacitors are actually kinda simple. They have a capacity \(C\) to store charge \(q\), and they store more charge as you increase the voltage across them. Written out in math instead of words, that looks like:

$$ V*C = q $$

But we want to know what happens when the charge is changing – when there's a current. That means we need to look at changes over time, or time derivatives. Put another way, when we change charge (\(dq\)) as we change time (\(dt\)), we get current (\(I\)):

$$ \frac{dV}{dt}*C = I_C = \frac{dq}{dt} $$

Notice that we had to take time derivatives on both sides of the equation! Let's put our two pieces into the original equation.

$$ I(t) = \frac{V(t)}{R} + \frac{dV}{dt}*C $$

Let's rearrange a few terms: multiply both sides by \(R\) and move everything else around. In particular, I'm going to call \(R*C\) something else. Most people use the Greek letter \(\tau\), and they call this number the time constant. Why? Let's see!

$$ \tau*\frac{dV}{dt} = -V(t) + R*I(t) $$

This is a "differential equation", which means an equation with a derivative as the solution. These equations can be pretty tough, even unsolvable, but this one is fairly simple. Let's analyze this one by comparing its responses to a real neuron's.

Similarities to a Real Neuron

This model has a "resting membrane potential" – a voltage where \(\frac{dV}{dt}\) is \(0\). If there's no current, that resting potential is \(0\). See for yourself: \(I(t)\) is \(0\) and \(V(t)\) is \(0\), so the right side is all \(0\). That must mean that the left side, the change in voltage over time, is \(0\) too! So if we're at \(0\), we'll stay there. If we go away from \(0\), the minus sign on \(V(t)\) will make the right side of the equation have the opposite sign from the direction we went. That means that the change in voltage will go in the opposite direction, pulling us negative if we went positive and vice versa. So 0 is what we call a "stable equilibrium potential": a resting potential.

Let's get back to \(\tau\). The bigger \(\tau\) is, the smaller \(\frac{dV}{dt}\) is for any given input – which means the longer it takes for our neuron to respond. So the harder it is for charge to cross the membrane (bigger \(R\)), and the more charge the membrane can store (bigger \(C\)), the longer it takes for the neuron to reach its steady state when an input is given.
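
To make this concrete, the equation can be solved exactly for a constant current \(I\) switched on at \(t = 0\), with the voltage starting at rest (\(V(0) = 0\)):

$$ V(t) = R*I*\left(1 - e^{-t/\tau}\right) $$

The voltage climbs toward its steady-state value \(R*I\), and \(\tau\) sets the pace: after one time constant, the voltage has covered about 63% of the distance; after five, it has essentially arrived.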

Differences from a Real Neuron

Anyone who has ever worked with a real neuron is probably scratching their head in confusion, if not fuming in anger, at this point. "This leaky-integrator thingamajig is nothing like a real neuron! It doesn't even spike!"

This is fixed in a fairly arbitrary way: whenever the voltage reaches some threshold value, we declare "and lo, an action potential was generated" and reset the voltage to 0. This isn't as crazy as it seems – the action potential process is much faster than the changes in voltage due to synaptic currents. Doing so results in a neuron whose firing rate scales approximately linearly with the injected current, in rough agreement with experiments.
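
In fact, using the step-response solution from earlier, we can compute this firing rate exactly. Writing \(V_{th}\) for the threshold, a constant current with \(R*I > V_{th}\) drives the voltage from the reset value \(0\) up to \(V_{th}\) in a fixed interspike interval:

$$ f = \frac{1}{T_{ISI}}, \qquad T_{ISI} = \tau*\ln\left(\frac{R*I}{R*I - V_{th}}\right) $$

For currents well above threshold, \(f\) grows approximately linearly with \(I\); near threshold, it plunges toward zero.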

This agreement breaks down when large currents are injected. Of course, at a certain point, a real neuron is going to explode, and that's not something we've built our model to do. Even before that regime, however, there are problems for our model, primarily due to the absolute refractory period. That period is caused by fast-inactivating voltage-gated sodium channels, which are a bit more reasonable to include in a model than electrocution. The relatively simple step of holding the voltage fixed for a brief period after it is reset to 0 following a spike goes a long way toward closing this gap.
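
Here's how those two fixes look in a simulation sketch – again, the threshold, reset, and refractory values are illustrative, not canonical:

```python
# A spiking LIF sketch: the same forward-Euler integration as before, plus a
# threshold, a reset to 0, and an absolute refractory period during which the
# voltage is held fixed. All parameter values are illustrative.
R, C = 100e6, 100e-12
tau = R * C              # time constant: 10 ms
dt = 1e-4                # integration step: 0.1 ms
V_th = 15e-3             # spike threshold: 15 mV above rest
t_ref = 2e-3             # absolute refractory period: 2 ms
I = 0.2e-9               # constant drive; R*I = 20 mV > V_th, so it spikes

V, refractory_left, spike_times = 0.0, 0.0, []
for step in range(2000):                  # 200 ms of simulated time
    if refractory_left > 0.0:
        refractory_left -= dt             # hold V at reset; skip integration
        continue
    V += (dt / tau) * (-V + R * I)
    if V >= V_th:
        spike_times.append(step * dt)     # "and lo, an action potential"
        V = 0.0                           # reset to rest
        refractory_left = t_ref

print(f"{len(spike_times)} spikes in 200 ms (~{len(spike_times) / 0.2:.0f} Hz)")
```

With these numbers, the formula above predicts an interspike interval of \(\tau*\ln(20/5) \approx 13.9\) ms, plus the \(2\) ms refractory period – so the printout should land near 60 Hz.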

But the real killer for this model comes when we move from examining currents injected experimentally to synaptic currents. The trouble is, synaptic transmission doesn't directly cause a current. Instead, the downstream result of a synaptic event is the opening of ligand-gated ion channels (ignoring metabotropic receptors). These channels change the resistance of the cell, and they introduce a new kind of term into the right side of the equation: the synaptic conductance multiplied by the driving force \(\left(V(t) - E_s\right)\), where \(E_s\) is the reversal potential of the channel.
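
As a sketch (the notation here is mine: \(g_s(t)\) for the synaptic conductance, \(E_s\) for its reversal potential), the equation becomes:

$$ \tau*\frac{dV}{dt} = -V(t) - R*g_s(t)*\left(V(t) - E_s\right) + R*I(t) $$

The crucial change: the synaptic input now multiplies the voltage rather than simply adding to it.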

This small difference can lead to very different properties. For example, it leads to non-linear interactions between inputs, like shunting inhibition. These interactions qualitatively change the kinds of computations that a neuron can perform.
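
To see where the non-linearity comes from, consider an inhibitory synapse whose reversal potential sits right at rest, \(E_s = 0\). Plugging that into the equation above and collecting the \(V(t)\) terms shows the synapse injects no current at rest; instead, it divides the effective resistance and time constant:

$$ R_{eff} = \frac{R}{1 + R*g_s}, \qquad \tau_{eff} = \frac{\tau}{1 + R*g_s} $$

Such a synapse scales down the neuron's response to every other input rather than subtracting from it – hence "shunting" (divisive) inhibition.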

And that's just the beginning! More sophisticated neuron models can spike properly, oscillate and ring, and even burst! The most faithful models include separate biochemical compartments, regions of the cell where currents and even protein networks can act locally.

Of course, such models are completely intractable mathematically – studying them involves experiments, which are less wet but no less time-consuming and difficult than "real lab work". The magic of modeling lies not in coming up with the closest match to reality, no matter how complicated, but in stripping away everything but the necessary pieces. From another perspective, the hard work is in figuring out what questions a model can answer, rather than what it gets wrong.

References

  • Brunel N and van Rossum MCW, 2007. Lapicque's 1907 paper: from frogs to integrate-and-fire. Biol Cybern.

  • Abbott LF, 1999. Lapicque's introduction of the integrate-and-fire model neuron (1907). Brain Res Bull.

  • Gerstner W and Kistler WM, 2002. Spiking Neuron Models, Section 4.1: Integrate-and-fire model. Cambridge University Press.