Posts

  • Gaussians as a Log-Linear Family

  • Short Paper on Square Roots and Critical Points

    In the next section, we define an analogous algorithm for finding critical points. That is, we again try to solve a root-finding problem with Newton-Raphson, but this introduces a division, which we reformulate as an optimization problem.

    Today a short paper I wrote was posted to the arXiv. It’s on a cute connection between the algorithm I use to find the critical points of neural network losses and the algorithm used to compute square roots to high accuracy.

    Check out this Twitter thread for a layman-friendly explanation.
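
    To make the connection concrete, here is a minimal sketch of the textbook Newton-Raphson square-root iteration (Heron's method); it is not the paper's critical-point algorithm, just the simplest place to see the "root-finding step that introduces a division" pattern from the excerpt above.

    def newton_sqrt(a, num_iters=6, x0=1.0):
        """Approximate sqrt(a) with Newton-Raphson applied to f(x) = x**2 - a."""
        x = x0
        for _ in range(num_iters):
            # The update x <- x - f(x) / f'(x) simplifies to (x + a / x) / 2
            # (Heron's method); note the division by the current iterate.
            x = 0.5 * (x + a / x)
        return x

    print(newton_sqrt(2.0))  # 1.414213562..., agreeing with sqrt(2) to machine precision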

  • Tails You Win, One Tail You Lose

    Controversy over hypothesis testing methodology encountered in the wild a second time! At this year’s Computational and Systems Neuroscience conference, CoSyNE 2019, there was disagreement over whether the acceptance rates indicated bias against women authors. As it turns out, part of the dispute turned on which statistical test to run!
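
    To make the one-tailed-versus-two-tailed distinction concrete, here is a minimal sketch using made-up acceptance counts (not the actual CoSyNE numbers); the same 2×2 table yields different p-values depending on which alternative hypothesis you choose.

    from scipy.stats import fisher_exact

    # Hypothetical (accepted, rejected) counts for two author groups --
    # illustrative numbers only, not the CoSyNE 2019 data.
    table = [[60, 140],
             [45, 155]]

    # Two-sided: are the two groups' acceptance odds different at all?
    _, p_two_sided = fisher_exact(table, alternative="two-sided")
    # One-sided: are the first group's acceptance odds specifically higher?
    _, p_one_sided = fisher_exact(table, alternative="greater")

    print(p_two_sided, p_one_sided)  # two different answers to two different questions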

  • Multiplication Made Convoluted, Part II: Python
    import numpy as np

    class DecimalSequence:
        """A number represented as a sequence of its decimal digits."""

        def __init__(self, iterable):
            # np.int was removed from recent NumPy releases; use the builtin int
            arr = np.atleast_1d(np.squeeze(np.asarray(iterable, dtype=int)))
            self.arr = arr

        def multiply(self, other):
            # Digit-wise multiplication is a convolution of the digit sequences;
            # the result may contain "digits" above 9 until carries are applied.
            return DecimalSequence(np.convolve(self.arr, other.arr))
    
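    A quick usage check (a sketch, assuming digits are given most-significant first; the carrying step from Part I still has to be applied to the raw convolution output):

    twelve = DecimalSequence([1, 2])
    thirteen = DecimalSequence([1, 3])
    print(twelve.multiply(thirteen).arr)  # [1 5 6] -> 156 = 12 * 13, no carries needed

    # With larger digits, the raw convolution contains entries above 9:
    print(DecimalSequence([2, 5]).multiply(DecimalSequence([2, 5])).arr)
    # [ 4 20 25] -> after carrying, 625 = 25 * 25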

  • Multiplication Made Convoluted, Part I: Math

  • Fréchet Derivatives 4: The Determinant

  • Fréchet Derivatives 3: Deep Linear Networks

  • Google Colab on Neural Networks

    The core ideas that make up neural networks are deceptively simple. The emphasis here is on deceptive.

    For a recent talk to a group of undergraduates interested in machine learning, I wrote a short tutorial on what I think are the core concepts needed to understand neural networks, presented so that someone with no more than high school mathematics and a passing familiarity with programming can follow along.

    Thanks to the power of the cloud, you can read the tutorial and run the examples yourself: just follow this link. This time, I chose to use Google’s “Colaboratory”, which is like Google Drive for Jupyter notebooks.

  • Functors and Film Strips


  • Use You Jupyter Notebook For Great Good

    Binder

    As part of this year’s Data Science Workshop at Berkeley, I put on a tutorial on using Jupyter Notebooks: a quick sprint over the basics and then examples for inline animations and videos, embedded iframes, and interactive plotting!

    Click the badge above to launch the tutorial on binder. You’ll want to check out the JupyterNotebookForGreatGood folder.

    Check out the repo it’s built from here, where you can find local installation instructions.
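
    For a taste of what the tutorial covers, here is a minimal sketch (not taken from the tutorial itself) of two of those tricks inside a notebook cell: embedding an iframe and making a plot interactive with ipywidgets.

    # Run inside a Jupyter notebook cell.
    import numpy as np
    import matplotlib.pyplot as plt
    from IPython.display import IFrame, display
    from ipywidgets import interact

    # Embed another web page inline.
    display(IFrame("https://jupyter.org", width=800, height=300))

    # Interactive plotting: a slider re-draws the sine wave at a new frequency.
    def plot_sine(frequency=1.0):
        xs = np.linspace(0, 2 * np.pi, 200)
        plt.plot(xs, np.sin(frequency * xs))
        plt.show()

    interact(plot_sine, frequency=(0.5, 5.0, 0.5))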

  • Hypothesis Testing
              $$2+2=5$$   $$2+2\neq5$$
    Tails     $$0$$       $$0.5$$
    Heads     $$0$$       $$0.5$$

    To celebrate the latest stable version of Applied Statistics for Neuroscience, here’s a tutorial on hypothesis testing, based on the lecture notes for the course. Make sure to check out the whole course if you liked this snippet!

  • Fréchet Derivatives 2: Linear Least Squares

  • Fréchet Derivatives 1: Introduction

  • How Long is a Matrix?

  • Hypothesis Testing in the Wild

    Apart from being an interesting exercise in the real-life uses of probability, this example, with its massive gap between the true negative rate and the negative predictive value, highlights the importance of thinking critically (and Bayesian-ly) about statistical evidence.
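
    As a worked sketch of how that gap can arise (with made-up numbers, not the figures from the post): Bayes’ rule turns a true negative rate and a base rate into a negative predictive value, and when the condition being tested for is common enough, a test with an excellent true negative rate can still have a poor negative predictive value.

    # Illustrative numbers only -- not the figures from the post.
    prevalence = 0.9           # P(condition)
    sensitivity = 0.5          # P(test positive | condition)
    true_negative_rate = 0.99  # P(test negative | no condition), a.k.a. specificity

    # Bayes' rule: P(no condition | test negative)
    p_negative = (true_negative_rate * (1 - prevalence)
                  + (1 - sensitivity) * prevalence)
    negative_predictive_value = true_negative_rate * (1 - prevalence) / p_negative

    print(negative_predictive_value)  # ~0.18, far below the 0.99 true negative rate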

  • A Differential Equations View of the Gaussian Family

  • The Surprise Game

  • Mixture Models and Neurotransmitter Release
    import numpy as np

    def generate_number_releases(size=1):
        # The number of vesicles released on each trial is Poisson (mean 2.25).
        return np.random.poisson(lam=2.25, size=size)

    def generate_measured_potentials(size=1):
        release_counts = generate_number_releases(size=size)

        measured_potentials = [generate_measured_potential(release_count)
                               for release_count in release_counts]

        return np.asarray(measured_potentials)

    def generate_measured_potential(release_count):
        # Each release contributes a potential drawn from a Gaussian with
        # mean 0.4 and standard deviation 0.065; the measured potential is
        # the sum over all releases on that trial.
        measured_potential = np.sum(0.4 +
                                    0.065 * np.random.standard_normal(size=release_count))

        return measured_potential
    
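    A quick way to see the mixture structure (a usage sketch, not from the original post): simulate many trials and histogram the measured potentials; the histogram shows one bump per possible release count.

    import matplotlib.pyplot as plt

    potentials = generate_measured_potentials(size=10000)

    plt.hist(potentials, bins=100)
    plt.xlabel("measured potential")
    plt.ylabel("number of trials")
    plt.show()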

  • Tutorial on Linear Generative Models


    Inspired by the discussion of linear factor models in Chapter 13 of Deep Learning by Goodfellow, Bengio, and Courville, I wrote a tutorial notebook on linear generative models, including probabilistic PCA, Factor Analysis, and Sparse Coding, with an emphasis on visualizing the data generated by each model.

    You can download the notebook yourself from GitHub or you can click the badge below to interact with it in your browser without needing a compatible Python computational environment on your machine.

    Binder
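
    The shared skeleton of those models (a minimal sketch, not code from the notebook): draw a latent vector, push it through a linear map, and add noise. Probabilistic PCA, Factor Analysis, and Sparse Coding differ mainly in the prior on the latents and the structure of the noise.

    import numpy as np

    rng = np.random.default_rng(0)
    num_samples, latent_dim, observed_dim = 500, 2, 5

    # Hypothetical parameters, chosen only for illustration.
    W = rng.standard_normal((observed_dim, latent_dim))  # loading / dictionary matrix
    noise_scale = 0.1

    # Probabilistic-PCA-style generation: Gaussian latents, isotropic Gaussian noise.
    z = rng.standard_normal((num_samples, latent_dim))
    x = z @ W.T + noise_scale * rng.standard_normal((num_samples, observed_dim))

    # Sparse-coding-style generation: same linear map, but a sparsity-inducing
    # Laplace prior on the latents.
    z_sparse = rng.laplace(size=(num_samples, latent_dim))
    x_sparse = z_sparse @ W.T + noise_scale * rng.standard_normal((num_samples, observed_dim))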

  • Graphical Model View of Discrete Channels


  • Linear Algebra for Neuroscientists


  • Tutorial Notebooks on Machine Learning in Python


    Head to this GitHub link to check out a collection of educational Jupyter notebooks that I co-wrote as part of a workshop on data science.

  • Convolution Tutorials Redux

    Previously, I posted a link to some Jupyter-based tutorials on convolution that I wrote. In order to use them, you needed to install an appropriate computing environment.

    Now, thanks to the folks at Binder and the magic of the cloud, you can just click this link and use them with nothing more than a web browser.

    Neat!

  • Statistics in One Sentence

    Statistics is the study of pushforward probability measures from a probability space of datasets to a measurable space of statistics under maps that we call statistical procedures.
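
    Unpacked in standard measure-theoretic notation (a small gloss, not part of the original post): if a statistical procedure is a measurable map $$T : (\Omega, \mathcal{F}, P) \to (S, \mathcal{S})$$ from the space of datasets to the space of statistics, then the object of study is the pushforward measure

    $$(T_* P)(A) = P\left(T^{-1}(A)\right) \quad \text{for every } A \in \mathcal{S}.$$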

  • Paper in Print at Journal of Neurophysiology


    To evaluate the developmental ontogeny of spontaneous circuit activity, we compared two different areas of sensory cortex that are also differentiated by sensory inputs that follow different developmental timelines. We imaged neuronal populations in acute coronal slices of mouse neocortex taken from postnatal days 3 through 15. We observed a consistent developmental trajectory of spontaneous activity, suggesting a consistent pattern for cortical microcircuit development: anatomical modules are wired together by coherent activations into functional circuits.

    The final version of my research paper with Jason MacLean on the developmental time course of spontaneous activity in mouse cortex is now available through the Journal of Neurophysiology.

    Check it out! You can also read a layman’s summary here.

  • Guest post at Because-Science

    …nature adopts a strategy straight out of Saw II: motor neurons are, from the moment they are born, searching frantically for the antidote to a poison that will kill them when a timer runs out. They are, like Biggie Smalls, born ready to die.

    Head to Because-Science to check out a fun little guest blog post I wrote explaining the process by which neurons and muscles find each other!

  • What is information theory? What does entropy measure? Mutual information?


  • Convolution Tutorial IPython Notebooks

    I recently gave a tutorial on convolutions. You can check out the IPython Notebooks at the GitHub repo for Berkeley’s Neuro Data Mining Group.

    For more information about the group, check out our website. Come join us if you’re interested!

  • What is Bayes' Rule?


subscribe via RSS