Before scientists can unlock the secrets of the human brain, they must fully understand neurons, the cells that make up the brain, spinal cord and the rest of the nervous system. Thousands of detailed neuron images from different organisms currently sit in individual data collections across the globe, comprising several petabytes of data altogether. Despite this wealth of data, made possible by advances in brain cellular imaging, data standardization remains a major hurdle to gaining an accurate understanding of how neurons work.

Over the years, dozens of imaging paradigms and algorithms have been created for visualizing the 3D structure of neurons, leading to a variety of disparate datasets in the field. But neuroscientists widely agree that to solve the mysteries of the brain, they need to cross-compare these datasets. That's why many of the field's brightest minds are participating in BigNeuron, a community effort to define and advance the state of the art in single-neuron reconstruction and analysis and to create a common platform for analyzing 3D neuronal structure.

In an effort to identify a standard neuron reconstruction algorithm, BigNeuron will sponsor a series of international hackathons and workshops in which contending algorithms will be ported onto a common software platform and used to analyze the physical structure of neurons from the same core dataset. All ported algorithms will be bench-tested at Department of Energy supercomputing centers, including the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (Berkeley Lab) and the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL), as well as at Human Brain Project supercomputing centers. This large-scale testing will allow the community to standardize optimal protocols for labeling, visualizing and analyzing neuronal structure and its key biological features.
