
Seyfarth EA (2006). Julius Bernstein (1839-1917): pioneer neurobiologist and biophysicist. Biological cybernetics, 94 (1), 2-8 PMID: 16341542

We at Sick Papes do not operate under the illusion that we are the only folks who love a dank ‘ticle. But it’s rare to find a practicing researcher who also takes the time to investigate the important historical exploits of his/her field. Throughout his career, Ernst-August Seyfarth has done just that, authoring several papes about relatively obscure hero neurobiologists such as Ludwig Mauthner (1840-1894), who discovered the infamous Mauthner cell, Julia B. Platt (1857-1935), one of the first comparative neuro-embryologists, Tilly Edinger (1897-1967), an early paleoneurologist, and Johann Flögel (1834-1918), one of the first insect neuroanatomists. At the same time, Dr. Seyfarth sustained a thriving research program studying mechanosensation in spider slit sensilla, as well as spider behavior (at one point, investigating the origin of spider push-ups).

My favorite of Seyfarth’s historical narratives is the story of Julius Bernstein, a bad-ass German physiologist who was the first to describe the action potential (despite this noteworthy achievement, I had never heard of the guy). Bernstein was born in Berlin, where, after completing medical school, he returned to do a post-doc with the perpetually belligerent Hermann von Helmholtz, a pioneer of physics, physiology, philosophy, and competitive lifelong moustache consistency. While trying to best his boss with a blend of buffed bald-head/bushy beard, Bernstein developed a brilliant contraption called a differential rheotome, or “time slicer”.

The rheotome (shown above) was a sort of “ballistic galvanometer” that consisted of a turntable that opened and closed two circuits as it rotated. One circuit transiently stimulated the nerve, and the other instantaneously recorded its response. Both the stimulation and recording periods lasted only a fraction of a millisecond, depending on the speed of the turntable. The temporal offset of the stimulus and recording epochs could also be changed by adjusting the angle between the two switches. By varying these parameters and averaging over many trials, Bernstein built up a full picture of a single action potential in a frog nerve.
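
Bernstein’s trick is essentially what oscilloscope folks now call equivalent-time sampling: grab one brief sample per trial at a known delay, average away the noise, then sweep the delay to trace out the whole waveform. Here is a toy sketch of the idea (the spike shape, noise level, and all numbers are invented for illustration, not taken from Bernstein’s data):

```python
import numpy as np

def spike(t):
    """A made-up action-potential-like waveform (arbitrary units, t in ms)."""
    return np.exp(-((t - 1.0) ** 2) / 0.1) - 0.3 * np.exp(-((t - 1.6) ** 2) / 0.3)

rng = np.random.default_rng(0)
offsets = np.arange(0, 3.0, 0.05)   # stimulus-to-recording delay per turntable setting (ms)
n_trials = 200                      # repeated trials averaged at each delay

# One brief, noisy sample per trial; averaging beats the noise down by ~sqrt(n_trials)
reconstruction = np.array([
    np.mean(spike(d) + 0.5 * rng.standard_normal(n_trials)) for d in offsets
])

true_wave = spike(offsets)
print(float(np.max(np.abs(reconstruction - true_wave))))  # small residual noise
```

Sweeping the delay one batch of trials at a time is the software analog of adjusting the angle between the rheotome’s two switches; the averaged points stitch together into the full action potential.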

After this achievement, Seyfarth describes how Bernstein went on to formulate an eerily prescient “Membrane Theory of Electrical Potentials”, based on the predictions of the hauntingly familiar Nernst equation. It’s insane how right Bernstein was most of the time. Maybe it’s just that his stupid ideas and shitty experiments were forgotten? It’s hard to say, but Seyfarth attributes Bernstein’s stellar accomplishments to his unflappable attention to detail.
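
For the uninitiated, the Nernst equation gives the membrane voltage at which an ion’s concentration gradient is exactly balanced by the electrical gradient pulling the other way, which is why Bernstein could treat the resting potential as a potassium diffusion potential. A quick back-of-the-envelope sketch (the concentrations are generic textbook potassium values, not Bernstein’s measurements):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 293.0      # about 20 degrees C; a comfortable frog
z = 1          # valence of K+

K_in, K_out = 140.0, 4.0   # illustrative intracellular / extracellular K+ (mM)

# Nernst equation: E = (RT/zF) * ln([out]/[in])
E_K = (R * T) / (z * F) * math.log(K_out / K_in)
print(round(E_K * 1000, 1), "mV")  # roughly -90 mV, in the ballpark of a resting potential
```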

Bernstein is great, but Seyfarth deserves a boatload of credit here too, for illuminating Bernstein’s illustrious career and writing it down in this sick historical pape. Now that he’s retired, I hope that Dr. Seyfarth returns to the archives to dig up more sick historical anecdotes about neurobiologists of yore.

Contributed by butthill

Dunlap, K. and Mowrer, O. H. (1930). Head movements and eye functions of birds. J. Comp. Psychol. 11, 99–113.

People make a lot of hay out of the series of photographs by Eadweard Muybridge that Governor Leland Stanford commissioned to figure out whether all four hooves of a galloping horse are airborne at the same time. Muybridge ingeniously used an array of cameras that were triggered sequentially by the horse busting through a series of trip wires. The result of this experiment was that there was definitely a moment in the horse’s gait when all four feet were off the ground. (Stanford went on to found the university that produced Olympic water polo player Tony Azevedo; Muybridge ended up shooting a man who slept with his wife, but was acquitted on grounds of “justifiable homicide”.)

Now I don’t want to dump on the birth of cinema, but I’ve just been out watching horses gallop around all morning, and I’m pretty sure that I could reproduce Muybridge’s experiment with some study drugs and a mug of properly mulled cider. Things are always clearer in hindsight, but it didn’t take much squinting to convince me that horses are airborne every quarter second or so. Perhaps Muybridge and Stanford were half blind from living in an era before proper sunglasses, or maybe horses were faster in the 19th century because there were no clocks and all the conductors had to count continuous Mississippis to keep the trains on time.

Whether or not the Muybridge horse study was necessary, subsequent developments in rapid picture-taking have proven incredibly useful for the study of biomechanics. Today I want to discuss an early example of how the camera can be used to compensate for the inability of us humans to fully appreciate animals. 

Many people have wondered, “What’s up with pigeons bobbing their heads all crazy while they walk”, but most people are too afraid to blog about it. As Dunlap and Mowrer, the authors of today’s sick pape, put it, “The forward and apparent backward movements of the head which pigeons, chickens, and certain other fowls display while walking have been commented on by various persons orally, but seldom in print.” 

It may have occurred to you that this jerky head movement is an accident of the pigeon’s walking gait, perhaps analogous to the swinging of a human’s arms. But this is wrong. In 1930, Dunlap and Mowrer took some great photos that proved that bird head bobbing is just an illusion. In fact, it is you, the viewer, who is lurching ferociously back and forth, and the bird is perfectly motionless! That’s not actually true. What is really happening, Dunlap and Mowrer found, is that when the bird’s body is moving, the head is completely still. In other words, the head is locked in position relative to the forward moving body. Then, when the body stops for a brief moment, the head thrusts rapidly forward to a new position. So, overall, the head is maintained in a stable position relative to the body. The stroboscopic photo above, from a sick follow-up pape by B.J. Frost in 1978, illustrates this nicely.

This head stabilization has obvious benefits for vision, as it is much more difficult to analyze a visual scene when your head is shaking. Another set of experiments by B.J. Frost in the 70’s clearly demonstrated that head-bobbing is controlled by vision, as pigeons walking on treadmills don’t bob at all (because the visual scene is stationary).

The findings of Dunlap and Mowrer in 1930, and subsequent work by B.J. Frost and other enthusiastic bird biomechanists, are a superb example of how the world is incredibly fast and confusing, and only photographic magic and detailed quantification can distill truth from all the chaos.

Contributed by butthill

Zhao ZJ, Song DG, Su ZC, Wei WB, Liu XB, & Speakman JR (2013). Limits to sustained energy intake. XVIII. Energy intake and reproductive output during lactation in Swiss mice raising small litters. The Journal of experimental biology, 216 (Pt 12), 2349-58 PMID: 23720804

Although binging is often attributed to weak human character, a substantial binge can also help a person get in touch with his/her reckless animal roots. Whether it involves a steaming heap of elk intestines or 3 seasons of Arrested Development, there are some treats that evolution has wired animals to consume beyond the point of reasonable satiety. Giving in to these deep urges is one of the many so-called flaws that the Catholic Church utterly failed to eradicate from our animal constitution.

A recent binge was triggered by the current issue of The Journal of Experimental Biology, which contained no fewer than IV sick papes about mouse lactation from Dr. John Speakman and colleagues. Further research revealed that, over the past decade, Speakman’s lab has published XVIII papers on this subject, each bearing the formulaic title: Limits to sustained energy intake., etc. This linear corpus of papes is ideally suited to sautéing an entire day in thick fatty mouse milk.

Each of these papes poses the same basic question: which factors determine an animal’s physiological limits? Speakman and colleagues study this question in lactating mice, who expend a massive amount of energy to produce milk for their thirsty pups. Two initial proposals were that milk production is limited by (I) the ability of the gut to digest food or (II) the efficiency of the mammary gland itself.

Through the first X papes in the series, Speakman and his jolly giants tested these hypotheses, as well as a couple other clever theories they dreamed up. My favorite among this back-catalogue is the evocatively titled: Limits to sustained energy intake. X. Effects of fur removal on reproductive performance in laboratory mice.

In this pape, the authors test the hypothesis that energy intake is limited by the capacity of an animal to dissipate heat. They increased the ability of lactating female mice to dissipate heat by shaving them bald as porpoises. Shaved mice ate more heartily and produced more milk, which in turn increased the size of their adorable mouse children. This result contradicted the long-held views that nursing performance is limited by the efficiency of the mother mouse’s digestion and subsequent milk production.

Although these initial results suggested that there might be one or a couple limitations to energy expenditure, the most recent papes (XIV - XVIII) show that the story is actually much more complicated. Under different environmental conditions, lactation efficiency and offspring growth are limited by several overlapping factors. There are also important differences across mouse strains. Despite the lack of simplicity in the underlying biology, the narrative organization of these XVIII papes that ask the same, seemingly basic, question demonstrates an experimental doggedness that you’ve got to respect.

Contributed by butthill

SICK PAPES SPECIAL ON CONTROVERSY: PART 2

Curr Biol. 2010 Sep 14;20(17):1534-8.
The role of the magnetite-based receptors in the beak in pigeon homing.
Wiltschko R, Schiffner I, Fuhrmann P, Wiltschko W.

VERSUS

Nature. 2009 Oct 29;461(7268):1274-7.
Visual but not trigeminal mediation of magnetic compass information in a migratory bird.
Zapka M, Heyers D, Hein CM, Engels S, Schneider NL, Hans J, Weiler S, Dreyer D, Kishkinev D, Wild JM, Mouritsen H.

AND

Nature. 2012 Apr 11;484(7394):367-70.
Clusters of iron-rich cells in the upper beak of pigeons are macrophages not magnetosensitive neurons.
Treiber CD, Salzer MC, Riegler J, Edelman N, Sugar C, Breuss M, Pichler P, Cadiou H, Saunders M, Lythgoe M, Shaw J, Keays DA.

There are some scientific subjects that attract recreational bedlamites like seagulls to a coastal landfill. My favorite of these is magnetoreception: the ability of an animal to perceive an ambient magnetic field. Lots of animals can do this—birds, insects, reptiles— and some of them use the earth’s weak magnetic asymmetry to achieve extraordinary feats of navigation. For example, scientific hero Ken Lohmann has shown that sea turtles navigate thousands of miles through the horrific salty ocean in order to meet their half-shelled-brethren at a specific location for an annual Bacchanalian picnic. Ken’s lab also found that if you move a spiny lobster 20 miles in any direction from its preferred hangout spot, it immediately returns directly to its headquarters using cues from the earth’s magnetic field. These and bajillions of other examples demonstrate that many of the earth’s macro-biotic inhabitants can use a magnetic sense to cruise around in magnificent style, which, in my humble opinion, is absolutely fucking fantastic.

Returning to the bedlamites. There are two dudes in particular that illustrate the fact that magnets exert a certain ineffable force upon the zanier castes of our super-organismic civilization. The first of these is shown in the video above: Mr. Harry Magnet, whose extensive pape on personal perception of magnetic fields cannot be deemed sick or otherwise, because it has not undergone rigorous peer review (but we welcome submissions).

The second example comes from Alane Jackson, the purveyor of a theory called magnetrition, which was first explained to me by a youth soccer referee who lived in a wigwam on a magnetically neutral island in the middle of an Alaskan lake. Basically, Alane’s idea is that mitochondria are magnetically charged, and that jostling our cells around causes cytoplasmic stirring, thereby promoting health. I also recommend another section of Alane’s website, titled Smoking is good.

Buried beneath all of this absolutely essential HTML is an equally intense scientific debate about the mechanisms by which real animals measure magnetic fields. So far, two basic mechanisms have been proposed:

(1) MAGNETITE. The magnetite hypothesis was inspired by the observation that some magnet-loving bacteria produce magnetite (Fe3O4) crystals that cause them to align with and cruise along the local magnetic vibe. Because magnetite has also been found in the snouts/beaks of fish and birds, it was suggested that the rotation of these crystals could be detected by nearby mechanosensory neurons. Smaller, “superparamagnetic” crystals have also been found in bird beaks. These crystals do not have a permanent magnetic moment, and therefore do not individually rotate to align with the earth’s magnetic field. However, large arrays of these superparamagnetic crystals would attract and repel each other under different magnetic field conditions, generating forces that could, in principle, be sensed by neurons.

(2) CRYPTOCHROME. This second mechanism is even bonkers-er. Some radical-pair chemical reactions can be influenced by magnetic forces—one example is the absorption of light by retinal photopigments called cryptochromes. The idea is that the ambient magnetic field would alter the rate of cryptochrome photo-isomerization, so that if a bird were gazing upward at a clear blue sky, it could actually “see” a hazy magnetic field image layered on top of the normal visual scene.

The argument surrounding these two mechanisms is best exemplified in the bird magnetoreception literature, which has been enriched in recent years by a flurry of combative pape-slinging. In one camp, (1) the Wiltschkos and their pals claim that birds use little magnetite particles in their beaks to detect magnetic fields, while in another camp (2) Henrik Mouritsen and his pals claim that magnetoreception arises in the retina, most likely through cryptochrome. (3) David Anthony Keays and his buds weighed in on side 2 of the fracas last year, when they suggested that those magnetite particles in the beak are located inside little pieces of biological irrelevance called macrophages.

Although the field of magnetoreception is confusing and controversial, one cannot help but delight in the titillation-level of the questions and the unfettered academic shit-hurling. Magnetoreception is clearly the modern El Dorado, attracting both well-funded academics and itinerant kooks. There is the important possibility that everybody is right— that birds have two independent magnetic senses and so do people, and the booty will be split evenly amongst the Professors and the online gurus. It seems much more likely to me, however, that this entire field is booby-trapped, and that all the magnet-lovers will end up stalking monkeys on a raft as the river below their feet slowly transforms into a cauldron of boiling soup.

Contributed by butthill

Blackiston DJ, & Levin M (2013). Ectopic eyes outside the head in Xenopus tadpoles provide sensory data for light-mediated learning. The Journal of experimental biology, 216 (Pt 6), 1031-40 PMID: 23447666

Our pals in the Department of Futuristic Neuroscience have recently attracted a lot of attention for a whacky pape that demonstrated that one rat could (sort of) learn to detect signals recorded from another rat’s brain. The main finding of this study, that animals connected by electrodes tend not to ignore each other, is fuzzily heartwarming, but ranks close to Feline papillomavirus on the grand scale of illness.

A much more compelling example of the brain’s dynamic ballsiness (i.e., the ability of neural circuits to learn to detect unfamiliar sensory stimuli), is described in a recent exercise in sickness by the duo of Blackiston and Levin. These pre-pubescent-frog-loving maniacs surgically removed the eyeballs of a couple hundred tadpoles, and then transplanted donor eyeballs onto different regions of the tadpole body (fanny, haunch, etc). The donor eyeballs were labeled with a fluorescent protein (RFP), so they could monitor the axons of the transplanted optic nerve. Most of the resettled eyeballs did not successfully innervate the central nervous system, but about ¼ of them managed to connect to the gut, and another ¼ innervated the spinal cord.

Blackiston and Levin then tested the population of chimeric tadpole beasts with an associative learning task that required the tadpoles to detect light in order to avoid an electric shock. A small number of the 200 freak tadpoles could learn to avoid red light, despite the fact that they did not have normal eyes. All of the successful learners had transplanted eyeballs that innervated the spinal cord.

It’s already incredible that transplanted eyeballs can successfully wire up to the spinal cord; the fact that tadpoles can then use the whimsical retina/spinal cord circuit in a behavioral task seems, at first glance, to defy the 14th amendment of biology. But we’ve known for a long time that the nervous system is able to adapt to novel inputs. For example, the visual cortex of blind people can be colonized by auditory and somatosensory inputs, allowing them to fluently read using touch (Braille) and echolocate like bats (??).

The interesting question is not whether animals can learn to detect exogenous signals (e.g., spikes transmitted from a Brazilian rat’s brain), but how the hell the nervous system pulls out such meaningful signals of hope against the noisy background of torrential chaos and despair. This is some boring biology shit. In the meantime let’s get psyched about building an exoskeleton for the World Cup and teaching Big-Dog to throw cinder blocks.

Contributed by butthill

Waters, J., Holbrook, C., Fewell, J., & Harrison, J. (2010). Allometric Scaling of Metabolism, Growth, and Activity in Whole Colonies of the Seed‐Harvester Ant Pogonomyrmex californicus. The American Naturalist, 176 (4), 501-510 DOI: 10.1086/656266

We all know the feeling: You’re lying naked in a sun-soaked field after taking a fistful of mushrooms and watching waves of energy explode through your friends’ braincases. And no matter how long you watch the trees breathe, you just can’t shake the question: “Where does my body end and the world begin?” Turns out this cosmic question has a hallowed tradition, and just about no one knows how to draw boundaries around a body.

The little guys that fuck with our best minds most royally on this distinguished issue are the social Hymenoptera (ants, bees, and wasps). Dudes have been flubberbusting long and hard about whether we should think about the bees in a hive (or people in a city, or dicks in a game of dick jenga) as a wonderful communion of separate beings or as all just the dangly bits of one MegaMan. As the disturbing old saying goes, there’s many ways to skin a cat, but what perverted shitbag wants to skin a cat a bunch of different ways? So the world was on the verge of turning its back forever on this age-old question and exploding in a supernova of its own ignorance.

That is until some brave souls (Dr. James Waters and colleagues) figured out the illest of ways to blow the lid off a part of this problem. But let me slow my roll a bit and fill in the rubbly background that makes it crystalline just why this pape is so sick:

For just about forever, we’ve known one thing about bodies for sure: how fast they use up energy (their metabolic rate) has a crazy strong relationship with how big they are. Specifically, bigger things use less energy per unit body mass than small things. So basically, one giant 100 kg rat should be using up energy much slower than 100 puny 1 kg rats, even though the grand total of rat meat is the same in both cases.

So, what these dudes did was investigate this same problem in ants, the superest of superorganisms. In an ant colony, you should be able to predict how much energy the whole colony is using based on their average body mass (e.g. you should be able to just sum up the metabolic rate of a bunch of small ants). But when they put whole colonies of these little guys in a fancy box that measures how fast they’re using up their cosmic energies, turns out they’re doing exactly not that. Specifically, their metabolic rate is what you’d predict for a single organism that had the collective mass of all the ants. And metabolic rate changes with colony size the same way it does for bigger bodies. So, in summary, ants (a) are fucking crazy, and (b) on both the mystical and physical planes appear to be working just like a single, physically integrated body does. Why? Lord knows. But this paper is opening up ways to answer that question and new ways to think about the most basic aspects of how organisms are put together. Sick.
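
The scaling claim above is just Kleiber-style allometry, metabolic rate ∝ mass^(3/4), applied at the colony level. A minimal sketch of the rat arithmetic (the normalization constant is an arbitrary placeholder, not a measured value):

```python
# Whole-body metabolic rate under 3/4-power scaling: B = a * M**0.75
a = 3.4  # placeholder normalization constant (units: W / kg**0.75)

one_big_rat = a * 100 ** 0.75               # a single 100 kg mega-rat
hundred_small_rats = 100 * (a * 1 ** 0.75)  # 100 separate 1 kg rats, same total meat

ratio = one_big_rat / hundred_small_rats
print(ratio)  # = 100**-0.25, about 0.32: the giant burns roughly 3x less energy
```

The surprise in the pape is that whole colonies track the one-big-rat curve rather than the hundred-small-rats sum, which is what you would naively get by adding up independent ants.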

Oh, and ants do this too.

Contributed by jamescrall

Krieger J, Grandy R, Drew MM, Erland S, Stensmyr MC, Harzsch S, & Hansson BS (2012). Giant Robber Crabs Monitored from Space: GPS-Based Telemetric Studies on Christmas Island (Indian Ocean). PloS one, 7 (11) PMID: 23166774

Anybody who knows me will tell you that I have a soft spot in my heart for the hard shell of our fellow crab-man. For all the land-lubbers out there, the crab is a heavily-armored, sideways-running little fellow that specializes in shoveling detritus (= trash) into its adorable little mouth with an often over-sized claw appendage. To me, the main appeal of the crab is its dignified air of feistiness. Unlike most softy animals, crabs do not like to be handled, and if you pick them up they will pinch you with all the hatred they can muster. Crabs also have beautiful brains and, as we shall see, possess a unique brand of crusty intelligence.

There are all sorts of freaky crabs out there, but the most inspiring is the absurdly proportioned giant robber crab that resides on Christmas Island in the Indian Ocean. These friggin crabs can weigh about 10 lbs, they climb trees, and they rip apart coconuts and devour them like their mike’s and ike’s (sic). This cushy crab lifestyle allows them to live to the ripe age of 60. Some folks in the recently prolific Hansson Lab at the Max Planck Institute for Chemical Ecology somehow convinced somebody to let them go to Christmas Island and study the navigational abilities of giant robber crabs. Their experimental protocol went as follows:

  1. Snatch a robber crab
  2. Glue a GPS tracking device to its carapace
  3. Sit back and watch where it goes via satellite transmission
  4. Snatch the crab again, put it in a trash bag, transport it across the island, and release it
  5. See if the crab can get back home

This pape clearly demonstrates that robber crabs live a rambling lifestyle. After spending a few days or weeks in one area, a crab will get the itch to roam, and will pick up and haul his barnacled ass from the inland rainforest to the seashore. After a spell at the shore, he’ll pack up and hitch back into the rainforest. Over time, robber crabs learn preferred routes that they repeatedly traverse throughout their long lives. Most remarkably, if you put a crab in a trash bag and haul it a mile away, it will almost immediately return to the spot where you snatched it.

Aside from the obvious conclusion that robber crabs are dynamic, intelligent beasts, this pape also establishes the robber crab as an important model system for studying what it means to live a deeply fulfilling life. 

Contributed by butthill
Druckmann, S., & Chklovskii, D. (2012). Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity. Current Biology, 22 (22), 2095-2103 DOI: 10.1016/j.cub.2012.08.058

To celebrate the dawn of December, a month of intense introspection and widespread brooding, Sick Papes brings you an exclusive soul-wrenching interview with neuroscientist and celebrity theoretician, Dr. Shaul Druckmann. Shaul’s recent pape (w/ Mitya Chklovskii) suggests a fresh answer to a beguiling question: how does the brain maintain persistent representations despite the fact that neuronal activity is constantly changing?

Personal experience tells us that the brain can maintain stable representations of images, numbers, and ideas for seconds and minutes. However, the activity of neurons in brain regions thought to be involved in working memory, such as prefrontal cortex, varies on a much faster time scale (~10-50 milliseconds). Shaul’s pape proposes a network model, called FEVER, which can maintain persistent representations even as the activity of individual neurons varies. It turns out that this network model has many features in common with the organization of real cortical networks.
SP: If I’ve got my mules in order, your model network is constructed such that the receptive field of each neuron is equivalent to a weighted sum of the receptive fields of all other neurons in the network, and the weights in this weighted sum are the strength of synaptic connections between neurons. This allows the activity of individual neurons to vary, while the output of the network remains constant. This structure seems precarious. If I were to go into your brain and cut one single synaptic connection, how would this affect stable representations in a dense FEVER network? In other words, how robust is this network to wanton destruction?
 SD: Yup, your mules are definitely in order and marching. As you say, destroying synaptic connections will momentarily throw the network off balance. However, since the representation is highly overlapping and there are many ways to represent each stimulus there would be no problem readjusting the network so as to ignore the destroyed part of the network. Given the high degree of overcompleteness that we suspect exists in cortex, there is a lot of room to recover from damage.
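
For the curious, the balance condition described above (each neuron’s feature vector equals a synaptically weighted sum of the others’) can be poked at in a few lines of numpy. This is a toy sketch, not the paper’s actual implementation; in particular, building the weights from a pseudoinverse is just one convenient way, assumed here, to satisfy the feedback condition D @ W = D:

```python
import numpy as np

rng = np.random.default_rng(1)
k, N = 2, 20                        # 2 stimulus features, 20 neurons: overcomplete
D = rng.standard_normal((k, N))     # columns = each neuron's feature (receptive-field) vector

W = np.linalg.pinv(D) @ D           # recurrent weights satisfying D @ W = D

r = rng.standard_normal(N)          # initial firing rates
r0 = r.copy()
readout0 = D @ r                    # the represented stimulus

dt = 0.01
for _ in range(2000):               # Euler steps of  dr/dt = -r + W @ r
    r = r + dt * (-r + W @ r)

print(np.allclose(D @ r, readout0))   # readout unchanged...
print(np.allclose(r, r0, atol=1e-3))  # ...even though the rates themselves drifted
```

Zeroing one entry of W breaks the D @ W = D identity slightly, which is exactly the robustness question posed here; with this much overcompleteness there are many weight settings that represent the same stimulus.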
SP: In his Tractatus, Wittgenstein proposes that, “A logical picture of facts is a thought”; in other words, that thoughts must adhere to the same logical form as things in the real world. Agree or disagree?
SD: Wittgenstein huh? I am not sure I can even properly pronounce his name, much less understand his writings. The end of my serious reading of philosophical literature timeline is more or less with Kant… Regardless, I am not sure I read the sentence the same way you do. “A logical picture of facts is a thought”. First, I like the stress on the term “picture of facts” which for me relates the thought to the many aspects of taking a picture: we select what to put in our frame and what to keep out, the lighting we throw on the objects matters a lot as well as the angle and ultimately it needs to be developed to become a real thing (okay maybe the last one was a stretch). Regarding what thoughts must adhere to, I am not sure thoughts are under control, so let’s read “thoughts” as “theories”. I strongly believe that theories must first and foremost have a sound logical structure. In one interpretation that is pretty straightforward since it just means that the math needs to check out. However, I believe that, somewhat related to that sentence, one of the most interesting things about theories is that they rearrange facts that we thought we previously knew into a new order. If that new order makes more “sense” and teaches you (the experts) new things about the facts then the theory is actually valuable. Anyhow, this sounds like something better talked about over a beer…
SP: Your pape addresses how a brain might hold onto specific representations for periods of seconds, even as the activity of individual neurons varies wildly during this period. A slightly different problem is how human thought and perception seems to occur on the time-scale of seconds, despite the fact that neural activity varies on the order of milliseconds. Do you think this is simply a matter of perception, or do evolving network dynamics across longer time scales matter?
SD: Actually our first draft discussed that briefly, but reviewers hated it since it was too speculative. I think there are two possibilities, one is that representation is constantly changing, but there is a little leprechaun working really hard in our brain all the time to make sure our conscious perception is smooth (this may sound crazy, but think change-detection blindness). The other is that the networks themselves bridge the gap between the time scale of neural activity (milliseconds) and the time scale of the world (seconds say) by mechanisms such as the one we describe in order to allow downstream circuits a smooth readout of the representation and the leprechaun to have a much more relaxed life. Which is true? I really don’t know.
SP: When you are building a model, do you start with the acronym first and work backward? Or do you build the model first and then tweak it until it fits with a catchy acronym?
SD: Given the allowed artistic freedom of basically picking any random word and letter within it for an acronym it is pretty easy to find one once the work is done. But what you suggest sounds fun: randomly thinking up an acronym, finding the most reasonable sentence you can attach to it, and seeing whether that inspires an idea worth working through.
SP: Do you think the phrase “persistent representation” accurately describes what is happening in the brain during working memory? For example, remembering a phone number requires a certain amount of active rehearsal, and is susceptible to distraction. Why must prefrontal cortex maintain a representation within itself, rather than relying on repeated structured inputs from other sensory networks?
SD: In the delayed-match-to-sample working memory task design as much as possible is done to eliminate the possibility of input driven memory (turning stimulus on only transiently, long delay periods). Therefore, that is less of an option in my opinion. More generally though, if it is an input driven memory then one has to answer the question how does whatever circuit that provides the input keep its ability to provide an input for such a long time despite the transient stimulus. Then all our explanations would need to be shifted to that area. I don’t think it has been worked out in an airtight manner that this isn’t a possibility, but I think it is less likely.
SP: In Borges’ story, “Funes the Memorious”, a young boy falls off a horse and loses his ability to forget. His life is haunted by the banal details of every moment he has ever experienced, including all the associated physical and emotional sensations. Are there certain conditions under which a FEVER network architecture could result in such a condition?
SD: Good point! In fact the way we develop the math in the first section leads to a network with an infinite integration, which is exactly Borges’ idea, sans the horse. That’s why we later add the scaling factor to the equation that allows you to have a very long, but not infinite, time constant. Otherwise, with an infinite time constant one would run into all kinds of problems such as saturation due to the integration of all the (banal) past stimuli ever encountered.
SP: One method to test the relevance of the FEVER network is to compare the synaptic structure of a cortical network to the range of eigenvalues predicted by the model. Are there any unexpected features of the eigenspectrum that you could look for in real cortical networks? You mention a few in the paper that support your model (e.g., prevalence of reciprocal connections), but are there others that would be worth looking for?
SD: In terms of synaptic reconstruction, I think the neat thing to do is to try to map the receptive field of neurons and then do EM reconstruction a la Denk. Then one option is trace down the axon of a single cell, find all the post-synaptic cells, sum up their receptive fields and see if you come up with the original neuron’s own receptive field (I guess you could do it with trans-synaptic viruses in principle too). The tricky part is that you need to know the weight of the connection, which might not be easy/possible from EM (actually everything about that idea is tricky). More generally, I think the most interesting concept to look for is the idea of coding vs. non-coding directions in activity space which our theory suggests. Not all activity patterns were created equal! I believe this has serious implications for how to interpret multi-neuron population recordings and that is something I want to take a closer look at.
SP: What is the sickest pape you have read in the last 2 months?
SD: Sickest pape: ice in Mercury’s north pole. Ice was apparently delivered by comets or asteroids! Surface temperatures of 400 celsius (not in the shade) but alien (to mercury) ice in the deep shade still survived. How cool is that?

Druckmann, S., & Chklovskii, D. (2012). Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity. Current Biology, 22 (22), 2095-2103 DOI: 10.1016/j.cub.2012.08.058

To celebrate the dawn of December, a month of intense introspection and widespread brooding, Sick Papes brings you an exclusive soul-wrenching interview with neuroscientist and celebrity theoretician, Dr. Shaul Druckmann. Shaul’s recent pape (w/ Mitya Chklovskii) suggests a fresh answer to a beguiling question: how does the brain maintain persistent representations despite the fact that neuronal activity is constantly changing?

Personal experience tells us that the brain can maintain stable representations of images, numbers, and ideas for seconds and minutes. However, the activity of neurons in brain regions thought to be involved in working memory, such as prefrontal cortex, varies on a much faster time scale (~10-50 milliseconds). Shaul’s pape proposes a network model, called FEVER, which can maintain persistent representations even as the activity of individual neurons varies. It turns out that this network model has many features in common with the organization of real cortical networks.
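The core trick fits in a few lines of numpy. Here is a minimal toy sketch (my own discrete-time paraphrase, not the paper’s actual equations): assume a hypothetical readout vector d, and build a weight matrix W satisfying d·W = d. The readout d·x then never budges, no matter how much the individual activities in x churn.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
d = rng.standard_normal(n)          # hypothetical readout ("receptive field") weights
P = np.outer(d, d) / (d @ d)        # projector onto the readout direction

# Any W of this form satisfies d @ W == d (check: d @ P = d, d @ (I - P) = 0),
# so the readout direction is an invariant of the dynamics.
B = 0.1 * rng.standard_normal((n, n))
W = P + (np.eye(n) - P) @ B

x = rng.standard_normal(n)          # initial activity pattern encodes the stimulus
x0 = x.copy()
s0 = d @ x                          # stimulus estimate at t = 0
for _ in range(50):
    x = W @ x                       # single-neuron activities change every step...
print(abs(d @ x - s0))              # ...but the readout stays put (tiny float error)
```

The 0.1 scale on B just keeps the off-readout dynamics from blowing up; any B works algebraically.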

SP: If I’ve got my mules in order, your model network is constructed such that the receptive field of each neuron is equivalent to a weighted sum of the receptive fields of all other neurons in the network, and the weights in this weighted sum are the strength of synaptic connections between neurons. This allows the activity of individual neurons to vary, while the output of the network remains constant. This structure seems precarious. If I were to go into your brain and cut one single synaptic connection, how would this affect stable representations in a dense FEVER network? In other words, how robust is this network to wanton destruction?

SD: Yup, your mules are definitely in order and marching. As you say, destroying synaptic connections will momentarily throw the network off balance. However, since the representation is highly overlapping and there are many ways to represent each stimulus there would be no problem readjusting the network so as to ignore the destroyed part of the network. Given the high degree of overcompleteness that we suspect exists in cortex, there is a lot of room to recover from damage.
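Shaul’s answer is easy to illustrate in the same toy setting (again my own sketch, with a hypothetical readout vector d and the assumed condition d·W = d, not the paper’s construction): cutting one synapse knocks the readout off in exactly one column of W, and a single compensatory weight change elsewhere in that column restores persistence.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
d = rng.standard_normal(n)                     # hypothetical readout weights
P = np.outer(d, d) / (d @ d)
W = P + (np.eye(n) - P) @ (0.1 * rng.standard_normal((n, n)))  # d @ W == d

i, j = 5, 17                                   # wantonly destroy synapse i -> j
lost = d[i] * W[i, j]
W[i, j] = 0.0
# The readout is now off in column j alone: (d @ W)[j] == d[j] - lost.
print(np.abs(d @ W - d).max())                 # momentarily off balance

k = 0                                          # any surviving input to j with d[k] != 0
W[k, j] += lost / d[k]                         # one compensatory weight change
print(np.allclose(d @ W, d))                   # persistence restored
```

Because the representation is overcomplete, there are many choices of k that work; the network has lots of room to route around damage.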

SP: In his Tractatus, Wittgenstein proposes that, “A logical picture of facts is a thought”; in other words, that thoughts must adhere to the same logical form as things in the real world. Agree or disagree?

SD: Wittgenstein huh? I am not sure I can even properly pronounce his name, much less understand his writings. My serious reading of the philosophical literature more or less ends with Kant… Regardless, I am not sure I read the sentence the same way you do. “A logical picture of facts is a thought”. First, I like the stress on the term “picture of facts”, which for me relates the thought to the many aspects of taking a picture: we select what to put in our frame and what to keep out, the lighting we throw on the objects matters a lot, as well as the angle, and ultimately it needs to be developed to become a real thing (okay, maybe the last one was a stretch). Regarding what thoughts must adhere to, I am not sure thoughts are under control, so let’s read “thoughts” as “theories”. I strongly believe that theories must first and foremost have a sound logical structure. In one interpretation that is pretty straightforward, since it just means that the math needs to check out. However, I believe that, somewhat related to that sentence, one of the most interesting things about theories is that they rearrange facts that we thought we previously knew into a new order. If that new order makes more “sense” and teaches you (the experts) new things about the facts, then the theory is actually valuable. Anyhow, this sounds like something better talked about over a beer…

SP: Your pape addresses how a brain might hold onto specific representations for periods of seconds, even as the activity of individual neurons varies wildly during this period. A slightly different problem is how human thought and perception seems to occur on the time-scale of seconds, despite the fact that neural activity varies on the order of milliseconds. Do you think this is simply a matter of perception, or do evolving network dynamics across longer time scales matter?

SD: Actually our first draft discussed that briefly, but reviewers hated it since it was too speculative. I think there are two possibilities. One is that the representation is constantly changing, but there is a little leprechaun working really hard in our brain all the time to make sure our conscious perception is smooth (this may sound crazy, but think change-detection blindness). The other is that the networks themselves bridge the gap between the time scale of neural activity (milliseconds) and the time scale of the world (seconds, say) by mechanisms such as the one we describe, in order to allow downstream circuits a smooth readout of the representation and the leprechaun a much more relaxed life. Which is true? I really don’t know.

SP: When you are building a model, do you start with the acronym first and work backward? Or do you build the model first and then tweak it until it fits with a catchy acronym?

SD: Given the allowed artistic freedom of basically picking any random word, and any letter within it, for an acronym, it is pretty easy to find one once the work is done. But what you suggest sounds fun: randomly thinking up an acronym, finding the most reasonable sentence you can attach to it, and seeing whether that inspires an idea worth working through.

SP: Do you think the phrase “persistent representation” accurately describes what is happening in the brain during working memory? For example, remembering a phone number requires a certain amount of active rehearsal, and is susceptible to distraction. Why must prefrontal cortex maintain a representation within itself, rather than relying on repeated structured inputs from other sensory networks?

SD: In the delayed-match-to-sample working memory task, the design does as much as possible to eliminate the possibility of input-driven memory (turning the stimulus on only transiently, long delay periods). Therefore, that is less of an option in my opinion. More generally, though, if it is an input-driven memory then one has to answer the question of how whatever circuit provides the input keeps its ability to do so for such a long time despite the transient stimulus. Then all our explanations would need to be shifted to that area. I don’t think it has been worked out in an airtight manner that this isn’t a possibility, but I think it is less likely.

SP: In Borges’ story, “Funes the Memorious”, a young boy falls off a horse and loses his ability to forget. His life is haunted by the banal details of every moment he has ever experienced, including all the associated physical and emotional sensations. Are there certain conditions under which a FEVER network architecture could result in such a condition?

SD: Good point! In fact the way we develop the math in the first section leads to a network with an infinite integration, which is exactly Borges’ idea, sans the horse. That’s why we later add the scaling factor to the equation that allows you to have a very long, but not infinite, time constant. Otherwise, with an infinite time constant one would run into all kinds of problems such as saturation due to the integration of all the (banal) past stimuli ever encountered.
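In a toy discrete-time version of the model (my own sketch, assuming a hypothetical readout vector d with d·W = d), the scaling factor is just a gain λ slightly below 1 on the recurrence: the readout then decays as λ^t, so the network forgets slowly instead of hoarding every banal stimulus forever.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
d = rng.standard_normal(n)                      # hypothetical readout weights
P = np.outer(d, d) / (d @ d)
W = P + (np.eye(n) - P) @ (0.1 * rng.standard_normal((n, n)))  # d @ W == d

lam = 0.999                                     # scaling factor just below 1
x = rng.standard_normal(n)
s0 = d @ x                                      # stimulus estimate at t = 0
for _ in range(1000):
    x = lam * (W @ x)                           # readout shrinks by lam each step

print((d @ x) / s0)                             # ~ lam**1000 ~ 0.37: slow forgetting
```

With λ = 1 the integration is infinite (the Funes regime); any λ < 1 gives a long but finite effective time constant of about 1/(1−λ) steps.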

SP: One method to test the relevance of the FEVER network is to compare the synaptic structure of a cortical network to the range of eigenvalues predicted by the model. Are there any unexpected features of the eigenspectrum that you could look for in real cortical networks? You mention a few in the paper that support your model (e.g., prevalence of reciprocal connections), but are there others that would be worth looking for?

SD: In terms of synaptic reconstruction, I think the neat thing to do is to try to map the receptive fields of neurons and then do EM reconstruction a la Denk. Then one option is to trace down the axon of a single cell, find all the post-synaptic cells, sum up their receptive fields, and see if you come up with the original neuron’s own receptive field (I guess you could do it with trans-synaptic viruses in principle too). The tricky part is that you need to know the weight of the connection, which might not be easy/possible from EM (actually, everything about that idea is tricky). More generally, I think the most interesting concept to look for is the idea of coding vs. non-coding directions in activity space, which our theory suggests. Not all activity patterns were created equal! I believe this has serious implications for how to interpret multi-neuron population recordings, and that is something I want to take a closer look at.

SP: What is the sickest pape you have read in the last 2 months?

SD: Sickest pape: ice at Mercury’s north pole. The ice was apparently delivered by comets or asteroids! Surface temperatures reach 400 °C (not in the shade), but alien (to Mercury) ice in the deep shade still survived. How cool is that?

Contributed by butthill

Mainen, Z., & Sejnowski, T. (1995). Reliability of spike timing in neocortical neurons Science, 268 (5216), 1503-1506 DOI: 10.1126/science.7770778

It is with great ambivalence that we recently learned that Hostess Bakeries, the corporation that coated generations of sniveling youngsters with a thin sticky film of fructose, has permanently extinguished its great roaring Twinkie ovens. Since the Great Depression, the specter of Hostess has haunted the American childhood, transforming many precocious naturalists-to-be into taffy-gargling, scrap-booking diabetic uncles. My own personal savior from the purgatory of Ding-Dong-dom was the health-food alternative 4” Table Talk Pie, which came (and continues to come) packed with one of 63 delicious fillings, including (if my memory serves) such classics as apple crumb, pumpkin marshmallow, and raspberry bourbon.

There are striking parallels between the spheres of snack foods and papes. The artisanal cheeses and home-butchered pig jowls are the niche journals, often packed with abstruse but thorough minutiae: satisfying to the connoisseur but unappealing to the masses. At the other end of the spectrum are the sticky pre-packaged pastries, high profile journals that publish short format papers designed to maximize caloric efficiency. Wrapped in sexy packaging and found in gas stations worldwide, these tempting morsels promise redemption, but often leave the consumer with a raw, yawning hunger. As in the world of gas station snacking, it is rare to find a short format paper that delivers true satisfaction.

A pape that I consider a gas station staple is “Reliability of Spike Timing in Neocortical Neurons”, by Zach Mainen and Terry Sejnowski. Published in Science in 1995, this pape uses a series of straightforward electrophysiology experiments to make the argument that the timing of action potential firing can be precise and reliable, under the right conditions.

Previous experiments (and many since) have found that the temporal precision of neuronal spikes is variable. For example, if you record from a neuron in visual cortex and repeatedly present the same visual stimulus, the number and timing of action potentials fired on each trial is highly irregular. This fact disturbed neuroscientists who were trying to understand how the brain builds a reliable representation of the world, and it led to some nifty ideas about what it really means for neurons to “encode information”.

Taking a different approach, Mainen and Sejnowski asked what happens if you provide a neuron with a more naturalistic input than a boring square-wave stimulus. Recording from cortical neurons in brain slices, they injected current pulses that mimicked typical synaptic inputs. They found that these noisy stimuli produced highly reliable firing patterns, with a precision of ~1 ms, while square current pulses resulted in more variable spiking responses. This result suggests that the variability of spiking activity arises from noise in the synaptic input, rather than noisiness of the neuron itself. Although this does not fully explain why neural responses in vivo are often variable (there are many other reasons for this), it demonstrates that neurons in the cortex at least have the ability to encode information with high temporal fidelity. Reliability of spike-timing would, in principle, allow the brain to synchronize action potentials across neurons through network oscillations, and do lots of other interesting shit.
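Reproducing the effect doesn’t even require fancy channels. The sketch below is my own toy simulation (a leaky integrate-and-fire neuron with made-up parameters, not Mainen & Sejnowski’s slice protocol): the same model neuron is driven by either a flat pulse or a frozen fluctuating current, fresh intrinsic noise is added on every trial, and we measure how far each late spike lands from the nearest spike of a reference trial. Spikes lock tightly to the frozen fluctuations but drift apart under the flat pulse.

```python
import numpy as np

def lif_trial(drive, noise, dt=1e-4, tau=0.02, v_th=1.0):
    """One leaky integrate-and-fire trial; returns spike times in seconds."""
    v, spikes = 0.0, []
    for i in range(len(drive)):
        v += (dt / tau) * (drive[i] + noise[i] - v)
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0                      # reset after each spike
    return np.array(spikes)

def mean_nearest(trials, t0=0.5):
    """Mean |dt| from each late spike (trials 1..n) to nearest spike in trial 0."""
    ref = trials[0]
    return float(np.mean([np.abs(ref - t).min()
                          for tr in trials[1:] for t in tr if t > t0]))

rng = np.random.default_rng(42)
steps, n_trials = 10000, 6                                  # 1 s at dt = 0.1 ms
dc = np.full(steps, 1.5)                                    # flat "square pulse"
frozen = 1.2 + 10.0 * np.random.default_rng(7).standard_normal(steps)  # same every trial

dc_trials = [lif_trial(dc, 0.5 * rng.standard_normal(steps)) for _ in range(n_trials)]
fr_trials = [lif_trial(frozen, 0.5 * rng.standard_normal(steps)) for _ in range(n_trials)]

# Trial-to-trial jitter: large for the flat pulse, small for the frozen noise.
print(mean_nearest(dc_trials), mean_nearest(fr_trials))
```

The intuition matches the pape: under constant drive, small timing errors accumulate across interspike intervals, while a fluctuating input repeatedly steers the membrane voltage back to the same threshold crossings.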

The reason that this humdinger works well as a Science pape is that it presents a single digestible idea that can be conveyed in a glossy 3-figure package. Unfortunately, most present-day short-format papes take the less elegant route of cramming five years’ worth of experiments into 1500 carefully chosen words. This is not the fault of the authors; for some reason, the two top journals insist that the papes they publish adhere to strict length requirements. Although some Nature and Science papes can be molded into delicious Table Talk delicacies, many others that would work well as Thanksgiving feasts end up tasting like friggin Ding-Dongs.

Contributed by butthill

Sick Papes Special on Central Pattern Generators, Part 2

Finan DS, & Barlow SM (1998). Intrinsic dynamics and mechanosensory modulation of non-nutritive sucking in human infants. Early human development, 52 (2), 181-97 PMID: 9783819

There is a rumor in the scientific community that, by entering the right combination of keywords and Boolean expressions into PubMed, one can unlock a clandestine trove of high-quality, peer-reviewed, NIH-funded pornography. Although the cipher has not yet been cracked (it’s definitely not “intrinsic+sucking+dynamics-nutritive”), this exercise recently led us to a fetching little pape of disarming sickness.

In 1998, Donald Finan and Steven Barlow embarked on the exploration of the central pattern generator that regulates baby feeding, referred to in this pape as the “suck CPG”. Since Aristotle, and perhaps even before, it has been known that babies will happily slurp on a gorgeous nipple. However, it was not known whether this behavior requires mechanical feedback from the mouth, or if it is controlled exclusively by feedforward signals from the brain. To solve this inscrutable question, Finan and Barlow endeavored to control the mechanical stimulation that babies experienced as they sucked.

The sword in the stone that enabled these experiments was a home-made device called “the actifier” (described in a separate technical report). The actifier was to pacifiers what the 1998 Arctic Cat Jag 440 Deluxe was to snowmobiles: that is, voluptuous. The actifier came fully loaded with 4 pairs of EMG electrodes (to record jaw muscle signals), a pressure transducer (to track the sucking amplitude of the baby), and a strain gauge (to measure jaw displacement). Unlike the ’98 Jag, she did not have a Big Slam-sized cup-holder with built-in koozie, but made up for it with a pneumatic cylinder coupled to a breast-like “baglet”. My impression is that, in terms of blowing the mind of stoned high school kids who have just discovered the unfettered liberty of complimentary snowmobile test drives, the actifier would have given the Arctic Cat a run for its proverbial money.

That needlessly complex description should not dissuade you from fully appreciating this instrument. Let’s start over. Basically, what these guys built is a fake boob that measures baby sucking. It’s non-nutritive because the babies don’t get fed. The experimenters pumped the baglet in various ways and observed that the baby’s sucking activity depends on the baglet’s movements.

According to my lovely girlfriend, who is not herself a mother as far as I know, but an avid eavesdropper of breastfeeding street-women, it can actually be kind of hard to get your baby “to latch on” to the nipple. Indeed, the authors argue that “non-nutritive sucking is a deceptively complex behavior”, because it is not purely controlled by a central pattern generator, but also involves some sensory feedback. We should all be thankful that babies don’t go around needlessly sucking all over town, but only feel compelled to do so when they encounter a nipple or sinusoidally inflating baglet.

The thing is, although this pape is kind of ridiculous in its discussion of the “suck CPG” and the “cortical sucking area”, it’s got some deeply embedded sickness. If President Sarah Palin were to cut all funding for pure basic research, this is exactly the type of shit we would all love to work on. These Hoosiers got to spend months, or, more likely, years, building a nifty little virtual reality nip consisting of a bunch of sensors and actuators. I feel that this is something that the Burning Man community could really get behind, and probably ruin for everybody.

Contributed by butthill