
Summer is the season of airy sabbaticals, and today Sick Papes is taking a well-deserved break from the sweaty crust of the ivory tower to visit the realm of Cronodon, one of the most intriguing and expansive concavities of the internet. Cronodon is simultaneously a comprehensive scientific resource and a mythical, futuristic universe. The site is packed with detailed, accurate explanations of complex scientific concepts, such as the physics of trees and the metaphors of string theory. The science is balanced by pure imagination: fantasy voyages through the ancient Cromlech and hand-drawn illustrations of chimerical beasts. Cronodon is a useful reference text for biology, physics, and computer science students, and a gallery that exhibits the fantastical imagination of a singular internet artist.

Cronodon is curated by an alter ego known only as Bot (whose intentionally vague bio and CV can be found here). Though Bot’s relationship with the earthly domain remains inexplicit, what is clear is that he/she has legitimate academic credentials and an abundance of creative energy. In this extensive interview, we explore the motivations for creating and maintaining Cronodon, and discuss how Bot transitioned from a trajectory of scientific research to being the conservator of an online “museum of the future”.

SP:   Can you describe the genesis of Cronodon? Which came first, the thorough academic treatises of Bio-tech or the whimsical fantasy scenarios of the Dark Side?

BOT:  I can’t actually remember which page was the first I wrote, apart from the default homepage, but when the site was only a few pages it already had elements of both. My first navigation bar contained ‘SpaceTech’ and ‘Dark Side’ but no ‘BioTech’, so I guess my initial feelings were to focus on science fiction and the ‘Dark Side’, but that soon changed.

When I was very young, well into single figures, I started imagining and drawing alien creatures and already knew that I wanted to be a scientist. Science fiction got me interested in science fact, through programs like Dr Who, Space 1999, Star Trek et al., and also vice versa: reading books about dinosaurs and prehistoric invertebrates got me thinking very early on about what other forms of life might be possible. By the age of ten, I was asking my teachers awkward questions like, ‘How do starfish breathe?’ However, during later education my creativity was almost destroyed through neglect, as I was so busy studying. Only some years after completing my degrees did it begin to return, inspired by what I had learnt and seen in nature. Some of the drawings on Dark Side are childhood drawings, and so not as artistic as I would like them to be; others are more recent.

I also remember that the graphics came first. When I developed an interesting Pov-Ray graphic, I would write an article related to it. Now it works both ways: I often write the article and then produce the graphics for it, whether in 2D or 3D.

SP:  The Cronodon bio states that, “As a neurobiologist, Bot was privileged to study insects on Earth, looking at their remarkable sensory systems. Bot worked in a top UK university disguised as an Earthling. (That was before much of the funding for such research was withdrawn by the UK Government, which is apparently incapable of valuing anything other than a narrow-minded approach to economics).”
Can you elaborate on your departure from Academia? At what point did you decide to dedicate the majority of your effort to Cronodon?

BOT:  I departed full-time academic research about seven years ago, after six years as a post-doctoral research fellow. In short, I left professional Academia because I felt it had become too controlled and too commercial. I moved to the US at this time and more-or-less immediately launched Cronodon, when somebody I know advised me to (‘no use hiding your light under a bushel,’ they later reminded me). In the first couple of years I kept my finger in academia by teaming up with academics in the US, though as a very part-time visiting academic rather than as an employee. Nevertheless some very useful research came from this partnership, so I was still publishing peer-reviewed papers even though I was not employed by an academic institution. Thus I left research gradually; indeed I still have one finger in one project as a voluntary contributor. I have also been lecturing on and off since, as a self-employed freelance consultant, and especially over the past year. Apart from the past 12 months, Cronodon has been my main focus for the past seven years. Ideally it would always be my main focus, though I enjoy lecturing too, and I need to do enough to pay the bills; it’s a matter of getting the balance right.

I left full-time research simply because I found the grant-awarding system frustrating; not so much because of the difficulty of getting grants, though applying for them is overly onerous and getting them is something of a political lottery, but because I got fed up with being told what I could and could not research by grant-awarding bodies with a set agenda.

The most productive time I had in research was when we had an open grant in the US, but even in the US, which traditionally values pure science and academia more than the UK does, such grants are few and far between. More often academics have their hands tied by those who pay them. I simply was not happy in such a controlled environment, even though I was very successful as a researcher. Don’t misunderstand me, I researched some interesting problems and did some fun experiments, but I could not go where I wanted: it seemed to be forbidden by the system! I still read copious amounts of research articles, which shows that somewhere somebody is doing the kind of research that might have kept me onboard, but after 10 years in the system I realized that it wasn’t for me.

I needed to explore where my curiosity took me, to be creative, and to have freedom of thought. I personally could not find these things in a university research setting. As a freelance consultant I get a bigger choice of what to study and teach.

SP:   Much of the scientific material, particularly that relating to biology, is far more exhaustive and accurate than the average peer-reviewed paper. Did you write these articles off the top of your head, or was there a process of meticulous research and editing?

BOT:  Thank you, I do try to be as exhaustive and accurate as I can be in the time I have. In particular I feel that I have to offer something to the reader that other sites and publications do not, and my own curiosity is insatiable. I love studying science subjects in breadth and depth and I often write articles to assist my own personal studies. Some articles I wrote more-or-less entirely from memory, although I always check facts I am unsure of against multiple sources if possible. My own notes, often collected over the years from many sources, are also sometimes directly transcribed into prose. (I have shelves full of thousands of pages of notes despite throwing most away to free up space). I write articles with different target audiences in mind. Generally I try to cater for as wide an audience as possible, and so I often include elementary explanations in otherwise advanced articles. However, some articles are written primarily as summaries of my own extensive literature searches and as these are more likely to be read by academics I take extra care to add a bibliography.

As for editing: this has not been very meticulous and I do sometimes spot typos when referring back to my own work. I still have some articles in need of proof-reading! I do check by reading back paragraphs as I type them, but it is sometimes a while before I sit down and proofread whole passages of text and this usually happens when I need to remind myself of something or when I come to update the article.

SP:   Given the unverifiability of most internet content, are you concerned that Cronodon might not be recognized as a reliable source? Why did you choose not to include comprehensive literature citations?

BOT:  There are several reasons for this. The main aim of Cronodon is to get the information out there, especially as much of the information is not easily accessible to the greater public. Indeed, much of the classical research on invertebrate zoology and botany is being forgotten. The first articles were quite basic and largely written from memory for a general audience. Inline referencing with the Harvard style is time-consuming and to some extent I have to compromise, otherwise Cronodon would contain far fewer articles.

As articles have been updated and expanded to include more details, and also as a more academic audience is being drawn to Cronodon, I have begun adding bibliographies, by which I mean a list of sources and suggested further reading, rather than inline references, since this is quicker. As science expands, more and more information becomes textbook knowledge and referencing these basic facts becomes less important. However, a couple of articles, which were written to appeal to academics, do have inline referencing. Time constraints aside, inline referencing also makes text harder to read for a lay or young student audience and I do not want to break up the flow of text in this way. I might consider using a numbering system, but for now I think bibliographies should suffice.

I do believe in giving credit where credit is due, and adding bibliographies is an ongoing task. However, Cronodon does not claim any ideas to be its own unless specifically stated – it is important that readers can distinguish between my own expert opinion and what is widely accepted by the wider scientific community. I have no desire to use Cronodon as a platform either to pass off my own opinions as facts or to claim credit for ideas which are not my own. Cronodon respects copyright and where my graphics are inspired by other sources then these sources are credited.

Some people have emailed me asking for references and I have always sent them the references requested. This has also been an increasing incentive to add bibliographies. My main incentive for adding bibliographies is to assist the interested reader with further research, and I don’t think I need to make them exhaustive. People should always consult multiple sources.

SP:   Cronodon is packed with extraordinary 3D models of animals, spaceships, and aliens. How do you build these illustrations?

BOT:  I am glad you find some of my 3D models interesting, thank you. These models are created using Pov-Ray, a free C-style graphical scripting language and ray-tracer that can be downloaded online. In other words, you type a small computer program or script, which includes special graphical commands, such as:

    sphere {
      <0,0,0>, 1
      scale <1,5,1>
      pigment { color rgbt <1,0.2,0.6,0.4> }
      normal { bumps 1 scale 0.2 }
    }

which, when a light and camera are also added, will draw an elongated purplish sphere with a bumpy texture when the image is rendered.
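(For the curious, here is what a minimal complete scene around Bot’s sphere might look like; the camera and light placements below are our own illustrative guesses, not Cronodon’s actual settings.)

    // A minimal, self-contained Pov-Ray scene: Bot's bumpy ellipsoid
    // plus the camera and light it needs to render.
    camera {
      location <0, 0, -12>   // viewpoint in front of the scene
      look_at  <0, 0, 0>     // aimed at the origin
    }
    light_source {
      <10, 20, -15>          // a single white light, up and to the side
      color rgb <1, 1, 1>
    }
    sphere {
      <0,0,0>, 1                               // unit sphere at the origin
      scale <1,5,1>                            // stretch it along the y-axis
      pigment { color rgbt <1,0.2,0.6,0.4> }   // semi-transparent purplish pink
      normal { bumps 1 scale 0.2 }             // perturb surface normals for bumps
    }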

I generally start with a mental image of what the object looks like and then construct it in Pov-Ray code, using simple shapes like spheres, cylinders, cones and boxes and more complex shapes like sphere sweeps (which are good for tentacles and other organic shapes), blobs and mathematical formulae. Additionally, there is a powerful technique called constructive solid geometry. This allows these elementary shapes to be joined and merged in various ways, or even subtracted from one another.
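(A toy example of constructive solid geometry, again ours rather than Bot’s: subtracting a slightly smaller sphere hollows out a shell, and subtracting a half-space slices it open.)

    // CSG sketch: an open, hollow dome made purely by subtraction.
    difference {
      sphere { <0,0,0>, 1.0 }    // outer surface
      sphere { <0,0,0>, 0.9 }    // subtract a smaller sphere: hollow shell
      plane { y, 0 }             // subtract everything below y = 0: open dome
      pigment { color rgb <0.9, 0.85, 0.7> }
    }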

This can require a lot of mental arithmetic, though with practice these calculations become more intuitive, almost sub-conscious. Just as we don’t consciously compute the necessary vectors when we catch a ball, so with practice I find I can position the elements of an object correctly in 3D space without performing any rigorous computations. Some of my earlier graphics were a bit inaccurate though! I sometimes also resort to the ‘Hollywood effect’ (a technique I learnt whilst on a tour around Universal Studios in LA) – making sure the graphic looks right even if it isn’t! However, these days I tend to produce more accurate 3D representations which can be viewed from any angle and still look correct; that is more time-consuming, but it became easier with practice.

I prefer this code-up approach rather than the more artistic approach of manipulating meshes like virtual putty, which many ‘professional’ 3D graphics programs use. Even though the latter can potentially generate more professional-looking graphics, I just find them awkward to use, though I may sit down and learn to use them properly one day. I am not one of these people who put weeks of time into a single graphic, trying to make it look real, like a photograph. The biological graphics I try to make somewhat realistic but also clear, to maximize their educational value. The alien graphics I like to be aesthetic: I prefer the otherworldly look of unreality that computer graphics can create over photographic accuracy. Of course, the film industry has more powerful tools: they can scan 3D objects to generate 3D shapes from triangular meshes and then alter them. I do not have that luxury.

Pov-Ray also has third-party add-on tools, like Tom Aust’s ‘Tomtree’, which the author has kindly provided for general use and which enables one to construct realistic-looking trees. This is quite difficult to use at first, as you have to set a long list of parameters which alter things like bark texture, branch angle and the number of kinks in the trunk, etc. Tomtree seems to be underused, but with patience, the time taken to understand what the parameters do, and a good background in observing trees in nature, good results are possible. The Pov-Ray community provides many useful tools, tips and tricks. Peter Houston’s ‘blobman’ is another one I have found useful.

There is often a degree of trial and error in Pov-Ray, especially when randomization is used to create more natural graphics, and this is where the artist comes in: deciding what looks aesthetic and tweaking the parameters to bias the ‘randomness’ in the desired direction. Patience is often required: the most complex graphics I have produced took all night to render, though most take seconds or minutes. A graphic may have to be rendered and tweaked many times before it is finished.
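(A sketch of that workflow, ours and not Cronodon’s actual code: Pov-Ray’s seed() and rand() give a repeatable stream of pseudo-random numbers, so a scene can be re-rendered and its ‘randomness’ nudged by changing the seed or the ranges.)

    #declare Rng = seed(1234);   // fixed seed: the same 'random' scene every render
    #declare I = 0;
    #while (I < 25)
      sphere {
        // scatter pebbles over a 6 x 6 patch of ground, radii 0.05 to 0.25
        <6*rand(Rng) - 3, 0, 6*rand(Rng) - 3>, 0.05 + 0.2*rand(Rng)
        pigment { color rgb <0.4 + 0.4*rand(Rng), 0.35, 0.3> }
      }
      #declare I = I + 1;
    #end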

SP:   What advice would you have for young scientists hoping to pursue, as you say in your mission statement, “science for science sake”?

BOT:  Too many people today are automatically ‘programmed’ to think the only science worth doing is that which society tells them is worth doing. Take cell biology, for example. The primary goal ought to be to understand how cells work. This is interesting in itself, and would also have countless spin-offs for medicine and engineering. Ideally we would let those who just want to study cells do so, and let those who want to use this information to develop treatments for cancer do so too. In reality, in order to attract funding everyone seems to claim their research is just about cancer, or similar matters of interest to medicine (i.e. to the pharmaceutical industry). Too many projects have to justify themselves in this distorted manner. This is not ‘science for science sake’. Science needs no justification. Science should not be subservient to technological industries, which have their own proper place. This increasing commandeering of science by the corporate world has, I think, also led to an exponential increase in public mistrust of science.

To do ‘science for science sake’, I would say, above all else, follow your heart and your true intellectual curiosity. Put science first, even before your own career and political standing. If you do science for any other reason then you are not doing ‘science for science sake’. If your key interest happens to be fashionable with funding bodies, then you are fortunate; if not, then leave university research as I did. Consider working for a museum. Everything that exists is worthy of study. In my opinion a true scientist puts science first. The key is to always keep an open mind and follow your own inner curiosity. Be intrinsically motivated and strive to self-actualize. Take inspiration from Mother Nature.

You may also have to: ‘Give to Caesar what is Caesar’s’! By this I mean you may have to endure financial hardship, particularly if you find the system will not let you follow your curiosity. At least this is my experience. It is a worthy trade to observe the wonders of Nature. Like the sages of old you may find yourself an ‘eremite’ wandering from abode to abode, accumulating little of financial worth, but you will learn much and see many wonders. During the most productive years of my life I was without full-time employment. I was lucky that I had people to help support me. If I had remained in full-time employment, then Cronodon would not exist! Even though it is possible to pursue hobbies whilst fully employed, I needed a long span of time to contemplate, do my own research and tap my inner psyche. Science, for me, is a spiritual and intellectual quest. Working for money, or any other mundane reason, makes it hard to find one’s true inner scientist! I have simply returned to what I enjoyed so much as a child: exploring science and the wonders of Nature and imagining life on alien worlds! I am never more myself and never more fulfilled than when I follow my own scientific and artistic interests and when I work on Cronodon.

SP:  What are your future plans for Cronodon? Are you hoping to expand content in specific areas, or alter the scope of the site?

BOT: At the moment, due to time constraints, much of my work on the site has been in improving existing content, and adding the occasional new article. I intend to fill a few key gaps. There are some major groups of invertebrates I have yet to write about. I want to add more articles on quantum physics (the ones I have added have been quite popular) as there are some interesting key topics I still want to discuss. Writing these articles also helps clarify my own understanding, so I shall write new articles on whatever topics grab my interest. Some of the older articles still need reworking, and bibliographies are waiting to be added where applicable.

Up until a year ago, when I had more spare time, I was developing my Plutonium project, in which the reader can interactively explore a region of space in a virtual spaceship. This project combines science fiction with science fact. It was nearing a functional level of completion but has been left hanging for the past year. There are some scripted worlds to visit and some hidden aliens to discover (aliens and worlds which not even the Google search engines have discovered so far, as they are on ‘hidden’ pages the reader has to find by exploring space) and I hope to add many more soon, when time permits. This project combines two of my favorite areas of science, biology and astrophysics, with science fiction, computer programming and computer art, and so seems to be the ultimate synthesis of where Cronodon has been going over the years.

Biology will likely remain the main emphasis of Cronodon, as it is my main discipline. I do enjoy writing physics essays, but I see no point in, and indeed have no time for, writing exhaustive articles on the topic complete with mathematical proofs, when so many textbooks already do this. Instead I shall target key areas that grab my interest, especially when I think I can summarize highly technical concepts and introduce them to a wider audience or provide more complete explanations. Although a trained mathematician, I usually avoid mathematical articles, though I have written some, as this expands the scope a bit too far for the time I have. However, physics textbooks often omit many steps from key proofs, expecting the reader to fill in the gaps. I may write some articles in these areas with fuller proofs and explanations, which may be of interest to students. I am planning some more mathematics and computing projects, for my own amusement, and so some articles on these topics are likely to appear.

In short, where I expand will depend on where my interest takes me and what time I have available. At the same time I want to keep the current general flavour of Cronodon, and not all my projects are likely to appear online. Cronodon displays only a small and selective sample of my projects and investigations into science. In the end time defeats us all, and I would really love to find a like-minded person to work with and maybe carry it on from me one day.

SP: Given the remarkable effort you put into the site, and the high quality of content you produce, do you feel that Cronodon receives the recognition it deserves?

BOT: Thank you for the positive feedback! Recognition and publicity are always welcome if they help more people find Cronodon and benefit from it. Whatever recognition Cronodon deserves is not for me to judge, but individuals do send me feedback occasionally, generally positive, and some people have obtained permission to use some of my graphics in their publications. Sometimes people contact me to thank me for providing information they found hard to find elsewhere. I would like more people to find the site and find something of value in it. People do sometimes contact me to say how glad they were to find my site, but also how hard it was to find! Many of the graphics do appear in Google images given the right search phrases.

It certainly has been a lot of effort, though I could do so much more if I had the time, but I still have to work for a living, albeit part-time to cater for my somewhat ‘minimalistic’ lifestyle.

I do recall the CEO of Google saying how he would like to see more people combine science with art for educational purposes and to inspire people to become more interested in science. I feel that I am trying to do just that. Maybe I should contact Google and see if there is any more they can do to help promote Cronodon, apart from compiling links with their search bots. I simply don’t have the money to pay for advertising. Still, I like to think that the curious will find my site eventually.

Those who have borrowed and acknowledged my images in their publications have brought a small amount of recognition. However, I don’t honestly expect Cronodon to get much recognition, whether it deserves it or not. Most people simply don’t have time for ‘science for science sake’. This negative attitude to pure science has seemingly increased over the years, driven I think by governments and media who want to control science for practical and economic gain and who see no value, for example, in knowing what lives in the sea. However, there are still many people who are genuinely interested in the world around them and if they find Cronodon useful, then that is enough. I hope that Cronodon can promote ‘science for science sake’ in a world which is increasingly ignorant of it. Whatever happens, I will continue to put a lot of effort into Cronodon simply because I enjoy it and I know that some other people do too.

Contributed by butthill

Lee, H., Kim, D., Remedios, R., Anthony, T., Chang, A., Madisen, L., Zeng, H., & Anderson, D. (2014). Scalable control of mounting and attack by Esr1+ neurons in the ventromedial hypothalamus. Nature DOI: 10.1038/nature13169

One of the more terrifying sub-genres of modern neuroscience is the study of animal aggression—specifically, the manipulation of brain circuits that produce unmitigated rage. And it’s no coincidence that David Anderson’s group at Caltech, the ruthless storm trooper horde of the ivory tower, has produced another sick pape that brings us one step closer to the production of ultra-furious super mercenaries.

Humans have been provoking animals for billions of years, but it wasn’t until the pioneering (and brutal) experiments of Philip Bard in the 1920s that we realized that animal rage could also be elicited by chopping out specific chunks of the brain. Bard found that surgically removing the cortex of a cat caused it to develop an angry attitude—the kitty would hiss and snarl and attack its previously beloved caretaker/experimenter. This type of behavior was called “sham rage”, because it was not directed at a specifically aggravating stimulus, but presented as a generally foul disposition toward all things great and small.

Using brain lesions in lots of cats, Bard eventually figured out that he could abolish sham rage by disconnecting the hypothalamus and the brainstem, suggesting that the hypothalamus is important for producing aggressive behavior. Walter Hess confirmed this hypothesis a couple decades later, by demonstrating that electrically stimulating the ventromedial hypothalamus is sufficient to produce sham rage. (Walter won the 1949 Nobel Prize for discovering all the crazy things that cats will do when you electrically stimulate the diencephalon.)

this cat's got the sham rage

Now, this recent pape by Hyosang Lee and colleagues has found a (somewhat) specific group of ventromedial hypothalamus neurons that are responsible for triggering unadulterated rage in the mouse. In a Nickelodeon Guts-esque feat of experimental fortitude, the authors searched for neurons that fire during mouse battles by comparing expression of an activity-dependent transcription reporter (c-Fos) to various cell-type specific markers. This led to the discovery of a population of aggression-associated hypothalamus neurons that express the estrogen receptor (Esr1). Building a Cre knock-in mouse allowed them to virally transfect Esr1+ neurons in the hypothalamus with light-gated ion channels (ChR2 and Halo). They could then manipulate the activity of Esr1+ neurons by shining light on the hypothalamus through an implanted fiber optic.

Surprisingly, the Anderson gang found that optogenetically stimulating these neurons in male mice provoked either carnal advances or attack, depending on the intensity of stimulation. At low intensities, male mice would mount other mice (of either sex), while at higher intensities, they would repeatedly sucker-punch their bewildered cage-mates. They also found that optogenetically silencing these same neurons during normal anger sessions terminated the altercation.

Together, the oppressively thorough experiments in this pape show that Esr1+ neurons play a critical role in generating behaviors of passion. Of course, we still have no idea how a single population of neurons in the hypothalamus produces such complicated behavioral sequences—it is likely that they provide a gating signal to the exceedingly complex circuits in the brainstem that produce sequences of motor behavior. Working out this hierarchy is going to be a devilish task.

Let’s conclude with some deep thinking. Although most of us have come to accept the fact that our behavior is completely controlled by a goo-bag of neurons behind our eyes, there is still something implausible about the idea that driving activity in an obscure brain circuit can provoke violence. Watching these videos, you can’t help but consider how you would respond to this same manipulation—is there any amount of “self-control” that could overcome the impulse to attack? And would you feel shame or guilt while being artificially compelled to assault an innocent stranger? These are questions that will need to be answered before we can successfully engineer the hyper-wrathful super-soldiers (post-docs) that our military (the Anderson Lab) needs and deserves.

Contributed by butthill


Seyfarth EA (2006). Julius Bernstein (1839-1917): pioneer neurobiologist and biophysicist. Biological cybernetics, 94 (1), 2-8 PMID: 16341542

We at Sick Papes do not operate under the illusion that we are the only folks that love a dank ‘ticle. But it’s rare to find a practicing researcher who also takes the time to investigate the important historical exploits of his/her field. Throughout his career, Ernst-August Seyfarth has done just that, authoring several papes about relatively obscure hero neurobiologists such as Ludwig Mauthner (1840-1894), who discovered the infamous Mauthner cell, Julia B. Platt (1857-1935), one of the first comparative neuro-embryologists, Tilly Edinger (1897-1967), an early paleoneurologist, and Johann Flögel (1834-1918), one of the first insect neuroanatomists. At the same time, Dr. Seyfarth sustained a thriving research program studying mechanosensation in spider slit sensilla, as well as spider behavior (at one point, investigating the origin of spider push-ups).

My favorite of Seyfarth’s historical narratives is the story of Julius Bernstein, a bad-ass German physiologist who was the first to describe the action potential (despite this noteworthy achievement, I had never heard of the guy). Bernstein was born in Berlin, where, after completing medical school, he returned to do a post-doc with the perpetually belligerent Hermann von Helmholtz, a pioneer of physics, physiology, philosophy, and competitive lifelong moustache consistency. While trying to best his boss with a blend of buffed bald-head/bushy beard, Bernstein developed a brilliant contraption called a differential rheotome, or “time slicer”.

The rheotome (shown above) was a sort of “ballistic galvanometer” that consisted of a turntable that opened and closed two circuits as it rotated. One circuit transiently stimulated the nerve, and the other instantaneously recorded its response. Both the stimulation and recording periods lasted only a fraction of a millisecond, depending on the speed of the turntable. The temporal offset of the stimulus and recording epochs could also be changed by adjusting the angle between the two switches. By varying these parameters and averaging over many trials, Bernstein built up a full picture of a single action potential in a frog nerve.
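(In modern notation, our gloss rather than Bernstein’s: if the turntable completes one revolution in time T and the recording contact trails the stimulating contact by an angle θ, then every revolution samples the nerve at the same fixed latency after the stimulus,

    $$ t = \frac{\theta}{360^\circ}\, T, $$

so sweeping θ across settings and averaging over many revolutions traces out the whole action potential V(t) point by point.)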

After this achievement, Seyfarth describes how Bernstein went on to formulate an eerily prescient “Membrane Theory of Electrical Potentials”, based on the predictions of the hauntingly familiar Nernst equation. It’s insane how right Bernstein was most of the time. Maybe it’s just that his stupid ideas and shitty experiments were forgotten? It’s hard to say, but Seyfarth attributes Bernstein’s stellar accomplishments to his unflappable attention to detail.

Bernstein is great, but Seyfarth deserves a boatload of credit here too, for illuminating Bernstein’s illustrious career and writing it down in this sick historical pape. Now that he’s retired, I hope that Dr. Seyfarth returns to the archives to dig up more sick historical anecdotes about neurobiologists of yore.

Contributed by butthill

Dunlap, K. and Mowrer, O. H. (1930). Head movements and eye functions of birds. J. Comp. Psychol. 11, 99–113.

People make a lot of hay out of the series of photographs by Eadweard Muybridge that Governor Leland Stanford commissioned to figure out whether all four hooves of a galloping horse are airborne at the same time. Muybridge ingeniously used an array of cameras that were triggered sequentially by the horse busting through a series of trip wires. The result of this experiment was that there was definitely a moment in the horse’s gait when all four feet were off the ground. (Stanford went on to found the university that produced Olympic water polo player Tony Azevedo; Muybridge ended up shooting a man who slept with his wife, but was acquitted on grounds of “justifiable homicide”.)

Now I don’t want to dump on the birth of cinema, but I’ve just been out watching horses gallop around all morning, and I’m pretty sure that I could reproduce Muybridge’s experiment with some study drugs and a mug of properly mulled cider. Things are always clearer in hindsight, but it didn’t take much squinting to convince me that horses are airborne every quarter second or so. Perhaps Muybridge and Stanford were half blind from living in an era before proper sunglasses, or maybe horses were faster in the 19th century because there were no clocks and all the conductors had to count continuous Mississippis to keep the trains on time.

Whether or not the Muybridge horse study was necessary, subsequent developments in rapid picture-taking have proven incredibly useful for the study of biomechanics. Today I want to discuss an early example of how the camera can be used to compensate for the inability of us humans to fully appreciate animals. 

Many people have wondered, “What’s up with pigeons bobbing their heads all crazy while they walk”, but most people are too afraid to blog about it. As Dunlap and Mowrer, the authors of today’s sick pape, put it, “The forward and apparent backward movements of the head which pigeons, chickens, and certain other fowls display while walking have been commented on by various persons orally, but seldom in print.” 

It may have occurred to you that this jerky head movement is an accident of the pigeon’s walking gait, perhaps analogous to the swinging of a human’s arms. But this is wrong. In 1930, Dunlap and Mowrer took some great photos that proved that bird head bobbing is just an illusion. In fact, it is you, the viewer, who is lurching ferociously back and forth, and the bird is perfectly motionless! That’s not actually true. What is really happening, Dunlap and Mowrer found, is that while the bird’s body is moving, the head is completely still. In other words, the head is locked in position relative to the ground, not the body. Then, between these hold phases, the head thrusts rapidly forward to a new position and locks there again. So, overall, the head is maintained in a stable position relative to the visual scene, even as the body strides along beneath it. The stroboscopic photo above, from a sick follow-up pape by B.J. Frost in 1978, illustrates this nicely.

This head stabilization has obvious benefits for vision, as it is much more difficult to analyze a visual scene when your head is shaking. Another set of experiments by B.J. Frost in the 70’s clearly demonstrated that head-bobbing is controlled by vision, as pigeons walking on treadmills don’t bob at all (because the visual scene is stationary).

The findings of Dunlap and Mowrer in 1930, and subsequent work by B.J. Frost and other enthusiastic bird biomechanists, are a superb example of how the world is incredibly fast and confusing, and only photographic magic and detailed quantification can distill truth from all the chaos.

Contributed by butthill

Limits to sustained energy intake. XVIII. Energy intake and reproductive output during lactation in Swiss mice raising small litters. 
Zhao ZJ, Song DG, Su ZC, Wei WB, Liu XB, & Speakman JR (2013).
The Journal of experimental biology, 216 (Pt 12), 2349-58 PMID: 23720804

Although binging is often attributed to weak human character, a substantial binge can also help a man get in touch with his reckless animal roots. Whether it involves a steaming heap of elk intestines or 3 seasons of Arrested Development, there are some treats that evolution has wired animals to consume beyond the point of reasonable satiety. Giving in to these deep urges is one of the many so-called flaws that the Catholic Church utterly failed to eradicate from our animal constitution.

A recent binge was triggered by the current issue of The Journal of Experimental Biology, which contained no fewer than IV sick papes about mouse lactation from Dr. John Speakman and colleagues. Further research revealed that, over the past decade, Speakman’s lab has published XVIII papers on this subject, each possessing the formulaic title Limits to sustained energy intake, etc. This linear corpus of papes is ideally suited to sautéing an entire day in thick fatty mouse milk.

Each of these papes poses the same basic question: which factors determine an animal’s physiological limits? Speakman and colleagues study this question in lactating mice, who expend a massive amount of energy to produce milk for their thirsty pups. Two initial proposals were that milk production is limited by (I) the ability of the gut to digest food or (II) the efficiency of the mammary gland itself.

Through the first X papes in the series, Speakman and his jolly giants tested these hypotheses, as well as a couple other clever theories they dreamed up. My favorite among this back-catalogue is the evocatively titled: Limits to sustained energy intake. X. Effects of fur removal on reproductive performance in laboratory mice.

In this pape, the authors test the hypothesis that energy intake is limited by the capacity of an animal to dissipate heat. They increased the ability of lactating female mice to dissipate heat by shaving them bald as porpoises. Shaved mice ate more heartily and produced more milk, which in turn increased the size of their adorable mouse children. This result contradicted the long-held views that nursing performance is limited by the efficiency of the mother mouse’s digestion and subsequent milk production.

Although these initial results suggested that there might be one or a couple limitations to energy expenditure, the most recent papes (XIV - XVIII) show that the story is actually much more complicated. Under different environmental conditions, lactation efficiency and offspring growth are limited by several overlapping factors. There are also important differences across mouse strains. Despite the lack of simplicity in the underlying biology, the narrative organization of these XVIII papes, all asking the same seemingly basic question, demonstrates an experimental doggedness that you’ve got to respect.

Contributed by butthill

SICK PAPES SPECIAL ON CONTROVERSY: PART 2

Curr Biol. 2010 Sep 14;20(17):1534-8.
The role of the magnetite-based receptors in the beak in pigeon homing.
Wiltschko R, Schiffner I, Fuhrmann P, Wiltschko W.

VERSUS

Nature. 2009 Oct 29;461(7268):1274-7.
Visual but not trigeminal mediation of magnetic compass information in a migratory bird.
Zapka M, Heyers D, Hein CM, Engels S, Schneider NL, Hans J, Weiler S, Dreyer D, Kishkinev D, Wild JM, Mouritsen H.

AND

Nature. 2012 Apr 11;484(7394):367-70.
Clusters of iron-rich cells in the upper beak of pigeons are macrophages not magnetosensitive neurons.
Treiber CD, Salzer MC, Riegler J, Edelman N, Sugar C, Breuss M, Pichler P, Cadiou H, Saunders M, Lythgoe M, Shaw J, Keays DA.

There are some scientific subjects that attract recreational bedlamites like seagulls to a coastal landfill. My favorite of these is magnetoreception: the ability of an animal to perceive an ambient magnetic field. Lots of animals can do this—birds, insects, reptiles— and some of them use the earth’s weak magnetic field to achieve extraordinary feats of navigation. For example, scientific hero Ken Lohmann has shown that sea turtles navigate thousands of miles through the horrific salty ocean in order to meet their half-shelled brethren at a specific location for an annual Bacchanalian picnic. Ken’s lab also found that if you move a spiny lobster 20 miles in any direction from its preferred hangout spot, it immediately returns directly to its headquarters using cues from the earth’s magnetic field. These and bajillions of other examples demonstrate that many of the earth’s macro-biotic inhabitants can use a magnetic sense to cruise around in magnificent style, which, in my humble opinion, is absolutely fucking fantastic.

Returning to the bedlamites. There are two dudes in particular that illustrate the fact that magnets exert a certain ineffable force upon the zanier castes of our super-organismic civilization. The first of these is shown in the video above: Mr. Harry Magnet, whose extensive pape on personal perception of magnetic fields cannot be deemed sick or otherwise, because it has not undergone rigorous peer review (but we welcome submissions).

The second example comes from Alane Jackson, the purveyor of a theory called magnetrition, which was first explained to me by a youth soccer referee who lived in a wigwam on a magnetically neutral island in the middle of an Alaskan lake. Basically, Alane’s idea is that mitochondria are magnetically charged, and that jostling our cells around causes cytoplasmic stirring, thereby promoting health. I also recommend another section of Alane’s website, titled Smoking is good.

Buried beneath all of this absolutely essential HTML is an equally intense scientific debate about the mechanisms by which real animals measure magnetic fields. So far, two basic mechanisms have been proposed:

(1) MAGNETITE. The magnetite hypothesis was inspired by the observation that some magnet-loving bacteria produce magnetite (Fe3O4) crystals that cause them to align with and cruise along the local magnetic vibe. Because magnetite has also been found in the snouts/beaks of fish and birds, it was suggested that the rotation of these crystals could be detected by mechanosensory neurons and relayed to the brain. Smaller, “superparamagnetic” crystals have also been found in bird beaks. These crystals do not have a permanent magnetic moment, and therefore do not individually rotate to align with the earth’s magnetic field. However, large arrays of these superparamagnetic crystals would attract and repel each other under different magnetic field conditions, generating forces that could, in principle, be sensed by neurons.

(2) CRYPTOCHROME. This second mechanism is even bonkers-er. Some radical-pair chemical reactions can be influenced by magnetic forces—one example is the absorption of light by retinal photopigments called cryptochromes. The idea is that the ambient magnetic field would alter the rate of cryptochrome photo-isomerization, so that if a bird were gazing upward at a clear blue sky, it could actually “see” a hazy magnetic field image layered on top of the normal visual scene.

The argument surrounding these two mechanisms is best exemplified in the bird magnetoreception literature, which has been enriched in recent years by a flurry of combative pape-slinging. In one camp, (1) the Wiltschkos and their pals claim that birds use little magnetite particles in their beaks to detect magnetic fields, while in another camp (2) Henrik Mouritsen and his pals claim that magnetoreception arises in the retina, most likely through cryptochrome. (3) David Anthony Keays and his buds weighed in on side 2 of the fracas last year, when they suggested that those magnetite particles in the beak are located inside little pieces of biological irrelevance called macrophages.

Although the field of magnetoreception is confusing and controversial, one cannot help but delight in the titillation-level of the questions and the unfettered academic shit-hurling. Magnetoreception is clearly the modern El Dorado, attracting both well-funded academics and itinerant kooks. There is the important possibility that everybody is right— that birds have two independent magnetic senses and so do people, and the booty will be split evenly amongst the Professors and the online gurus. It seems much more likely to me, however, that this entire field is booby-trapped, and that all the magnet-lovers will end up stalking monkeys on a raft as the river below their feet slowly transforms into a cauldron of boiling soup.

Contributed by butthill

Blackiston DJ, & Levin M (2013). Ectopic eyes outside the head in Xenopus tadpoles provide sensory data for light-mediated learning. The Journal of experimental biology, 216 (Pt 6), 1031-40 PMID: 23447666

Our pals in the Department of Futuristic Neuroscience have recently attracted a lot of attention for a whacky pape that demonstrated that one rat could (sort of) learn to detect signals recorded from another rat’s brain. The main finding of this study, that animals connected by electrodes tend not to ignore each other, is fuzzily heartwarming, but ranks close to Feline papillomavirus on the grand scale of illness.

A much more compelling example of the brain’s dynamic ballsiness (i.e., the ability of neural circuits to learn to detect unfamiliar sensory stimuli), is described in a recent exercise in sickness by the duo of Blackiston and Levin. These pre-pubescent-frog-loving maniacs surgically removed the eyeballs of a couple hundred tadpoles, and then transplanted donor eyeballs onto different regions of the tadpole body (fanny, haunch, etc). The donor eyeballs were labeled with a fluorescent protein (RFP), so they could monitor the axons of the transplanted optic nerve. Most of the resettled eyeballs did not successfully innervate the central nervous system, but about ¼ of them managed to connect to the gut, and another ¼ innervated the spinal cord.

Blackiston and Levin then tested the population of chimeric tadpole beasts with an associative learning task that required the tadpoles to detect light in order to avoid an electric shock. A small number of the 200 freak tadpoles could learn to avoid red light, despite the fact that they did not have normal eyes. All of the successful learners had transplanted eyeballs that innervated the spinal cord.

It’s already incredible that transplanted eyeballs can successfully wire up to the spinal cord; the fact that tadpoles can then use the whimsical retina/spinal cord circuit in a behavioral task seems, at first glance, to defy the 14th amendment of biology. But we’ve known for a long time that the nervous system is able to adapt to novel inputs. For example, the visual cortex of blind people can be colonized by auditory and somatosensory inputs, allowing them to fluently read using touch (Braille) and echolocate like bats (??).

The interesting question is not whether animals can learn to detect exogenous signals (e.g., spikes transmitted from a Brazilian rat’s brain), but how the hell the nervous system pulls out such meaningful signals of hope against the noisy background of torrential chaos and despair. This is some boring biology shit. In the meantime let’s get psyched about building an exoskeleton for the World Cup and teaching Big-Dog to throw cinder blocks.

Contributed by butthill


Waters, J., Holbrook, C., Fewell, J., & Harrison, J. (2010). Allometric Scaling of Metabolism, Growth, and Activity in Whole Colonies of the Seed-Harvester Ant Pogonomyrmex californicus. The American Naturalist, 176 (4), 501-510 DOI: 10.1086/656266

We all know the feeling: You’re lying naked in a sun-soaked field after taking a fistful of mushrooms and watching waves of energy explode through your friends’ braincases. And no matter how long you watch the trees breathe, you just can’t shake the question: “Where does my body end and the world begin?” Turns out this cosmic question has a hallowed tradition, and just about no one knows how to draw boundaries around a body.

The little guys that fuck with our best minds most royally on this distinguished issue are the social Hymenoptera (ants, bees, and wasps). Dudes have been flubberbusting long and hard about whether we should think about the bees in a hive (or people in a city, or dicks in a game of dick jenga) as a wonderful communion of separate beings or as all just the dangly bits of one MegaMan. As the disturbing old saying goes, there’s many ways to skin a cat, but what perverted shitbag wants to skin a cat a bunch of different ways? So the world was on the verge of turning its back forever on this age-old question and exploding in a supernova of its own ignorance.

That is, until some brave souls (Dr. James Waters and colleagues) figured out the illest of ways to blow the lid off a part of this problem. But let me slow my roll a bit and fill in the rubbly background that makes it crystalline just why this pape is so sick:

For just about forever, we’ve known one thing about bodies for sure: how fast they use up energy (their metabolic rate) has a crazy strong relationship with how big they are. Specifically, bigger things use less energy per unit body mass than small things. So basically, one giant 100 kg rat should be using up energy much slower than 100 puny 1 kg rats, even though the grand total of rat meat is the same in both cases.
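(To put rough numbers on it, using the textbook 3/4-power exponent of Kleiber’s law; the exact exponent is itself a long-running fight:

    $$ B = a M^{3/4} \quad\Rightarrow\quad \frac{B_{\text{one 100 kg rat}}}{B_{\text{hundred 1 kg rats}}} = \frac{a \cdot 100^{3/4}}{100 \cdot a \cdot 1^{3/4}} = 100^{-1/4} \approx 0.32, $$

so the single mega-rat burns only about a third as much energy as the hundred small rats, despite the identical grand total of rat meat.)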

So, what these dudes did was investigate this same problem in ants, the superest of superorganisms. In an ant colony, you should be able to predict how much energy the whole colony is using based on their average body mass (e.g. you should be able to just sum up the metabolic rate of a bunch of small ants). But when they put whole colonies of these little guys in a fancy box that measures how fast they’re using up their cosmic energies, turns out they’re doing exactly not that. Specifically, their metabolic rate is what you’d predict for a single organism that had the collective mass of all the ants. And metabolic rate changes with colony size the same way it does for bigger bodies. So, in summary, ants (a) are fucking crazy, and (b) on both the mystical and physical planes appear to be working just like a single, physically integrated body does. Why? Lord knows. But this paper is opening up ways to answer that question and new ways to think about the most basic aspects of how organisms are put together. Sick.

Oh, and ants do this too.

Contributed by jamescrall

Krieger J, Grandy R, Drew MM, Erland S, Stensmyr MC, Harzsch S, & Hansson BS (2012). Giant Robber Crabs Monitored from Space: GPS-Based Telemetric Studies on Christmas Island (Indian Ocean). PloS one, 7 (11) PMID: 23166774

Anybody who knows me will tell you that I have a soft spot in my heart for the hard shell of our fellow crab-man. For all the land-lubbers out there, the crab is a heavily-armored, sideways-running little fellow that specializes in shoveling detritus (= trash) into its adorable little mouth with an often over-sized claw appendage. To me, the main appeal of the crab is its dignified air of feistiness. Unlike most softy animals, crabs do not like to be handled, and if you pick them up they will pinch you with all the hatred they can muster. Crabs also have beautiful brains and, as we shall see, possess a unique brand of crusty intelligence.

There are all sorts of freaky crabs out there, but the most inspiring is the absurdly proportioned giant robber crab that resides on Christmas Island in the Indian Ocean. These friggin crabs can weigh about 10 lbs, they climb trees, and they rip apart coconuts and devour them like their mike’s and ike’s (sic). This cushy crab lifestyle allows them to live to the ripe age of 60. Some folks in the recently prolific Hansson Lab at the Max Planck Institute for Chemical Ecology somehow convinced somebody to let them go to Christmas Island and study the navigational abilities of giant robber crabs. Their experimental protocol went as follows:

  1. Snatch a robber crab
  2. Glue a GPS tracking device to its carapace
  3. Sit back and watch where it goes via satellite transmission
  4. Snatch the crab again, put it in a trash bag, transport it across the island, and release it
  5. See if the crab can get back home

This pape clearly demonstrates that robber crabs live a rambling lifestyle. After spending a few days or weeks in one area, a crab will get the itch to roam, and will pick up and haul his barnacled ass from the inland rainforest to the seashore. After a spell at the shore, he’ll pack up and hitch back into the rainforest. Over time, robber crabs learn preferred routes that they repeatedly traverse throughout their long lives. Most remarkably, if you put a crab in a trash bag and haul it a mile away, it will almost immediately return to the spot where you snatched it.

Aside from the obvious conclusion that robber crabs are dynamic, intelligent beasts, this pape also establishes the robber crab as an important model system for studying what it means to live a deeply fulfilling life. 

Contributed by butthill
Druckmann, S., & Chklovskii, D. (2012). Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity. Current Biology, 22 (22), 2095-2103 DOI: 10.1016/j.cub.2012.08.058

To celebrate the dawn of December, a month of intense introspection and widespread brooding, Sick Papes brings you an exclusive soul-wrenching interview with neuroscientist and celebrity theoretician, Dr. Shaul Druckmann. Shaul’s recent pape (w/ Mitya Chklovskii) suggests a fresh answer to a beguiling question: how does the brain maintain persistent representations despite the fact that neuronal activity is constantly changing?

Personal experience tells us that the brain can maintain stable representations of images, numbers, and ideas for seconds and minutes. However, the activity of neurons in brain regions thought to be involved in working memory, such as prefrontal cortex, varies on a much faster time scale (~10-50 milliseconds). Shaul’s pape proposes a network model, called FEVER, which can maintain persistent representations even as the activity of individual neurons varies. It turns out that this network model has many features in common with the organization of real cortical networks.
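(In rough outline, our paraphrase with our own notation: write the firing rates as r(t), the recurrent weights as W, and the readout as x̂ = Dr, where the columns of D are the neurons’ receptive fields. The linear dynamics then give

    $$ \tau \dot{r} = -r + W r \quad\Rightarrow\quad \frac{d\hat{x}}{dt} = \frac{1}{\tau} D (W - I) r = 0 \quad \text{whenever } DW = D, $$

so if each receptive field is a weighted sum of the others (the condition DW = D), the activity r(t) can drift along the null directions of D while the represented value x̂ stays perfectly still.)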
SP: If I’ve got my mules in order, your model network is constructed such that the receptive field of each neuron is equivalent to a weighted sum of the receptive fields of all other neurons in the network, and the weights in this weighted sum are the strength of synaptic connections between neurons. This allows the activity of individual neurons to vary, while the output of the network remains constant. This structure seems precarious. If I were to go into your brain and cut one single synaptic connection, how would this affect stable representations in a dense FEVER network? In other words, how robust is this network to wanton destruction?

SD: Yup, your mules are definitely in order and marching. As you say, destroying synaptic connections will momentarily throw the network off balance. However, since the representation is highly overlapping and there are many ways to represent each stimulus, there would be no problem readjusting the network so as to ignore the destroyed part of the network. Given the high degree of overcompleteness that we suspect exists in cortex, there is a lot of room to recover from damage.

SP: In his Tractatus, Wittgenstein proposes that, “A logical picture of facts is a thought”; in other words, that thoughts must adhere to the same logical form as things in the real world. Agree or disagree?

SD: Wittgenstein huh? I am not sure I can even properly pronounce his name, much less understand his writings. My timeline of serious reading of philosophical literature ends more or less with Kant… Regardless, I am not sure I read the sentence the same way you do. “A logical picture of facts is a thought”. First, I like the stress on the term “picture of facts”, which for me relates the thought to the many aspects of taking a picture: we select what to put in our frame and what to keep out, the lighting we throw on the objects matters a lot as well as the angle, and ultimately it needs to be developed to become a real thing (okay, maybe the last one was a stretch). Regarding what thoughts must adhere to, I am not sure thoughts are under control, so let’s read “thoughts” as “theories”. I strongly believe that theories must first and foremost have a sound logical structure. In one interpretation that is pretty straightforward, since it just means that the math needs to check out. However, I believe that, somewhat related to that sentence, one of the most interesting things about theories is that they rearrange facts that we thought we previously knew into a new order. If that new order makes more “sense” and teaches you (the experts) new things about the facts, then the theory is actually valuable. Anyhow, this sounds like something better talked about over a beer…

SP: Your pape addresses how a brain might hold onto specific representations for periods of seconds, even as the activity of individual neurons varies wildly during this period. A slightly different problem is how human thought and perception seems to occur on the time-scale of seconds, despite the fact that neural activity varies on the order of milliseconds. Do you think this is simply a matter of perception, or do evolving network dynamics across longer time scales matter?

SD: Actually our first draft discussed that briefly, but reviewers hated it since it was too speculative. I think there are two possibilities. One is that the representation is constantly changing, but there is a little leprechaun working really hard in our brain all the time to make sure our conscious perception is smooth (this may sound crazy, but think change-detection blindness). The other is that the networks themselves bridge the gap between the time scale of neural activity (milliseconds) and the time scale of the world (seconds, say) by mechanisms such as the one we describe, in order to allow downstream circuits a smooth readout of the representation and the leprechaun to have a much more relaxed life. Which is true? I really don’t know.

SP: When you are building a model, do you start with the acronym first and work backward? Or do you build the model first and then tweak it until it fits with a catchy acronym?

SD: Given the allowed artistic freedom of basically picking any random word and any letter within it for an acronym, it is pretty easy to find one once the work is done. But what you suggest sounds fun: randomly thinking up an acronym, finding the most reasonable sentence you can attach to it, and seeing whether that inspires an idea worth working through.

SP: Do you think the phrase “persistent representation” accurately describes what is happening in the brain during working memory? For example, remembering a phone number requires a certain amount of active rehearsal, and is susceptible to distraction. Why must prefrontal cortex maintain a representation within itself, rather than relying on repeated structured inputs from other sensory networks?

SD: In the delayed-match-to-sample working memory task design, as much as possible is done to eliminate the possibility of input-driven memory (turning the stimulus on only transiently, long delay periods). Therefore, that is less of an option in my opinion. More generally though, if it is an input-driven memory then one has to answer the question of how whatever circuit provides the input keeps its ability to provide that input for such a long time despite the transient stimulus. Then all our explanations would need to be shifted to that area. I don’t think it has been worked out in an airtight manner that this isn’t a possibility, but I think it is less likely.

SP: In Borges’ story, “Funes the Memorious”, a young boy falls off a horse and loses his ability to forget. His life is haunted by the banal details of every moment he has ever experienced, including all the associated physical and emotional sensations. Are there certain conditions under which a FEVER network architecture could result in such a condition?

SD: Good point! In fact the way we develop the math in the first section leads to a network with an infinite integration, which is exactly Borges’ idea, sans the horse. That’s why we later add the scaling factor to the equation that allows you to have a very long, but not infinite, time constant. Otherwise, with an infinite time constant one would run into all kinds of problems, such as saturation due to the integration of all the (banal) past stimuli ever encountered.

SP: One method to test the relevance of the FEVER network is to compare the synaptic structure of a cortical network to the range of eigenvalues predicted by the model. Are there any unexpected features of the eigenspectrum that you could look for in real cortical networks? You mention a few in the paper that support your model (e.g., prevalence of reciprocal connections), but are there others that would be worth looking for?

SD: In terms of synaptic reconstruction, I think the neat thing to do is to try to map the receptive field of neurons and then do EM reconstruction a la Denk. Then one option is to trace down the axon of a single cell, find all the post-synaptic cells, sum up their receptive fields and see if you come up with the original neuron’s own receptive field (I guess you could do it with trans-synaptic viruses in principle too). The tricky part is that you need to know the weight of the connection, which might not be easy/possible from EM (actually, everything about that idea is tricky). More generally, I think the most interesting concept to look for is the idea of coding vs. non-coding directions in activity space, which our theory suggests. Not all activity patterns were created equal! I believe this has serious implications for how to interpret multi-neuron population recordings, and that is something I want to take a closer look at.
SP: What is the sickest pape you have read in the last 2 months?
SD: Sickest pape: ice in Mercury&#8217;s north pole. Ice was apparently delivered by comets or asteroids! Surface temperatures of 400 celsius (not in the shade) but alien (to mercury) ice in the deep shade still survived. How cool is that?

Druckmann, S., & Chklovskii, D. (2012). Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity. Current Biology, 22 (22), 2095-2103 DOI: 10.1016/j.cub.2012.08.058

To celebrate the dawn of December, a month of intense introspection and widespread brooding, Sick Papes brings you an exclusive soul-wrenching interview with neuroscientist and celebrity theoretician, Dr. Shaul Druckmann. Shaul’s recent pape (w/ Mitya Chklovskii) suggests a fresh answer to a beguiling question: how does the brain maintain persistent representations despite the fact that neuronal activity is constantly changing?

Personal experience tells us that the brain can maintain stable representations of images, numbers, and ideas for seconds and minutes. However, the activity of neurons in brain regions thought to be involved in working memory, such as prefrontal cortex, varies on a much faster time scale (~10–50 milliseconds). Shaul’s pape proposes a network model, called FEVER, which can maintain persistent representations even as the activity of individual neurons varies. It turns out that this network model has many features in common with the organization of real cortical networks.

SP: If I’ve got my mules in order, your model network is constructed such that the receptive field of each neuron is equivalent to a weighted sum of the receptive fields of all other neurons in the network, and the weights in this weighted sum are the strengths of the synaptic connections between neurons. This allows the activity of individual neurons to vary, while the output of the network remains constant. This structure seems precarious. If I were to go into your brain and cut one single synaptic connection, how would this affect stable representations in a dense FEVER network? In other words, how robust is this network to wanton destruction?

SD: Yup, your mules are definitely in order and marching. As you say, destroying synaptic connections will momentarily throw the network off balance. However, since the representation is highly overlapping and there are many ways to represent each stimulus, there would be no problem readjusting the network to ignore the destroyed part. Given the high degree of overcompleteness that we suspect exists in cortex, there is a lot of room to recover from damage.
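For readers who want to kick the tires, here is a minimal toy sketch of the idea (our construction for illustration, not Shaul’s code; the network size, time constants, and the particular recipe for building W are all made up). The readout x = D @ s stays put while the activity churns, because D @ W = D makes d(Ds)/dt = D(W − I)s/τ = 0; then we snip one synapse and watch what happens.

```python
# Toy FEVER-style linear rate network (illustrative construction only).
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 3                        # neurons, stimulus dimensions
tau, dt, steps = 0.01, 0.001, 5000   # 10 ms time constant, 5 s simulated

D = rng.standard_normal((K, N))      # feature (receptive-field) vectors
P = np.linalg.pinv(D) @ D            # projector onto the coding subspace

# Any W = P + (I - P) R satisfies D @ W = D; shrinking R keeps the
# non-coding modes stable (their eigenvalues sit inside the unit disk).
R = rng.standard_normal((N, N))
R *= 0.9 / np.linalg.norm(R, 2)
W = P + (np.eye(N) - P) @ R

s = rng.standard_normal(N)           # activity left by a transient stimulus
s0, x0 = s.copy(), D @ s             # x0 is the represented stimulus
for _ in range(steps):
    s = s + (dt / tau) * (-s + W @ s)

print("activity change:", np.linalg.norm(s - s0) / np.linalg.norm(s0))      # large
print("readout drift:  ", np.linalg.norm(D @ s - x0) / np.linalg.norm(x0))  # ~0

# Wanton destruction: cut a single synapse and repeat. One deleted
# weight out of N*N only weakly perturbs the coding subspace, so the
# readout drifts slowly rather than collapsing.
W[3, 7] = 0.0
s = rng.standard_normal(N)
x0 = D @ s
for _ in range(steps):
    s = s + (dt / tau) * (-s + W @ s)
print("drift, one synapse cut:", np.linalg.norm(D @ s - x0) / np.linalg.norm(x0))
```

The overcompleteness Shaul mentions is visible in the construction itself: with many more neurons than stimulus dimensions, there are infinitely many W satisfying D @ W = D, so a damaged network can in principle relearn its way to a nearby intact solution.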

SP: In his Tractatus, Wittgenstein proposes that “A logical picture of facts is a thought”; in other words, that thoughts must adhere to the same logical form as things in the real world. Agree or disagree?

SD: Wittgenstein huh? I am not sure I can even properly pronounce his name, much less understand his writings. My timeline of serious philosophical reading more or less ends with Kant… Regardless, I am not sure I read the sentence the same way you do. “A logical picture of facts is a thought”. First, I like the stress on the term “picture of facts”, which for me relates the thought to the many aspects of taking a picture: we select what to put in our frame and what to keep out, the lighting we throw on the objects matters a lot, as does the angle, and ultimately it needs to be developed to become a real thing (okay, maybe the last one was a stretch). Regarding what thoughts must adhere to, I am not sure thoughts are under control, so let’s read “thoughts” as “theories”. I strongly believe that theories must first and foremost have a sound logical structure. In one interpretation that is pretty straightforward, since it just means that the math needs to check out. However, I believe that, somewhat related to that sentence, one of the most interesting things about theories is that they rearrange facts we thought we knew into a new order. If that new order makes more “sense” and teaches you (the experts) new things about the facts, then the theory is actually valuable. Anyhow, this sounds like something better talked about over a beer…

SP: Your pape addresses how a brain might hold onto specific representations for periods of seconds, even as the activity of individual neurons varies wildly during this period. A slightly different problem is how human thought and perception seem to occur on the time-scale of seconds, despite the fact that neural activity varies on the order of milliseconds. Do you think this is simply a matter of perception, or do evolving network dynamics across longer time scales matter?

SD: Actually our first draft discussed that briefly, but reviewers hated it since it was too speculative. I think there are two possibilities. One is that the representation is constantly changing, but there is a little leprechaun working really hard in our brain all the time to make sure our conscious perception is smooth (this may sound crazy, but think change-detection blindness). The other is that the networks themselves bridge the gap between the time scale of neural activity (milliseconds) and the time scale of the world (say, seconds) by mechanisms such as the one we describe, in order to allow downstream circuits a smooth readout of the representation and the leprechaun a much more relaxed life. Which is true? I really don’t know.
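The second, mechanism-flavored option is just standard linear-systems arithmetic (our gloss, not a quote from the pape): slow network modes can stretch a fast membrane time constant into behavioral time.

```latex
% For \tau \dot{s} = -s + W s, a mode of W with eigenvalue \lambda
% evolves as e^{-(1-\lambda)t/\tau}, i.e. with effective time constant
\tau_{\text{eff}} = \frac{\tau}{1 - \lambda}
% so \tau = 10\,\text{ms} and \lambda = 0.999 give
% \tau_{\text{eff}} = 10\,\text{s}: seconds-scale persistence
% from millisecond-scale parts.
```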

SP: When you are building a model, do you start with the acronym first and work backward? Or do you build the model first and then tweak it until it fits with a catchy acronym?

SD: Given the artistic freedom of basically picking any random word and any letter within it for an acronym, it is pretty easy to find one once the work is done. But what you suggest sounds fun: randomly thinking up an acronym, finding the most reasonable sentence you can attach to it, and seeing whether that inspires an idea worth working through.

SP: Do you think the phrase “persistent representation” accurately describes what is happening in the brain during working memory? For example, remembering a phone number requires a certain amount of active rehearsal, and is susceptible to distraction. Why must prefrontal cortex maintain a representation within itself, rather than relying on repeated structured inputs from other sensory networks?

SD: In the delayed-match-to-sample working memory task, the design does as much as possible to eliminate the possibility of input-driven memory (the stimulus is turned on only transiently, the delay periods are long). Therefore, that is less of an option in my opinion. More generally though, if it is an input-driven memory, then one has to answer the question of how whatever circuit provides the input keeps its ability to provide it for such a long time despite the transient stimulus. Then all our explanations would need to be shifted to that area. I don’t think it has been shown in an airtight manner that this isn’t a possibility, but I think it is less likely.

SP: In Borges’ story, “Funes the Memorious”, a young boy falls off a horse and loses his ability to forget. His life is haunted by the banal details of every moment he has ever experienced, including all the associated physical and emotional sensations. Are there certain conditions under which a FEVER network architecture could result in such a condition?

SD: Good point! In fact, the way we develop the math in the first section leads to a network with infinite integration, which is exactly Borges’ idea, sans the horse. That’s why we later add the scaling factor to the equation, which allows you to have a very long, but not infinite, time constant. Otherwise, with an infinite time constant, one would run into all kinds of problems, such as saturation due to the integration of all the (banal) past stimuli ever encountered.
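One way to write the scaling-factor fix in a single line (our notation, assuming the same readout condition DW = D as above): shrink the recurrence by a factor β slightly below 1, and the perfect integrator becomes a leaky one.

```latex
% With \tau \dot{s} = -s + \beta W s and D W = D, the readout obeys
\hat{x}(t) = D\, s(t) = \hat{x}(0)\, e^{-(1-\beta)\, t/\tau}
% a long-but-finite memory with time constant \tau/(1-\beta):
% banal old stimuli fade out instead of saturating the network.
```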

SP: One method to test the relevance of the FEVER network is to compare the synaptic structure of a cortical network to the range of eigenvalues predicted by the model. Are there any unexpected features of the eigenspectrum that you could look for in real cortical networks? You mention a few in the paper that support your model (e.g., prevalence of reciprocal connections), but are there others that would be worth looking for?

SD: In terms of synaptic reconstruction, I think the neat thing to do is to try to map the receptive fields of neurons and then do EM reconstruction à la Denk. Then one option is to trace down the axon of a single cell, find all the post-synaptic cells, sum up their receptive fields, and see if you come up with the original neuron’s own receptive field (I guess you could do it with trans-synaptic viruses in principle too). The tricky part is that you need to know the weight of each connection, which might not be easy/possible from EM (actually, everything about that idea is tricky). More generally, I think the most interesting concept to look for is the idea of coding vs. non-coding directions in activity space, which our theory suggests. Not all activity patterns were created equal! I believe this has serious implications for how to interpret multi-neuron population recordings, and that is something I want to take a closer look at.
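In toy form, the axon-tracing test Shaul describes is one line of linear algebra. A hypothetical in-silico version, reusing the synthetic D, W, and rng from the sketch above (real EM would have to supply the weights, which is exactly the tricky part he flags):

```python
# Hypothetical check of the axon-tracing test on the toy network above.
# With the convention s_i <- sum_j W[i, j] s_j, the condition D @ W = D
# says each neuron's feature vector equals the weighted sum of the
# feature vectors of its postsynaptic targets.
import numpy as np

j = 5                                  # a neuron whose synapses are intact
targets = np.nonzero(W[:, j])[0]       # cells that neuron j synapses onto
recon = D[:, targets] @ W[targets, j]  # weighted sum of their feature vectors
print(np.allclose(recon, D[:, j]))     # True (neuron 7 would now fail a bit,
                                       # since we cut one of its synapses)

# Coding vs. non-coding directions in activity space: patterns along the
# rows of D move the readout; patterns in the null space of D do not.
N = W.shape[0]
u = np.linalg.pinv(D) @ np.ones(D.shape[0])                       # coding
v = (np.eye(N) - np.linalg.pinv(D) @ D) @ rng.standard_normal(N)  # non-coding
print(np.linalg.norm(D @ u), np.linalg.norm(D @ v))               # order 1 vs. ~0
```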

SP: What is the sickest pape you have read in the last 2 months?

SD: Sickest pape: ice at Mercury’s north pole. The ice was apparently delivered by comets or asteroids! Surface temperatures reach 400 °C (not in the shade), yet alien (to Mercury) ice in the deep shade still survived. How cool is that?

Druckmann, S., & Chklovskii, D. (2012). Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity. Current Biology, 22(22), 2095–2103. DOI: 10.1016/j.cub.2012.08.058

Contributed by butthill