Sick Papes

Oct 03

Rich, P., Liaw, H., & Lee, A. (2014). Large environments reveal the statistical structure governing hippocampal representations. Science, 345(6198), 814–817. DOI: 10.1126/science.1255635
Have you ever felt lost and alone? If so, this experience probably involved your hippocampus, a seahorse-shaped structure in the middle of the brain. About 40 years ago, scientists with electrodes discovered that some neurons in the hippocampus fire each time an animal passes through a particular location in its environment. These neurons, called place cells, are thought to function as a cognitive map that enables navigation and spatial memory.
Place cells are typically studied by recording from the hippocampus of a rodent navigating through a laboratory maze. But in the real world, rats can cover a lot of ground. For example, many rats leave their filthy sewer bunkers every night to enter the cozy bedrooms of innocent sleeping children.
In a recent paper, esteemed neuroscientist Dr. Dylan Rich and colleagues investigated how place cells encode very large spaces. Specifically, they asked: how are new place cells recruited to the network as a rat explores a truly giant maze? Today, we huddle closely with Dr. Rich to whisper about his findings.
SP:  Do you remember when and how you first learned about hippocampal place cells? Did your interest in the hippocampus burgeon immediately, or ripen like a stubborn avocado?
DR: Not exactly, only that it was during my undergraduate studies. I was on a neuroscience degree course that included quite a lot of cellular and molecular neuroscience. I was interested in the more cognitive aspects, but I wanted to still be studying the brain per se rather than psychology. The hippocampus and place cells seemed to offer the higher-level cognitive aspects while still being very much grounded in the physiology, in neurons and spikes. Some of the lecturers who taught us were rodent hippocampal researchers, so I was exposed to a lot of the experimental and computational work on the hippocampus. It was also around that time that grid cells had just been discovered, so I think some of the excitement in the field trickled down to us undergraduates.
SP:  The experiments in this paper required you to record from the brains of rats running through huge mazes, up to 48 m long. Where and how did you set this up? 
DR: When we were planning this experiment, none of our normal lab rooms were large enough. One evening we wandered around prospective parts of the building, the back service corridors and such, asking “can we do it here?”, “where would we mount the cameras?” et cetera. Eventually we found a suitable space, the cage washing room in the animal facility. The trouble was that it was in full use during the week for cleaning animal cages; we were only able to use it on weekends. So, when we had an animal ready to go, we would have to set up the whole experiment just for the weekend. Starting Friday evening, we’d move everything down and set it all up, recording rig, cameras, cables, maze pieces, the lot. We ran the experiment all weekend then cleared the room in time for the start of work on Monday. Since we couldn’t really plan exactly how things would go, initially there were a lot of trips to the hardware store. We’d have some problem, and think “pipe clamps, we need pipe clamps!”, then go to the store to get pipe clamps, and by the time we got back there would be something else we needed! After the first few times we got the setup and teardown running smoothly, and then we were able to concentrate on running the experiment well.
SP:  By looking at place cell recruitment in these big mazes, you discovered that the formation of place fields is an independent Poisson process with its rate drawn from a gamma distribution. In describing the gamma-Poisson process, you cite a book called On Random Processes and Their Application to Sickness and Accident Statistics by O. Lundberg. What’s up with that book?
DR: Ove Lundberg was a Swedish mathematician who did some important work on Poisson processes. The major motivation of his work was in its application to the insurance industry. This branch of mathematics, known as risk theory, had actually been pioneered by his father Filip Lundberg. 
Prior to Ove Lundberg, in the 1920s, the British statisticians Major Greenwood and Udny Yule had developed the gamma-Poisson model in trying to understand the distribution of accidents amongst factory workers. They concluded that preexisting differences offered the best explanation for the differences they saw in the number of accidents each individual suffered. At around the same time, the Hungarian mathematician George Pólya was working on different mathematical models that gave similar distributions. In these models, individuals would all start from an identical baseline, and the occurrence of an accident (or more generally an event) would increase the future likelihood of an accident for that individual. These types of model, which were framed in terms of drawing balls from an urn, became known as cumulative advantage or rich-get-richer models. Ove Lundberg was the first to show that these two mechanisms, preexisting differences and cumulative advantage, could actually give rise to the same stochastic process, even though the starting assumptions were very different.
I had been struggling with simulations of models in an attempt to distinguish between preexisting differences and a cumulative advantage mechanism in our data. When I found out that someone had solved the exact same problem, it was a crucial piece of information in interpreting our data.
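Lundberg's equivalence is easy to see in a toy simulation (this is an illustration, not the paper's actual analysis; the parameter values below are made up): a gamma-Poisson mixture (preexisting differences) and a Pólya-style pure-birth process (cumulative advantage) yield the same negative binomial count distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
r, theta = 2.0, 1.5          # hypothetical gamma shape and scale
n = 50_000                   # individuals simulated per mechanism

# Mechanism 1: preexisting differences (gamma-Poisson mixture).
# Each individual gets a fixed accident propensity, drawn once.
rates = rng.gamma(shape=r, scale=theta, size=n)
mixture = rng.poisson(rates)

# Mechanism 2: cumulative advantage (Polya-type pure-birth process).
# Everyone starts identical; each event raises the future event rate.
# With event rate beta*(r + k) after k events, run for unit time with
# beta = log(1 + theta), the count follows the same negative binomial.
beta = np.log(1.0 + theta)
polya = np.empty(n, dtype=int)
for i in range(n):
    t, k = 0.0, 0
    while True:
        t += rng.exponential(1.0 / (beta * (r + k)))  # waiting time
        if t > 1.0:
            break
        k += 1
    polya[i] = k

# Both should match the negative binomial moments:
# mean = r*theta = 3.0, variance = r*theta*(1 + theta) = 7.5
print(mixture.mean(), mixture.var(), polya.mean(), polya.var())
```

Despite their very different mechanics, the two samples are statistically indistinguishable, which is exactly why the observed distribution alone could not separate the hypotheses.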
The reference itself was actually a bit of a pain. Since it was a book, the journal wanted the whole title to be in the shortened reference list, which counted against our word limit.  So we were always joking about how much it really needed to be in, because we could get another 10 words or so if we got rid of it. But it was an important piece of the puzzle, so it always stayed in. 
SP:  We are conducting this interview within a helicopter zooming across middle America. Do your results tell us anything about how place cells might represent extremely large spaces, such as the view from this helicopter space shuttle? Does your model predict an upper limit on spatial representation in the brain?
DR: This is an interesting implication of our data; it gives us an idea of what strategies might be employed in the hippocampus to allow the representation of very large spaces. The logarithmic-like recruitment of cells suggests that the hippocampal representation is able to encode a large range of environmental sizes. The variation in each cell’s propensity may also be a way in which the hippocampus represents different environmental scales; for instance, cells with few fields, spaced far apart, allow a coarse localization of position, whereas those with more fields give the fine spatial resolution once you know roughly where you are.
From the model, we can predict the behavior of place cells in environments much larger than we examined. For instance, we can extrapolate the recruitment of cells as a function of environmental size; we can read off where it hits 95% or 99%. Would that length be the “upper limit” of the representation? Not necessarily. Even with almost all cells recruited as place cells, there are a huge number of locations that could be coded uniquely across the population using certain types of combinatorial code. One would have to specify a particular code in order to begin to think about an estimate of capacity in this case.
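The extrapolation Dr. Rich describes can be written down directly. If each cell lays down fields as a Poisson process along the track, with a per-meter rate drawn from a gamma distribution, the fraction of cells recruited (at least one field) in a maze of length L has a closed form that can be inverted to read off the 95% or 99% length. The shape and scale values below are invented for illustration, not the paper's fitted parameters:

```python
# Gamma-Poisson recruitment curve: a cell's field rate (fields per
# meter) is lambda ~ Gamma(r, theta); over a maze of length L the
# chance of zero fields is E[exp(-lambda*L)] = (1 + theta*L)**(-r).
r, theta = 0.6, 0.05   # hypothetical shape and scale, NOT fitted values

def frac_recruited(L):
    """Expected fraction of cells with at least one place field."""
    return 1.0 - (1.0 + theta * L) ** (-r)

def length_for_fraction(f):
    """Invert the curve: maze length at which a fraction f is recruited."""
    return ((1.0 - f) ** (-1.0 / r) - 1.0) / theta

for f in (0.95, 0.99):
    print(f, length_for_fraction(f), "m")
```

For small shape parameters the curve rises roughly logarithmically over a wide range of L, and the 99% length sits far beyond the 95% length, consistent with the long-tailed propensities described in the interview.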
SP: In your experiment, the rat is entering an entirely novel environment, and so does not know how large the maze is going to be. But if he did have some prior expectation about the scale or shape of the maze, how do you think this would affect the rate of place cell recruitment?
DR: It’s definitely possible that an expectation of environment size may modulate some aspect of the place field representation; it makes sense for the hippocampus to optimize coding given the available information it has. However, an interesting aspect of the model we describe is that it seems to be able to cope quite well with environments of different sizes even with fixed parameters. This may be a mechanism used to cope with the inherent uncertainty in the world; you never know exactly how large or small a place will be, so you have to account for either possibility.
SP: Sensory systems also use logarithmic compression to efficiently compress the broad range of input signals into the limited bandwidth of neurons. For example, the retina encodes luminance signals across nine orders of magnitude. Do you think that the logarithmic recruitment of place cells might arise from logarithmic compression in the sensory circuits that provide contextual signals to the hippocampus?
DR: Hippocampal coding is invariant to the specifics of the sensory input; place cells do not respond in a simple manner to sensory stimuli, but rather seem to integrate a wide array of information in order to respond to place per se. Given this higher-level representation and the number of processing steps potentially involved, I’m not sure that something such as stimulus intensity (which is coded logarithmically) would directly relate to the size of the environment explored (which corresponds to the logarithmic-like cellular recruitment). The similarities may nonetheless reflect a common neural optimization principle, for instance, as you suggest, the efficient use of a limited channel.
SP: As a sci-fi connoisseur, you are probably familiar with Asimov’s The Gods Themselves, in which scientists discover and interact with a universe that has very different physical laws from our own. In what ways is the hippocampus designed specifically for our 4-d universe, and how might it be modified to accommodate additional dimensions?
DR: How hippocampal function is adapted specifically for 3+1D environments is a tough question. One theory is that the hippocampus forms the basis of a cognitive map that allows navigation in space and also time. For instance, as you go to pick up the dry cleaning, you remember that the traffic along your planned route will be bad at this time of day, so you think of a shortcut to get there faster. In order to perform such a navigational computation, the hippocampus may have to draw on a geometric representation of space. We don’t really know how it actually does this, however. Looking at differences between animals that navigate exclusively in two or three spatial dimensions would probably provide the best clues about how the system adapts to dimensionality.
I don’t think we would ever be able to perceive higher spatial dimensions in the same way as we do normal space and time. This is because there has been zero evolutionary selection for such abilities. I would guess that a system for the general n-dimensional case would be more complex than a system for a special case, and so would be selected against during evolution; I think it is likely that we are hard-coded to our native dimensionality.
We might be able to get around such limitations by using some tricks. Anecdotally, humans can do pretty well in video games that employ different geometries (think Pac-Man exiting one side of the screen and coming back in on the other, or teleportation mechanics in other games), so perhaps we are able to chunk complex unnatural environments into smaller independent sections that we can deal with. Another idea is that we could co-opt other systems in the brain. For instance, we are quite adept at navigating our high-dimensional social environments. Perhaps one could try to think about a high-dimensional space using social metaphor?
Another sci-fi classic, Flatland by Edwin Abbott Abbott, deals explicitly with this question: how one might comprehend spatial dimensions higher than one’s own. I won’t spoil the story, but I highly recommend it to anyone interested in these questions.
SP: Does it make you sad to think that plants have no idea where they are?

DR: I’m not that well-disposed to plants in general since a large hydrangea pushed in front of me in line in the post office recently, so no.

Aug 13

Hastings, J. W. (2001). Fifty years of fun. Journal of Biological Rhythms, 16, 5–18. [PDF]
Woody Hastings, who passed away last week at the age of 87, ran his lab down the hall from where I worked as a graduate student. He was already retired by the time I began, but over the years I would occasionally pass him in the hallway, where he still kept an office with his name on the door. I had no idea who he was or what he had studied, but for some reason a few months ago, I decided to look him up online, to see what this friendly older man in a plaid lumberjack coat was all about. And I say to you now the same thing I said then: Holy Shit.
Dr. Hastings’ career was so outrageously inspiring and self-evidently meaningful that it’s hard to know where to start. For his entire career, he studied the coolest thing on earth: bioluminescence. That’s right, Dr. Hastings spent his life studying the things that the rest of us dreamt about last night: swimming through a glow-in-the-dark ocean, lying in a field of fireflies. The title of his memoir-pape is literally “Fifty Years of Fun.” 

(Photo by Will Ho)
You might imagine that glow-in-the-dark bacteria are some sort of trippy sideshow to Mainstream Molecular Biology, but in fact the Hastings Lab’s studies of bioluminescence led to a baffling number of fundamental biological breakthroughs. Most famously, the Hastings Lab was the first to describe quorum sensing (which they called “autoinduction”), the means by which bacteria communicate and interpret their population density. For bioluminescent bacteria, quorum sensing is important because the glow is only visible when bacteria are present in large numbers, so these microbes need to coordinate their glow-making gene expression based on population density. Thus, the Hastings lab gave the first evidence that gene expression could be directly regulated by signals sent by other bacteria. We now know that quorum sensing is widely used by bacteria, and is important for such medically relevant phenomena as biofilm formation, virulence, and antibiotic resistance.
It is important to realize that Hastings was working at the same time (and place) that the molecular biology revolution was occurring, when many now-famous scientists were figuring out transcription, translation, the genetic code, and gene regulation. But those scientific giants were a real clique, and Hastings was not really on the inside. Jim Watson, who founded the department that Hastings worked in, wrote: “Woody was liked by all…though I saw his science as having little potential to make lasting ripples.” Wally Gilbert, who also worked in the same building as Hastings, “expressed disbelief or disinterest, or thought [the] findings had a trivial explanation.” Basically, if you weren’t studying E. coli or phage genetics, your work wasn’t seen as relevant. Of course, this arrogant attitude is fucking ridiculous, both then and now. Hastings’ career is proof that “model organism” research is not the only way to get down to the heart of things (though of COURSE, Sick Papes also worships E. coli and phage, don’t get me wrong - just saying that they are just as “weird” as any other organism).
And there’s much more. Hastings’ work was also foundational for another entire field: circadian rhythms. He showed that bioluminescent dinoflagellates have an internal clock, and provided one of the first tractable systems for studying the biochemical basis of biological rhythm. And again, this work has proven to be fundamental across much of life, including humans.
Hastings also pioneered studies of many interesting bioluminescent organisms, including the now well-known symbiosis between bioluminescent bacteria and squid, in which the squid use the glowing bacteria in their belly to optically match the down-pouring moonlight and thereby make themselves invisible to lurking predators beneath. Yet again, Hastings provided one of the first tractable experimental systems to help found an entire field that might seem at first glance to be a novel sideshow, but has since exploded into an NIH-funded powerhouse of mega-important research on the microbiome. Glowing oceans, the rhythm of life, and symbiosis - that’s how you do it. And it is a testament to Hastings’ scientific brilliance that all of these studies of unusual organisms were so solid that they helped to spawn entire fields that are still going strong decades later.
As we pay tribute to Woody Hastings, I’ll leave you with a quote from “Fifty Years of Fun,” in which Hastings gives a memorable metaphor for the sometimes unintuitive process of evolution:
“[A] story of inanimate evolution, which I picked up at JPL, the Jet Propulsion Lab at Cal Tech… It concerns the origin of the track width of railroads in the U.S., which is exactly 4 feet, 8.5 inches. The first explanation is that English expatriates built the first railroads in the U.S., but where did the English standard come from? Obviously from wagons and carriages, for which the jigs and tools were readily available. And why were they built that width? Because with any other spacing the wagons would break up on some of the old long-distance roads, due to the existing wheel ruts. These can be traced to the Romans, who built and used these roads for their war chariots, designed to be just wide enough to accommodate two war horses, thus accounting for the width. As with living organisms, many present-day features were determined long ago.

“JPL provides another twist to the story. Attached to the space shuttle there are two large booster rockets, made by a factory in Utah and shipped by train to the assembly site. The rockets were built as large as possible, but they had to pass through mountain tunnels, the sizes of which are related to track width. So a major design feature of the world’s most advanced transportation system was determined by the width of a horse’s ass.”


Hastings, J. W. (2001). Fifty years of fun. Journal of Biological Rhythms. (Vol. 16, pp. 5–18). [PDF]

Woody Hastings, who passed away last week at the age of 87, ran his lab down the hall from where I worked as a graduate student. He was already retired by the time I began, but over the years I would occasionally pass him in the hallway, where he still kept an office with his name on the door. I had no idea who he was or what he had studied, but for some reason a few months ago, I decided to look him up online, to see what this friendly older man in a plaid lumberjack coat was all about. And I say to you now the same thing I said then: Holy Shit.

Dr. Hastings’ career was so outrageously inspiring and self-evidently meaningful that it’s hard to know where to start. For his entire career, he studied the coolest thing on earth: bioluminescence. That’s right, Dr. Hastings spent his life studying the things that the rest of us dreamt about last night: swimming through a glow-in-the-dark ocean, lying in a field of fireflies. The title of his memoir-pape is literally “Fifty Years of Fun.” 

(Photo by Will Ho)

You might imagine that glow-in-the-dark bacteria is some sort of trippy side-show to Mainstream Molecular Biology, but in fact the Hastings Lab’s studies of bioluminescence led to a baffling number of fundamental biological breakthroughs. Most famously, the Hastings Lab was the first to describe quorum sensing (which they called “autoinduction”), the means by which bacteria communicate and interpret their population density. For bioluminescent bacteria, quorum sensing is important because the glow is only visible when bacteria are at present in large numbers, so these microbes need to be able to coordinate their glow-making gene expression based on population density. Thus, the Hastings’ lab gave the first evidence that gene expression could be directly regulated by signals sent by other bacteria. We now know that quorum sensing is widely used by bacteria, and is important for such medically-relevant phenomena as biofilm formation, virulence, and antibiotic resistance.

It is important to realize that Hastings was working at the same time (and place) that the molecular biology revolution was occurring, when many now-famous scientists were figuring out transcription, translation, the genetic code, and gene regulation. But those scientific giants were a real clique, and Hastings was not really on the inside. Jim Watson, who founded the department that Hastings worked in, wrote: “Woody was liked by all…though I saw his science as having little potential to make lasting ripples.” Wally Gilbert, who also worked in the same building as Hastings, “expressed disbelief or disinterest, or thought [the] findings had a trivial explanation.” Basically, if you weren’t studying E. coli or phage genetics, your work wasn’t seen as relevant. Of course, this arrogant attitude is fucking ridiculous, both then and now. Hastings’ career is proof that “model organism” research is not the only way to get down to the heart of things (though of COURSE, Sick Papes also worships E. coli and phage, don’t get me wrong - just saying that they are just as “weird” as any other organism.)

And there’s much more. Hastings’ work was also foundational for another entire field: circadian rhythms. He showed that bioluminescent dinoflagellates have an internal clock, and provided one of the first tractable systems for studying the biochemical basis of biological rhythm. And again, this work has proven to be fundamental across much of life, including humans.

Hastings also pioneered studies of many interesting bioluminescent organisms, including the now well-known symbiosis between bioluminescent bacteria and squid, where the squid use the glowing bacteria in their belly to optically match the down-pouring moonlight and thereby make themselves invisible to lurking predators beneath. Yet again, Hastings provided one of the first tractable experimental systems to help found an entire field that might seem at first glance to be a novel sideshow, but has since exploded into an NIH-funded powerhouse of mega-important research on the microbiome. Glowing oceans, the rhythm of life, and symbiosis - that’s how you do it. And it is a testament to Hastings’ scientific brilliance that all of these studies of unusual organisms were so solid that they helped to spawn entire fields that are still going strong decades later.

As we pay tribute to Woody Hastings, I’ll leave you with a quote from “Fifty Years of Fun,” where Hastings gives a memorable metaphor for the sometimes unintuitive process of evolution:

"[A] story of inanimate evolution, which I picked up at JPL, the Jet Propulsion Lab at Cal Tech… It concerns the origin of the track width of railroads in the U.S., which is exactly 4 feet, 8.5 inches. The first explanation is that English expatriates built the first railroads in the U.S., but where did the English standard come from? Obviously from wagons and carriages, for which the jigs and tools were readily available. And why were they built that width? Because with any other spacing the wagons would break up on some of the old long-distance roads, due to the existing wheel ruts. These can be traced to the Romans, who built and used these roads for their war chariots, designed to be just wide enough to accommodate two war horses, thus accounting for the width. As with living organisms, many present-day features were determined long ago. 

JPL provides another twist to the story. Attached to the space shuttle there are two large booster rockets, made by a factory in Utah and shipped by train to the assembly site. The rockets were built as large as possible, but they had to pass through mountain tunnels, the sizes of which are related to track width. So a major design feature of the world’s most advanced transportation system was determined by the width of a horse’s ass.”

Jul 23

Summer is the season of airy sabbaticals, and today Sick Papes is taking a well-deserved break from the sweaty crust of the ivory tower to visit the realm of Cronodon, one of the most intriguing and expansive concavities of the internet. Cronodon is simultaneously a comprehensive scientific resource and a mythical, futuristic universe. The site is packed with detailed, accurate explanations of complex scientific concepts, such as the physics of trees and the metaphors of string theory. The science is balanced by pure imagination: fantasy voyages through the ancient Cromlech and hand-drawn illustrations of chimerical beasts. Cronodon is a useful reference text for biology, physics, and computer science students, and a gallery that exhibits the fantastical imagination of a singular internet artist.

Cronodon is curated by an alter ego known only as Bot (whose intentionally vague bio and CV can be found here). Though Bot’s relationship with the earthly domain remains inexplicit, what is clear is that he/she has legitimate academic credentials and an abundance of creative energy. In this extensive interview, we explore the motivations for creating and maintaining Cronodon, and discuss how Bot transitioned from a trajectory of scientific research to being the conservator of an online “museum of the future”.


SP:   Can you describe the genesis of Cronodon? Which came first, the thorough academic treatises of Bio-tech or the whimsical fantasy scenarios of the Dark Side?
BOT:  I can’t actually remember which page was the first I wrote, apart from the default homepage, but when the site was only a few pages it already had elements of both. My first navigation bar contained ‘SpaceTech’ and ‘Dark Side’ but no ‘BioTech’, so I guess my initial feelings were to focus on science fiction and the ‘Dark Side’, but that soon changed.
When I was very young, well into single figures, I started imagining and drawing alien creatures and already knew that I wanted to be a scientist. Science fiction got me interested in science fact, through programs like Dr Who, Space 1999, Star Trek et al., and also vice versa: reading books about dinosaurs and prehistoric invertebrates got me thinking very early on what other forms of life might be possible. By the age of ten, I was asking my teachers awkward questions like, ‘How do starfish breathe?’ However, during later education my creativity was almost destroyed through neglect, as I was so busy studying. Only some years after completing my degrees did it begin to return, inspired by what I had learnt and seen in nature. Some of the drawings on Dark Side are childhood drawings, and so not as artistic as I would like them to be; others are more recent.
I also remember that the graphics came first. As I developed an interesting Pov-Ray graphic I wrote an article related to it. Now it works both ways: I often write the article and then produce the graphics for it, whether in 2D or 3D.
SP:  The Cronodon bio states that, “As a neurobiologist, Bot was privileged to study insects on Earth, looking at their remarkable sensory systems. Bot worked in a top UK university disguised as an Earthling. (That was before much of the funding for such research was withdrawn by the UK Government, which is apparently incapable of valuing anything other than a narrow-minded approach to economics).” Can you elaborate on your departure from Academia? At what point did you decide to dedicate the majority of your effort to Cronodon?
BOT:  I departed full-time academic research about seven years ago, after six years as a post-doctoral research fellow. In short I left professional Academia because I felt it had become too controlled and too commercial. I moved to the US at this time and more-or-less immediately launched Cronodon, when somebody I know advised me to (‘no use hiding your light under a bushel,’ they later reminded me). In the first couple of years I kept my finger in academia by teaming up with academics in the US, though as a very part-time visiting academic rather than as an employee. Nevertheless some very useful research came from this partnership, so I was still publishing peer-reviewed papers even though I was not employed by an academic institution. Thus I left research gradually; indeed I still have one finger in one project as a voluntary contributor. I have also been lecturing on and off since, as a self-employed freelance consultant, and especially over the past year. Apart from the past 12 months, Cronodon has been my main focus for the past seven years. Ideally it would always be my main focus, though I enjoy lecturing too, and I need to do enough to pay the bills; it’s a matter of getting the balance right.
I left full-time research simply because I found the grant-awarding system frustrating; not so much because of the difficulty getting grants, though applying for them is overly onerous and getting them is something of a political lottery, but because I got fed up with being told what I could and could not research by grant-awarding bodies with a set agenda.
The most productive time I had in research was when we had an open grant in the US, but even in the US, which traditionally values pure science and academia more than the UK does, such grants are few and far between. More often academics have their hands tied by those who pay them. I simply was not happy in such a controlled environment, even though I was very successful as a researcher. Don’t misunderstand me, I researched some interesting problems and did some fun experiments, but I could not go where I wanted: it seemed to be forbidden by the system! I still read copious amounts of research articles, which shows that somewhere somebody is doing the kind of research that might have kept me on board, but after 10 years in the system I realized that it wasn’t for me.
I needed to explore where my curiosity took me and be creative and to have freedom of thought. I personally could not find these things in a university research setting. As a freelance consultant I get a bigger choice of what to study and teach.

SP:   Much of the scientific material, particularly that relating to biology, is far more exhaustive and accurate than the average peer-reviewed paper. Did you write these articles off the top of your head, or was there a process of meticulous research and editing?
BOT:  Thank you, I do try to be as exhaustive and accurate as I can be in the time I have. In particular I feel that I have to offer something to the reader that other sites and publications do not and my own curiosity is insatiable. I love studying science subjects in breadth and depth and I often write articles to assist my own personal studies. Some articles I wrote more-or-less entirely from memory, although I always check facts I am unsure of against multiple sources if possible. My own notes, often collected over the years from many sources, are also sometimes directly transcribed into prose. (I have shelves full of thousands of pages of notes despite throwing most away to free up space). I write articles with different target audiences in mind. Generally I try to cater for as wide an audience as possible, and so I often include elementary explanations in otherwise advanced articles. However, some articles are written primarily as summaries of my own extensive literature searches and as these are more likely to be read by academics I take extra care to add a bibliography.
As for editing: this has not been very meticulous and I do sometimes spot typos when referring back to my own work. I still have some articles in need of proof-reading! I do check by reading back paragraphs as I type them, but it is sometimes a while before I sit down and proofread whole passages of text and this usually happens when I need to remind myself of something or when I come to update the article.

SP:   Given the un-verifiability of most internet content, are you concerned that Cronodon might not be recognized as a reliable source? Why did you choose to not include comprehensive literature citations?
BOT:  There are several reasons for this. The main aim of Cronodon is to get the information out there, especially as much of the information is not easily accessible to the greater public. Indeed, much of the classical research on invertebrate zoology and botany is being forgotten. Initially the first articles were quite basic and largely written from memory for a general audience. Inline referencing with the Harvard style is time-consuming and to some extent I have to compromise; otherwise Cronodon would contain far fewer articles.
As articles have been updated and expanded to include more details, and also as a more academic audience is being drawn to Cronodon, I have begun adding bibliographies, by which I mean a list of sources and suggested further reading, rather than inline references, since this is quicker. As science expands, more and more information becomes textbook knowledge and referencing these basic facts becomes less important. However, a couple of articles, which were written to appeal to academics, do have inline referencing. Time constraints aside, inline referencing also makes text harder to read for a lay or young student audience and I do not want to break up the flow of text in this way. I might consider using a numbering system, but for now I think bibliographies should suffice.
I do believe in giving credit where credit is due, and adding bibliographies is an ongoing task. However, Cronodon does not claim any ideas to be its own unless specifically stated – it is important that readers can distinguish between my own expert opinion and what is widely accepted by the wider scientific community. I have no desire to use Cronodon as a platform either to pass off my own opinions as facts or to claim credit for ideas which are not my own. Cronodon respects copyright and where my graphics are inspired by other sources then these sources are credited.
Some people have emailed me asking for references and I have always sent them the references requested. This has also been an increasing incentive to add bibliographies. My main incentive for adding bibliographies is to assist the interested reader with further research, and I don’t think I need to make them exhaustive. People should always consult multiple sources.

SP:   Cronodon is packed with extraordinary 3D models of animals, spaceships, and aliens. How do you build these illustrations?
BOT:  I am glad you find some of my 3D models interesting, thank you. These models are created using Pov-Ray, which is a free C-style graphical scripting language and ray-tracer which can be downloaded online. In other words, you type a small computer program or script, which includes special graphical commands, such as:

sphere {
  <0,0,0>, 1
  scale <1,5,1>
  pigment { color rgbt <1,0.2,0.6,0.4> }
  normal { bumps 1 scale 0.2 }
}

which, when a light and camera are also added, will draw an elongated purplish sphere with a bumpy texture when the image is rendered.
I generally start with a mental image of what the object looks like and then construct it in Pov-Ray code, using simple shapes like spheres, cylinders, cones and boxes and more complex shapes like sphere sweeps (which are good for tentacles and other organic shapes), blobs and mathematical formulae. Additionally, there is a powerful technique called constructive solid geometry. This allows these elementary shapes to be joined and merged in various ways, or even subtracted from one-another.
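To make the constructive solid geometry idea concrete, here is a minimal, self-contained Pov-Ray scene of my own (an illustrative sketch, not taken from Cronodon; the shapes and colors are arbitrary) that subtracts a sphere from a box:

```pov
// Minimal CSG sketch: a box with a sphere carved out of its top face.
#include "colors.inc"   // standard color names shipped with Pov-Ray

camera { location <0, 2, -6> look_at <0, 1, 0> }
light_source { <10, 20, -10> color White }

difference {                  // constructive solid geometry: box minus sphere
  box { <-1, 0, -1>, <1, 2, 1> }
  sphere { <0, 2, 0>, 0.8 }   // centered on the box's top face
  pigment { color rgb <0.2, 0.6, 1> }
}
```

Rendering this scene (e.g. with `povray +Iscene.pov`) should produce a blue box with a hemispherical dimple in its top; union, merge, and intersection are written the same way, with the elementary shapes listed inside the block.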
This can require a lot of mental arithmetic, though with practice these calculations become more intuitive, almost subconscious. Just like when we catch a ball we don’t consciously compute the necessary vectors, so with practice I find I can position the elements of an object correctly in 3D space without performing any rigorous computations. Some of my earlier graphics were a bit inaccurate though! I sometimes also resort to the ‘Hollywood effect’ (a technique I learnt whilst on a tour around Universal Studios in LA) – making sure the graphic looks right even if it isn’t! However, these days I tend to produce more accurate 3D representations which can be viewed from any angle and still look correct; that is more time-consuming, but it became easier with practice.
I prefer this code-up approach rather than the more artistic approach of manipulating meshes like virtual putty, which many ‘professional’ 3D graphics programs do. Even though the latter can potentially generate more professional-looking graphics, I just find them awkward to use, though I may sit down and learn to use them properly one day. I am not one of those people who puts in weeks of time on a single graphic, trying to make it look real, like a photograph. The biological graphics I try to make somewhat realistic but also clear, to maximize their educational value. The alien graphics I like to be aesthetic: I prefer the otherworldly look of unreality that computer graphics can create more so than photographic accuracy. Of course, the film industry has more powerful tools: they can scan 3D objects to generate 3D shapes from triangular meshes and then alter them. I do not have that luxury.
Pov-Ray also has third-party add-on tools, like Tom Aust’s ‘Tomtree’, which the author has kindly provided for general use, which enables one to construct realistic-looking trees. This is quite difficult to use at first, as you have to set a long list of parameters which alter things like bark texture, branch angle and the number of kinks in the trunk, etc. Tomtree seems to be underused by people, but with patience and the time taken to understand what the parameters do, and with a good background in observing trees in nature, good results are possible. The Pov-Ray community provides many useful tools, tips and tricks. Peter Houston’s ‘blobman’ is another one I have found useful.
There is often a degree of trial and error in Pov-Ray, especially when randomization is used to create more natural graphics and this is where the artist comes in: deciding what looks aesthetic and tweaking the parameters to bias the ‘randomness’ in the desired direction. Patience is often required: the most complex graphics I have produced took all night to render, though most take seconds or minutes. A graphic may have to be rendered and tweaked many times before it is finished.

SP:   What advice would you have for young scientists hoping to pursue, as you say in your mission statement, “science for science sake”?
BOT:  Too many people today are automatically ‘programmed’ to think the only science worth doing is that which society tells them is worth doing. Take cell biology, for example. The primary goal ought to be to understand how cells work. This is interesting in itself, and would also have countless spin-offs for medicine and engineering. Ideally we would let those who just want to study cells, do so, and those that want to use this information to develop treatments for cancer to do so too. In reality, in order to attract funding everyone seems to claim their research is just about cancer, or similar matters of interest to medicine (i.e. to the pharmaceutical industry). Too many projects have to justify themselves in this incorrect manner. This is not ‘science for science sake’. Science needs no justification. Science should not be subservient to technological industries, which have their own proper place. This increasing commandeering of science by the corporate world has, I think, also led to an exponential increase in public mistrust of science.
To do ‘science for science sake’, I would say, above all else, follow your heart and your true intellectual curiosity. Put science first, even before your own career and political standing. If you do science for any other reason then you are not doing ‘science for science sake’. If your key interest happens to be fashionable with funding bodies, then you are fortunate, if not then leave university research as I did.  Consider working for a museum. Everything that exists is worthy of study. In my opinion a true scientist puts science first. The key is to always keep an open mind and follow your own inner curiosity. Be intrinsically motivated and strive to self-actualize. Take inspiration from Mother Nature.
You may also have to: ‘Give to Caesar what is Caesar’s’! By this I mean you may have to endure financial hardship, particularly if you find the system will not let you follow your curiosity. At least this is my experience. It is a worthy trade to observe the wonders of Nature. Like the sages of old you may find yourself an ‘eremite’ wandering from abode to abode, accumulating little of financial worth, but you will learn much and see many wonders. The most productive years of my life I was without full-time employment. I was lucky that I had people to help support me. If I had remained in full-time employment, then Cronodon would not exist! Even though it is possible to pursue hobbies whilst fully employed, I needed a long span of time to contemplate, do my own research and tap my inner psyche. Science, for me, is a spiritual and intellectual quest. Working for money, or any other mundane reason, makes it hard to find one’s true inner scientist! I have simply returned to what I enjoyed so much as a child: exploring science and the wonders of Nature and imagining life on alien worlds! I am never more myself and never more fulfilled than when I follow my own scientific and artistic interests and when I work on Cronodon.

SP:  What are your future plans for Cronodon? Are you hoping to expand content in specific areas, or alter the scope of the site?
BOT: At the moment, due to time constraints, much of my work on the site has been in improving existing content, and adding the occasional new article. I intend to fill a few key gaps. There are some major groups of invertebrates I have yet to write about. I want to add more articles on quantum physics (the ones I have added have been quite popular) as there are some interesting key topics I still want to discuss. Writing these articles also helps clarify my own understanding, so I shall write new articles on whatever topics grab my interest. Some of the older articles still need reworking, and bibliographies are waiting to be added where applicable.
Up until a year ago, when I had more spare time, I was developing my Plutonium project in which the reader can interactively explore a region of space in a virtual spaceship. This project combines science fiction with science fact. It was nearing a functional level of completion but has been left hanging for the past year. There are some scripted worlds to visit and some hidden aliens to discover (aliens and worlds which not even the Google search engines have discovered so far as they are on ‘hidden’ pages the reader has to find by exploring space) and I hope to add many more soon, when time permits. This project combines two of my favorite areas of science: biology and astrophysics, with science fiction, computer programming and computer art and so seems to be the ultimate synthesis of where Cronodon has been going over the years. 
Biology will likely remain the main emphasis of Cronodon, as it is my main discipline. However, I do enjoy writing physics essays, but I see no point in, and indeed have no time for, writing exhaustive articles on the topic complete with mathematical proofs, when so many textbooks already do this. Instead I shall target key areas that grab my interest, especially when I think I can summarize highly technical concepts and introduce them to a wider audience or provide more complete explanations. Although I am a trained mathematician, I usually avoid mathematical articles, though I have written some, as this expands the scope a bit too far for the time I have. However, physics textbooks often omit many steps from key proofs, expecting the reader to fill in the gaps. I may write some key articles, with fuller proofs and explanations in these areas, that may be of interest to students. I am planning some more mathematics and computing projects, for my own amusement, and so some articles on these topics are likely to appear.
In short, where I expand will depend where my interest takes me and what time I have available. At the same time I want to keep the current general flavour of Cronodon and not all my projects are likely to appear online. Cronodon displays only a small and selective sample of my projects and investigations into science. In the end time defeats us all, and I would really love to find a like-minded person to work with and maybe carry it on from me one day.
SP: Given the remarkable effort you put into the site, and the high quality of content you produce, do you feel that Cronodon receives the recognition it deserves?
BOT: Thank you for the positive feedback! Recognition and publicity are always welcome if they help more people find Cronodon and benefit from it. Whatever recognition Cronodon deserves is not for me to judge, but individuals do send me feedback occasionally, generally positive, and some people have obtained permission to use some of my graphics in their publications. Sometimes people contact me to thank me for providing information they found hard to find elsewhere. I would like more people to find the site and find something of value in it. People do sometimes contact me to say how glad they were to find my site, but also how hard it was to find! Many of the graphics do appear in Google images given the right search phrases.
It certainly has been a lot of effort, though I could do so much more if I had the time, but I still have to work for a living, albeit part-time to cater for my somewhat ‘minimalistic’ lifestyle.
I do recall the CEO of Google saying how he would like to see more people combine science with art for educational purposes and to inspire people to become more interested in science. I feel that I am trying to do just that. Maybe I should contact Google and see if there is anything more they can do to help promote Cronodon, apart from compiling links with their search bots. I simply don’t have the money to pay for advertising. Still, I like to think that the curious will find my site eventually.
Those who have borrowed and acknowledged my images in their publications have brought a small amount of recognition. However, I don’t honestly expect Cronodon to get much recognition, whether it deserves it or not. Most people simply don’t have time for ‘science for science sake’. This negative attitude to pure science has seemingly increased over the years, driven I think by governments and media who want to control science for practical and economic gain and who see no value, for example, in knowing what lives in the sea. However, there are still many people who are genuinely interested in the world around them and if they find Cronodon useful, then that is enough. I hope that Cronodon can promote ‘science for science sake’ in a world which is increasingly ignorant of it. Whatever happens, I will continue to put a lot of effort into Cronodon simply because I enjoy it and I know that some other people do too.

Summer is the season of airy sabbaticals, and today Sick Papes is taking a well-deserved break from the sweaty crust of the ivory tower to visit the realm of Cronodon, one of the most intriguing and expansive concavities of the internet. Cronodon is simultaneously a comprehensive scientific resource and a mythical, futuristic universe. The site is packed with detailed, accurate explanations of complex scientific concepts, such as the physics of trees and the metaphors of string theory. The science is balanced by pure imagination: fantasy voyages through the ancient Cromlech and hand-drawn illustrations of chimerical beasts. Cronodon is a useful reference text for biology, physics, and computer science students, and a gallery that exhibits the fantastical imagination of a singular internet artist

Cronodon is curated by an alter ego known only as Bot (whose intentionally vague bio and CV can be found here). Though Bot’s relationship with the earthly domain remains inexplicit, what is clear is that he/she has legitimate academic credentials and an abundance of creative energy. In this extensive interview, we explore the motivations for creating and maintaining Cronodon, and discuss how Bot transitioned from a trajectory of scientific research to being the conservator of an online “museum of the future”.

SP:   Can you describe the genesis of Cronodon? Which came first, the thorough academic treatises of Bio-tech or the whimsical fantasy scenarios of the Dark Side?

BOT:  I can’t actually remember which page was the first I wrote, apart from the default homepage, but when the site was only a few pages it already had elements of both. My first navigation bar contained ’SpaceTech’ and ‘Dark Side’ but no ‘BioTech’, so I guess my initial feelings were to focus on science fiction and the ‘Dark Side’, but that soon changed

When I was very young, well into single figures, I started imagining and drawing alien creatures and already knew that I wanted to be a scientist. Science fiction got me interested in science fact, through programs like Dr Who, Space 1999, Star Trek et al., and also vice versa: reading books about dinosaurs and prehistoric invertebrates got me thinking very early on what other forms of life might be possible. By the age of ten, I was asking my teachers awkward questions like, ‘How do starfish breath?’ However, during later education my creativity was almost destroyed through neglect, as I was so busy studying. Only some years after completing my degrees did it begin to return, inspired by what I had learnt and seen in nature. Some of the drawings on Dark Side are childhood drawings, and so not as artistic as I would like them to be, others are more recent.

I also remember that the graphics came first. As I developed an interesting Pov-Ray graphic I wrote an article related to it. Now it works both ways: I often write the article and then produce the graphics for it, whether in 2D or 3D.

SP:  The Cronodon bio states that, “As a neurobiologist, Bot was privileged to study insects on Earth, looking at their remarkable sensory systems. Bot worked in a top UK university disguised as an Earthling. (That was before much of the funding for such research was withdrawn by the UK Government, which is apparently incapable of valuing anything other than a narrow-minded approach to economics).”
Can you elaborate on your departure from Academia? At what point did you decide to dedicate the majority of your effort to Cronodon?

BOT:  I departed full-time academic research about seven years ago, after six years as a post-doctoral research fellow. In short I left professional Academia because I felt it had become too controlled and too commercial. I moved to the US at this time and more-or-less immediately launched Cronodon, when somebody I know advised me too (‘no use hiding your light under a bushel,’ they later reminded me). In the first couple of years I kept my finger in academia by teaming up with academics in the US, though as a very part-time visiting academic rather than as an employee. Nevertheless some very useful research came from this partnership, so I was still publishing peer-reviewed papers even though I was not employed by an academic institution. Thus I left research gradually; indeed I still have one finger in one project as a voluntary contributor. I have also been lecturing on and off since, as a self-employed freelance consultant, and especially over the past year. Apart from the past 12 months, Cronodon has been my main focus for the past seven years. Ideally it would always be my main focus, though I enjoy lecturing too, and I need to do enough to pay the bills, it’s a matter of getting the balance right.

I left full-time research simply because I found the grant-awarding system frustrating; not so much because of the difficulty getting grants, though applying for them is overly-onerous and getting them is something of a political lottery, but because I got fed-up with being told what I could and could not research by grant-awarding bodies with a set agenda.

The most productive time I had in research was when we had an open grant in the US, but even in the US, which traditionally values pure science and academia more than the UK does, such grants are few and far between. More often, academics have their hands tied by those who pay them. I simply was not happy in such a controlled environment, even though I was very successful as a researcher. Don’t misunderstand me: I researched some interesting problems and did some fun experiments, but I could not go where I wanted; it seemed to be forbidden by the system! I still read copious amounts of research articles, which shows that somewhere somebody is doing the kind of research that might have kept me onboard, but after 10 years in the system I realized that it wasn’t for me.

I needed to explore where my curiosity took me, to be creative, and to have freedom of thought. I personally could not find these things in a university research setting. As a freelance consultant I get a bigger choice of what to study and teach.

SP:   Much of the scientific material, particularly that relating to biology, is far more exhaustive and accurate than the average peer-reviewed paper. Did you write these articles off the top of your head, or was there a process of meticulous research and editing?

BOT:  Thank you, I do try to be as exhaustive and accurate as I can be in the time I have. In particular I feel that I have to offer something to the reader that other sites and publications do not, and my own curiosity is insatiable. I love studying science subjects in breadth and depth and I often write articles to assist my own personal studies. Some articles I wrote more-or-less entirely from memory, although I always check facts I am unsure of against multiple sources if possible. My own notes, often collected over the years from many sources, are also sometimes directly transcribed into prose. (I have shelves full of thousands of pages of notes despite throwing most away to free up space). I write articles with different target audiences in mind. Generally I try to cater for as wide an audience as possible, and so I often include elementary explanations in otherwise advanced articles. However, some articles are written primarily as summaries of my own extensive literature searches, and as these are more likely to be read by academics I take extra care to add a bibliography.

As for editing: this has not been very meticulous and I do sometimes spot typos when referring back to my own work. I still have some articles in need of proof-reading! I do check by reading back paragraphs as I type them, but it is sometimes a while before I sit down and proofread whole passages of text and this usually happens when I need to remind myself of something or when I come to update the article.

SP:   Given the unverifiability of most internet content, are you concerned that Cronodon might not be recognized as a reliable source? Why did you choose not to include comprehensive literature citations?

BOT:  There are several reasons for this. The main aim of Cronodon is to get the information out there, especially as much of the information is not easily accessible to the greater public. Indeed, much of the classical research on invertebrate zoology and botany is being forgotten. Initially the first articles were quite basic and largely written from memory for a general audience. Inline referencing with the Harvard style is time-consuming and to some extent I have to compromise otherwise Cronodon would contain far fewer articles.

As articles have been updated and expanded to include more details, and also as a more academic audience is being drawn to Cronodon, I have begun adding bibliographies, by which I mean a list of sources and suggested further reading, rather than inline references, since this is quicker. As science expands, more and more information becomes textbook knowledge, and referencing these basic facts becomes less important. However, a couple of articles, which were written to appeal to academics, do have inline referencing. Time constraints aside, inline referencing also makes text harder to read for a lay or young student audience, and I do not want to break up the flow of text in this way. I might consider using a numbering system, but for now I think bibliographies should suffice.

I do believe that credit should go where credit is due, and adding bibliographies is an ongoing task. However, Cronodon does not claim any ideas to be its own unless specifically stated – it is important that readers can distinguish between my own expert opinion and what is widely accepted by the wider scientific community. I have no desire to use Cronodon as a platform either to pass off my own opinions as facts or to claim credit for ideas which are not my own. Cronodon respects copyright, and where my graphics are inspired by other sources, these sources are credited.

Some people have emailed me asking for references and I have always sent them the references requested. This has also been an increasing incentive to add bibliographies. My main incentive for adding bibliographies is to assist the interested reader with further research, and I don’t think I need to make them exhaustive. People should always consult multiple sources.

SP:   Cronodon is packed with extraordinary 3D models of animals, spaceships, and aliens. How do you build these illustrations?

BOT:  I am glad you find some of my 3D models interesting, thank you. These models are created using POV-Ray, a free ray-tracer with a C-style graphical scripting language, which can be downloaded online. In other words, you type a small computer program or script, which includes special graphical commands, such as: sphere { <0,0,0>, 1 scale <1,5,1> pigment { color rgbt <1,0.2,0.6,0.4> } normal { bumps 1 scale 0.2 } } which, when a light and camera are also added, will draw an elongated purplish sphere with a bumpy texture when the image is rendered.

I generally start with a mental image of what the object looks like and then construct it in POV-Ray code, using simple shapes like spheres, cylinders, cones and boxes, and more complex shapes like sphere sweeps (which are good for tentacles and other organic shapes), blobs and mathematical formulae. Additionally, there is a powerful technique called constructive solid geometry (CSG), which allows these elementary shapes to be joined and merged in various ways, or even subtracted from one another.
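To make the CSG idea concrete, here is a minimal sketch (not one of Cronodon's actual models; the object and file name are illustrative): a short Python script that writes out a POV-Ray scene in which a cylindrical bore is subtracted from a sphere using POV-Ray's standard `difference` block. Scenes like this are then rendered from the command line with the povray program.

```python
# Sketch: generate a minimal POV-Ray scene file demonstrating
# constructive solid geometry (CSG). A cylinder is subtracted from a
# sphere via POV-Ray's `difference` block; the object and file name
# are illustrative, not taken from Cronodon.
SCENE = """\
camera { location <0, 2, -5> look_at <0, 0, 0> }
light_source { <10, 10, -10> color rgb <1, 1, 1> }
difference {                              // CSG: first shape minus the rest
  sphere { <0, 0, 0>, 1 }                 // the solid we start from
  cylinder { <0, -2, 0>, <0, 2, 0>, 0.4 } // the bore that gets cut away
  pigment { color rgb <1, 0.2, 0.6> }
}
"""

def write_scene(path="csg_demo.pov"):
    """Write the scene text to `path` and return the path."""
    with open(path, "w") as f:
        f.write(SCENE)
    return path

if __name__ == "__main__":
    print(write_scene())  # render with, e.g.: povray +Icsg_demo.pov
```

Generating scene text programmatically like this is one common way to drive POV-Ray; the same approach scales up to loops that place many randomized shapes.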

This can require a lot of mental arithmetic, though with practice these calculations become more intuitive, almost subconscious. Just as we don’t consciously compute the necessary vectors when we catch a ball, so with practice I find I can position the elements of an object correctly in 3D space without performing any rigorous computations. Some of my earlier graphics were a bit inaccurate though! I sometimes also resort to the ‘Hollywood effect’ (a technique I learnt whilst on a tour around Universal Studios in LA): making sure the graphic looks right even if it isn’t! These days, however, I tend to produce more accurate 3D representations which can be viewed from any angle and still look correct; that is more time-consuming, but it became easier with practice.

I prefer this code-up approach to the more artistic approach of manipulating meshes like virtual putty, which many ‘professional’ 3D graphics programs take. Even though the latter can potentially generate more professional-looking graphics, I just find them awkward to use, though I may sit down and learn to use them properly one day. I am not one of those people who put weeks of work into a single graphic, trying to make it look real, like a photograph. The biological graphics I try to make somewhat realistic but also clear, to maximize their educational value. The alien graphics I like to be aesthetic: I prefer the otherworldly look of unreality that computer graphics can create to photographic accuracy. Of course, the film industry has more powerful tools: they can scan 3D objects to generate shapes from triangular meshes and then alter them. I do not have that luxury.

POV-Ray also has third-party add-on tools, like Tom Aust’s ‘Tomtree’, which the author has kindly provided for general use and which enables one to construct realistic-looking trees. It is quite difficult to use at first, as you have to set a long list of parameters which alter things like bark texture, branch angle and the number of kinks in the trunk. Tomtree seems to be underused, but with patience, the time taken to understand what the parameters do, and a good background in observing trees in nature, good results are possible. The POV-Ray community provides many useful tools, tips and tricks; Peter Houston’s ‘blobman’ is another one I have found useful.

There is often a degree of trial and error in POV-Ray, especially when randomization is used to create more natural graphics, and this is where the artist comes in: deciding what looks aesthetic and tweaking the parameters to bias the ‘randomness’ in the desired direction. Patience is often required: the most complex graphics I have produced took all night to render, though most take seconds or minutes. A graphic may have to be rendered and tweaked many times before it is finished.

SP:   What advice would you have for young scientists hoping to pursue, as you say in your mission statement, “science for science sake”?

BOT:  Too many people today are automatically ‘programmed’ to think the only science worth doing is that which society tells them is worth doing. Take cell biology, for example. The primary goal ought to be to understand how cells work. This is interesting in itself, and would also have countless spin-offs for medicine and engineering. Ideally we would let those who just want to study cells, do so, and those that want to use this information to develop treatments for cancer to do so too. In reality, in order to attract funding everyone seems to claim their research is just about cancer, or similar matters of interest to medicine (i.e. to the pharmaceutical industry). Too many projects have to justify themselves in this incorrect manner. This is not ‘science for science sake’. Science needs no justification. Science should not be subservient to technological industries, which have their own proper place. This increasing commandeering of science by the corporate world has, I think, also led to an exponential increase in public mistrust of science.

To do ‘science for science sake’, I would say, above all else, follow your heart and your true intellectual curiosity. Put science first, even before your own career and political standing. If you do science for any other reason then you are not doing ‘science for science sake’. If your key interest happens to be fashionable with funding bodies, then you are fortunate; if not, then leave university research as I did. Consider working for a museum. Everything that exists is worthy of study. In my opinion a true scientist puts science first. The key is to always keep an open mind and follow your own inner curiosity. Be intrinsically motivated and strive to self-actualize. Take inspiration from Mother Nature.

You may also have to: ‘Give to Caesar what is Caesar’s’! By this I mean you may have to endure financial hardship, particularly if you find the system will not let you follow your curiosity. At least this is my experience. It is a worthy trade to observe the wonders of Nature. Like the sages of old you may find yourself an ‘eremite’ wandering from abode to abode, accumulating little of financial worth, but you will learn much and see many wonders. The most productive years of my life I was without full-time employment. I was lucky that I had people to help support me. If I had remained in full-time employment, then Cronodon would not exist! Even though it is possible to pursue hobbies whilst fully employed, I needed a long span of time to contemplate, do my own research and tap my inner psyche. Science, for me, is a spiritual and intellectual quest. Working for money, or any other mundane reason, makes it hard to find one’s true inner scientist! I have simply returned to what I enjoyed so much as a child: exploring science and the wonders of Nature and imagining life on alien worlds! I am never more myself and never more fulfilled than when I follow my own scientific and artistic interests and when I work on Cronodon.

SP:  What are your future plans for Cronodon? Are you hoping to expand content in specific areas, or alter the scope of the site?

BOT: At the moment, due to time constraints, much of my work on the site has been in improving existing content, and adding the occasional new article. I intend to fill a few key gaps. There are some major groups of invertebrates I have yet to write about. I want to add more articles on quantum physics (the ones I have added have been quite popular) as there are some interesting key topics I still want to discuss. Writing these articles also helps clarify my own understanding, so I shall write new articles on whatever topics grab my interest. Some of the older articles still need reworking, and bibliographies are waiting to be added where applicable.

Up until a year ago, when I had more spare time, I was developing my Plutonium project, in which the reader can interactively explore a region of space in a virtual spaceship. This project combines science fiction with science fact. It was nearing a functional level of completion but has been left hanging for the past year. There are some scripted worlds to visit and some hidden aliens to discover (aliens and worlds which not even Google’s search engine has discovered so far, as they are on ‘hidden’ pages the reader has to find by exploring space), and I hope to add many more soon, when time permits. This project combines two of my favorite areas of science, biology and astrophysics, with science fiction, computer programming and computer art, and so seems to be the ultimate synthesis of where Cronodon has been going over the years.

Biology will likely remain the main emphasis of Cronodon, as it is my main discipline. However, I do enjoy writing physics essays, but I see no point in, and indeed have no time for, writing exhaustive articles on the topic complete with mathematical proofs, when so many textbooks already do this. Instead I shall target key areas that grab my interest, especially when I think I can summarize highly technical concepts and introduce them to a wider audience or provide more complete explanations. Although a trained mathematician, I usually avoid mathematical articles, though I have written some, as this expands the scope a bit too far for the time I have. However, physics textbooks often omit many steps from key proofs, expecting the reader to fill in the gaps. I may write some key articles with fuller proofs and explanations in these areas, which may be of interest to students. I am planning some more mathematics and computing projects, for my own amusement, and so some articles on these topics are likely to appear.

In short, where I expand will depend on where my interest takes me and what time I have available. At the same time, I want to keep the current general flavour of Cronodon, and not all my projects are likely to appear online. Cronodon displays only a small and selective sample of my projects and investigations into science. In the end time defeats us all, and I would really love to find a like-minded person to work with, who might one day carry it on from me.

SP: Given the remarkable effort you put into the site, and the high quality of content you produce, do you feel that Cronodon receives the recognition it deserves?

BOT: Thank you for the positive feedback! Recognition and publicity are always welcome if they help more people find Cronodon and benefit from it. Whatever recognition Cronodon deserves is not for me to judge, but individuals do send me feedback occasionally, generally positive, and some people have obtained permission to use some of my graphics in their publications. Sometimes people contact me to thank me for providing information they found hard to find elsewhere. I would like more people to find the site and find something of value in it. People do sometimes contact me to say how glad they were to find my site, but also how hard it was to find! Many of the graphics do appear in Google images given the right search phrases.

It certainly has been a lot of effort, though I could do so much more if I had the time, but I still have to work for a living, albeit part-time to cater for my somewhat ‘minimalistic’ lifestyle.

I do recall the CEO of Google saying how he would like to see more people combine science with art for educational purposes and to inspire people to become more interested in science. I feel that I am trying to do just that. Maybe I should contact Google and see if there is anything more they can do to help promote Cronodon, apart from compiling links with their search bots. I simply don’t have the money to pay for advertising. Still, I like to think that the curious will find my site eventually.

Those who have borrowed and acknowledged my images in their publications have brought a small amount of recognition. However, I don’t honestly expect Cronodon to get much recognition, whether it deserves it or not. Most people simply don’t have time for ‘science for science sake’. This negative attitude to pure science has seemingly increased over the years, driven I think by governments and media who want to control science for practical and economic gain and who see no value, for example, in knowing what lives in the sea. However, there are still many people who are genuinely interested in the world around them, and if they find Cronodon useful, then that is enough. I hope that Cronodon can promote ‘science for science sake’ in a world which is increasingly ignorant of it. Whatever happens, I will continue to put a lot of effort into Cronodon simply because I enjoy it and I know that some other people do too.

Jun 26

Xiol, J., Spinelli, P., Laussmann, M.A., Homolka, D., Yang, Z., Cora, E., Couté, Y., Conn, S., Kadlec, J., Sachidanandam, R., Kaksonen, M., Cusack, S., Ephrussi, A., Pillai, R.S., 2014. RNA Clamping by Vasa Assembles a piRNA Amplifier Complex on Transposon Transcripts. Cell 1–20. doi:10.1016/j.cell.2014.05.018
A truly sick pape, like any great work of art, weaves together the threads of many diverse historical influences into an intricate - and preferably blacklight sensitive - tapestry. Thus, a true artist reveals a web of connections between his or her historical predecessors, connections that may seem obvious in hindsight, but which would in fact never have existed without the artist’s work. For example, consider two of the major literary works that have influenced us here at Sick Papes™: How To Bag The Biggest Buck of Your Life by Larry Benoit, and Untruths About Animals by Orville Lindquist. Without our important artistic output, the now-obvious spiritual kinship between these two masterpieces would have been lost on a generation of biologists. This is undoubtedly what Borges meant when he wrote that “every writer creates his own precursors.”
Today’s Pape weaves together two particularly deep and powerful historical threads. The first is a 16th century Swedish King; the second is the epic world-historical tale of the silk industry. When these two stories come together on the loom of cell culture and in vivo validation, you’ve got yourself a fucking insane tapestry. 
King Gustav Vasa (1496-1560) was the patriarch of the Vasa noble family of Sweden, which ruled until 1654, when the family died off with no heirs. This lack of heirs is the reason why Vasa is also the namesake of the vasa gene: when the first group of sterility genes was identified in Drosophila (i.e. those mutations that cause sterility), these genes were all named after extinct European royal families that had left no heir: Vasa, Tudor, Valois, Staufen. Vasa is a molecular marker for germ cells of nearly all animals, which makes it a very fascinating and widely studied gene. Infuriatingly, however, we still don’t have a clear understanding of exactly why this gene is necessary for producing heirs.
Keep that in mind as you allow your mind to drift to China in 3,000-5,000 B.C., when people first domesticated the silkmoth. For the following 7,000 years, we have been going nucking futs for silk, and have continued to pour unimaginable resources into understanding the biology of this wildass insect. Whereas Drosophila has flourished as a model organism because of its relatively simple genetics and rapid reproduction, the silkworm Bombyx mori has none of these advantages (i.e. it’s got 28 pairs of chromosomes and is wicked friggin’ labor intensive to keep). But our insatiable need to be clothed in silk at all times has kept this bugger well-studied for thousands of years.
One of the completely unpredictable outcomes of all the silkworm research has been the discovery of an immortalized cell line where a very specific biological pathway is active: the Piwi pathway. (The Piwi gene, like Vasa, is necessary for fertility, but its namesake is not European royalty but rather an acronym for “P-element induced wimpy testis”). The Piwi pathway, which we’ve spoken about before, is a recently discovered pathway that utilizes small RNA molecules (“piRNAs”) to protect the genome against transposons and other pieces of destructive jumping DNA. The silkworm-derived cell line is apparently the only cell line currently known where the Piwi pathway is active, and therefore provides one of the most biochemically tractable opportunities to pick apart this freaky pathway.
These data monsters use the silkworm cell line to uncover a direct physical connection between Vasa and the Piwi pathway: it turns out that Vasa protein molecules are the physical location where the small piRNA precursors are exchanged between members of the Piwi pathway, allowing them to serve as templates to destroy transposons. Thus, it turns out after all these years that Vasa is essentially part of the Piwi pathway, and that one of the major functions of this mysterious gene is to protect against the genomic damage caused by transposons, which is especially critical in germ cells and other types of stem cells, where genomic injury can have particularly gnarly consequences. And thus, the true purpose of the past 7,000 years of silk cultivation, European monarchy, and Cell Press has finally been revealed, and for this we celebrate.


Jun 01

image

Piketty, Thomas. Capital in the Twenty-First Century, Cambridge, MA: Harvard University Press, 2014.

We live in the so-called “information age.”  Mid-way through my recent hippie wedding to my brilliant and beautiful hippie wife, the cold hard facts of the times in which we live broke through the fuzzy math of our timeless love. The scents of lavender and rosemary wafted through the wooden ceilings and floors of the quaint nondenominationalinterfaithseriouslyanyidentityiscool chapel. A founding editor of Sick Papes — z/s/he who must not be named — arose to give a rousing, heartfelt testimonial of our incontrovertible goodness-of-fit. I was moved to tears. Then, our fearless editor let drop some critical supplemental material regarding the woman who was moments away from becoming my betrothed. “But there’s one thing you should just accept right now. Your wife,” z/s/he deadpanned. “She is a data monster.”

Now my liberal arts hippie education had taught me that the roots of our modern world lie in white supremacy, patriarchy, and the cultural hegemony of the West. But a new life of knowledge lay ahead of me. From now on, to paraphrase a t-shirt I once bought at South New Jersey’s Cowtown Fleamarket, if you ain’t a data fan, you ain’t shit.

Luckily, my wedding coincided with the release of a sick book about a data-driven approach that I can get behind. In his new book, Capital in the Twenty-First Century, Francophone Thomas Piketty dances the gavotte with a trove of data so sick, so monstrous, so exhaustive and so beautiful that even the usually irascible Paul Krugman had to just be like, ‘Imma step back and let you finish.’ As though forged from the consummation of all my intellectual and spiritual obsessions, the themes of the book and its reception also bear more than a passing resemblance to rock and roll heroes, anti-heroes and feuds of the twentieth century. This is an issue that I will get to in a moment.

But first, here’s one way to describe the basic argument. Essentially, insatiable data séducteur Piketty marshals centuries of wealth and income records from France, Britain, the United States, and occasionally a few other Northern European countries, to demonstrate some disturbing tendencies of capitalism. In particular, his data show that inequality has grown significantly since the industrial revolution in the West, owing to the observed reality that the rate of return on wealth has generally been much greater than the rate of growth of the economy (the now-famous r > g).

There are actually a lot of arguments in this book. Coming in at over 600 pages, one might suspect Piketty is afflicted by logorrhea. Or maybe he’s trying to overwhelm his critics with the sheer volume of his words/figures/publicly accessible excel spreadsheets. But you know what? Sometimes that’s just how a data monster rolls. So here’s the summary slide in layman’s terms: if you own capital (things like real estate, financial stock, and industrial equipment) then you will get much richer, much quicker than if you rely on the income you expect from being a working man/woman/Z. This is because the growth of jobs is, in the bigger scheme of things, highly dependent on the growth of the economy as a whole, while growth of capital is dependent on other things such as accessing returns to production using existing capital, which can be relatively or even totally independent of labor.
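The arithmetic behind this summary is worth seeing directly. The toy simulation below (my own illustration, not Piketty's model; the rates chosen are hypothetical round numbers) compounds a capital owner's wealth at a return r and a worker's wage at the economy-wide growth rate g, and reports how far apart they end up:

```python
# Toy illustration of the r > g dynamic (not Piketty's actual model):
# capital compounds at the return rate r, while wages track overall
# economic growth g. When r > g, an initially equal endowment and
# income steadily diverge. The rates here are hypothetical examples.
def divergence(r=0.05, g=0.015, years=50, wealth=100.0, wage=100.0):
    """Return the ratio of compounded wealth to wage income after `years`."""
    for _ in range(years):
        wealth *= 1 + r  # returns on capital compound at r
        wage *= 1 + g    # labor income grows with the economy at g
    return wealth / wage

if __name__ == "__main__":
    # With r = 5% and g = 1.5%, fifty years turns parity into a
    # roughly five-fold gap between the owner and the earner.
    print(round(divergence(), 2))  # ≈ 5.45
```

The point of the sketch is that the gap depends only on the ratio (1+r)/(1+g) compounded over time, which is why even a modest, persistent excess of r over g produces dramatic divergence across a generation or two.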

As a side note, one of the twists on this argument, especially in the United States, is that there is also a major divergence in income inequality. Especially with respect to the top 1% and even the top .01% of earners, this is because of the emergence of a new class of so-called “supermanagers.” This term can mean one of two things. Either A) there is this new class that is just so fucking good at raking in cash for their companies that there is no way they could not be deserving of astronomical salaries and bonuses, hundreds of times the earnings of the average employee in a given firm. Or B) this group of “supermanagers” is actually a group of “super-board-packers,” and ensure that the people who decide to hire them, give them raises and bonuses, are their fuck buddies who get titillated by — or are willing to pretend to get titillated by — all the nasty butt speak in option A.

Anyway, these findings are all well and good, but Sick Papes is first and foremost a highly relevant cultural blog as I had understood it. So I’ll leave the economic crap for now. Instead, it has come the time for me to reprise a question that I last asked regarding Kanye West almost ten years ago. Is Thomas Piketty the new Bob Dylan?

Let’s examine the evidence. This French novelist-inspired, surrealist economist draws on the traditions of counter-cultural folk economics (e.g. Karl Marx aka the Woody Guthrie of the social sciences), upends accepted mores with state-of-the-art technologies of the field (e.g. Stata aka the Stratocaster of data computing), and yells “va te faire foutre!” to the data haters while the guardians of the neo-classical flame try to cut off his mic with an axe (e.g. the Financial Times aka Pete Seeger).

Hold up. The Financial Times? Let me explain. Indeed, Piketty is now engaged in a cultural battle last witnessed almost a half century ago, when Lynyrd Skynyrd and Neil Young went head to head on rock radio airwaves. Arguably, Young fired the first shot, releasing a stinging and highly empirical condemnation of racism in the southern United States in his song, ‘Southern Man.’ Here’s a typical verse:

I saw cotton / and I saw black, tall white mansions / and little shacks / Southern man, when will you pay them back? / I heard screamin’ and bullwhips cracking / How long? How long?

Southern rockers Lynyrd Skynyrd responded not with empirical research of their own, but rather with a distinctly emotional and objectively foul appeal:

In Birmingham they love the guvnah / Now we all did what we could do / Now Watergate does not bother me / Does your conscience bother you? / Tell the truth

Tell the truth indeed. Racist scum. This song is one of those instances where I feel like white people really become “those people.” As Merry Clayton, one of the black back-up singers on the song, said recently about the recording session: “We knew it was ‘Sweet Home Alabama.’ And we didn’t actually want anything to do with Alabama at that time in our lives.” [Little wonder that in 1971, Clayton released a furiously funky version of Young’s “Southern Man.”]

Well, you know who else became one of “those people” last week? The aforementioned economics editor of the Financial Times. He suddenly claimed, most explosively, that Piketty had ignored a dataset seeming to indicate that inequality had actually been dropping in Britain, to levels that would make the UK even more egalitarian than Sweden. Then the FT splashed this “finding” across the front page of the newspaper in what basically amounts to academic cyber-bullying, that is, publication without peer review.

Those people. Leeching off of the hard work of others and claiming their right to journalistic handouts. SMH.

Piketty took six days to release his ultimate smackdown. In this 4,000-word response, he lays out a very clear statistical methodology that also exposes just how erroneous the Financial Times’ approach was. Even the new dataset upon which the FT’s case rests is self-described as “experimental.” And not in the good way. I suspect Piketty must have been listening to a recent album by liberal-arts-educated hip-hoppers Das Racist. Piketty essentially said to the Financial Times, “Sit down, man. It’s time for a global tax on the wealth of all of these uncivilized wealth-hoarders.” I suspect he hit “send” on his review, and then went out to buy a Shure SM57 microphone just so he could drop it like the economic bad-ass that he is.

As Bruce Springsteen once sang, “we learned more from a three minute record than we ever learned in school.” Well, I sure learned a hell of a lot from these extracurricular experiences this past month. First, I learned to comprehend in such a profoundly joyous way the love that surrounds me and my lovely wife, the “data monster.” And, soon afterwards, when I finished this sick treatise, its last words resounded deep in my soul:

“It seems to me that all social scientists, all journalists, and commentators, all activists in the unions and in politics of whatever stripe, and especially all citizens should take a serious interest in money, its measurement, the facts surrounding it, and its history. Those who have a lot of it never fail to defend their interests. Refusing to deal with numbers rarely serves the interests of the least well-off.” (577)


Shout out to my editor on this piece, aka sickest pape writer ever, aka “the data monster,” aka my wife.

May 15

We have recently learned that a dear friend of ours, Camille Barr, is ill with brain cancer. Camille was a very important friend and mentor to many of us here at Sick Papes, and we wanted to pay tribute to her science and to her contagious love for life.


In the hustle and muscle of the quest for the next Sick Pape, it is easy to forget that one of our most important responsibilities as scientists is to train and mentor the next generation of sicksters. It can be a thankless job to babysit and clean up after infant scientists, and yet there are heroes among us who embrace this task with grace and compassion. Today, we recognize one such hero, a singular woman whose kindness, humor, and rigor as a scientist made an indelible mark on the impressionable minds of three fledgling scientists.

Way back in 2006, Camille Barr was a post-doc in Lila Fishman’s lab at the University of Montana. Three of us Sick Papes vets had our first lab technician jobs in Lila’s lab (a 4th worked across the hall with her husband, Scott). We were fresh out of college and excited to learn how to do biology. Camille was the one who taught us how to purify DNA, how to do PCR in 96-well plates, and how to stay on the good side of the evil She-Goblin who ran the sequencing facility. We spent many afternoons working side by side with Camille in the greenhouse, growing the 9,000 seedlings that formed the basis of her big project and today’s Sick Pape.

Barr CM, & Fishman L (2010).
The nuclear component of a cytonuclear hybrid incompatibility in Mimulus maps to a cluster of pentatricopeptide repeat genes.
Genetics, 184 (2), 455-65 PMID: 19933877

Blooming flowers pack a tremendous wallop of human emotion: delicate beauty, tenderness, love and - if incorporated into the right tattoos - a horrifying link to the Army of the Night. But the beauty of floral diversity is more than skin deep. The radiations that spread flowers all over the world also offer a surfeit of human opportunity to understand the molecular mechanisms by which dumb luck and ecological adaptation drive the creation of new species, which are often exquisitely adapted to whatever swamp, mountain ridge or post-apocalyptic amusement park they ended up in.

Since plants can’t move (A. Bruce Saunders, personal communication circa 2005), neighboring flower populations often exist at different points of the species boundary continuum. In the lab, you can test the functionality of these boundaries directly by artificially mating neighboring sub-species of flowers. The resulting flower-children have mosaic genomes and by sampling a small piece of tissue from each plant, it’s possible to attribute each genetic chunk of each mosaic chromosome to either parent species. Some combinations of parent genomes make a functional plant. Others do not, and it is likely that these chunks of the genome are responsible for the divergence between the emerging species. By sampling thousands of plants with mosaic genomes, you can infer statistically what genome combinations are lacking and are thus incompatible. You can also test which “mutt” plants are producing working pollen and which are shooting blanks, and whose sterility will thus be an evolutionary dead end.
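The statistical logic of “inferring what genome combinations are lacking” fits in a few lines of Python. This is a hypothetical illustration (invented counts, a textbook chi-square goodness-of-fit test), not the paper’s actual pipeline: in an F2 mapping population, each marker should segregate guttatus/guttatus : heterozygote : nasutus/nasutus in a 1:2:1 Mendelian ratio, and a marker where one genotype class has gone missing flags a candidate incompatibility.

```python
# Detect a missing genotype class at a marker in an F2 mapping population.
# Counts below are invented for illustration.

def chi_square_1_2_1(gg, gn, nn):
    """Chi-square statistic for observed genotype counts vs a 1:2:1 ratio."""
    total = gg + gn + nn
    expected = [total / 4, total / 2, total / 4]
    observed = [gg, gn, nn]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A well-behaved marker: roughly 1:2:1, small statistic.
neutral = chi_square_1_2_1(250, 510, 240)

# A distorted marker: NN plants are almost entirely missing.
distorted = chi_square_1_2_1(330, 640, 30)

CRITICAL = 5.99  # chi-square cutoff for df = 2, alpha = 0.05
print(neutral > CRITICAL)    # False: consistent with Mendelian segregation
print(distorted > CRITICAL)  # True: candidate incompatibility region
```

In real mapping studies this test is run at every marker across thousands of plants, which is why the greenhouse work is the hard part.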

Genetic incompatibilities are the fuel of speciation, the final straw. Their biochemical details often highlight near misses in essential processes for building a viable organism: in essence, protein gears from one species that have mutated just far enough not to fit the protein cogs of the other. Once we know what genetic mismatches build boundaries in the lab, we can dip out, return to the field, smoke some grass, and track how ecology and evolution utilize these particular genes as gears and cogs in the wild.

One epic and historically important stage for tracking flower species boundaries is Iron Mountain in the Southern Cascades in Oregon. Every spring, just after the snow melts, two neighboring populations of Monkey flowers (Mimulus) come bursting out of the ground (Figure 1). Before she got her own lab at the University of Montana, Lila Fishman, working with John Willis at Duke, had taken Mimulus guttatus and Mimulus nasutus from Iron Mountain, made crosses, and grown the mutt plants up in the greenhouse. One of her startling discoveries was that mutt plants carrying a particular piece of nasutus chromosome had sterile pollen (Figure 2) - but only when those mutt plants were generated from matings wherein guttatus was used as the “mom.” When nasutus was the mom, the same chunk of nasutus chromosome - called the cytoplasmic male sterility (CMS) locus - left the equivalent mutts with perfectly normal pollen.


Figure 1. Mimulus (the yellow flowers) growing on Iron Mountain. Photograph courtesy of Sick Papes.

WTF right?

Turns out in plants, just like in humans, the plant-egg (called a gametophyte) donates all the cell-juice (cytoplasm) to the developing organism. And what essential organelle lives in the cytoplasm and has a genome of its own? The mitochondria! Thus the CMS phenotype arises via a genetic interaction between the mitochondrial and nuclear genomes. While both genomes care about the health of the plant, only the nuclear genome gives a flying Fuze drink about pollen. Another tragedy of male sexuality. Kamikaze mitochondrial mutations that affect the plant’s ability to produce eggs would be a dead end, but mutations that affect pollen production? No biggie for mitochondrial survival to the next generation. But without pollen, the plant population will quickly crash and burn. This situation creates intense evolutionary pressure to select for mutations in the nuclear genome that counteract what went wrong in the mitochondria and restore pollen fertility.


Figure 2. Sterile pollen (left) and viable pollen (right) of Mimulus mutts (taken from Fishman and Willis 2006)

And sure enough, some of the Mimulus mutts that should have been pollen-sterile were not. These fertile freaks contained a mystery part of their genome that, when it originated from the same species as the mom (guttatus), left them completely spunky. Lila called that version of the gene “the Restorer.” In the wild, the current guttatus population has both the pollen-damning mitochondrial mutation and the counterbalancing Restorer mutation, so, despite the internal drama, their pollen is normal.
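The cytonuclear logic here is simple enough to write down as a toy truth table. The following Python sketch is our simplification, not anything from the pape: mitochondria come from mom, the guttatus mitochondrial type carries the sterilizing CMS mutation, and a guttatus copy of the nuclear Restorer rescues pollen.

```python
# Toy model of the CMS/Restorer interaction (simplified; real genetics
# is messier): cytoplasm, and hence mitochondria, come from the mom.

def pollen_fertile(mom_species, restorer_allele):
    """mom_species: species donating the cytoplasm ('guttatus' or 'nasutus').
    restorer_allele: nuclear genotype at the Restorer region."""
    cms_mitochondria = (mom_species == "guttatus")  # guttatus carries the CMS mito
    if not cms_mitochondria:
        return True  # nasutus cytoplasm: no sterilizing mutation to rescue
    return restorer_allele == "guttatus"  # CMS mito needs the guttatus Restorer

print(pollen_fertile("guttatus", "nasutus"))   # False: the sterile mutts
print(pollen_fertile("guttatus", "guttatus"))  # True: the Restorer rescues
print(pollen_fertile("nasutus", "nasutus"))    # True: fine with either allele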

In these experiments, Lila didn’t know what genetic elements constituted the CMS or the Restorer, she only had meaningless genetic markers located close to these genes. Some brave soul needed to ID these friggin’ genes and bring this genomic turf war to the streets.

Enter our hero, Camille. Identifying the gene that constituted the Restorer would yield insights into both the mysterious ways the nuclear and mitochondrial gene products can interact (a basic, largely unknown cell biological phenomenon) and how evolutionary pressures sculpt the genetic landscape and natural history of a natural population of flowers. But for such rich insights, there was a difficult experimental burden. In order to tease apart the large region where the Restorer was situated, and to distinguish guttatus/nasutus alleles, Camille would need to add many more genetic markers. She would need to breed a plant that carried the sterilizing mutant mitochondria and was almost completely nasutus but still had functional pollen. Then the only guttatus genome chunk still present would implicate the genomic region that contained the Restorer.

After creating the markers, Camille undertook a massive breeding experiment, genotyping and pollen-phenotyping over 6,000 plants. She paid particular attention to a large array of pentatricopeptide repeat genes (PPRs) within the region containing the Restorer. PPRs had been implicated in restoring pollen fertility in crop plants: they bind mitochondrial mRNAs and regulate their expression, so changes in PPR function offer a plausible mechanism to buffer otherwise sterilizing mitochondrial mutations. In a heroic effort of mapping, Camille determined that the Restorer resided in two small PPR clusters very close to each other on the same chromosome. Why two instead of one? Camille and Lila provided evidence that either cluster, if guttatus in origin, was sufficient to Restore. And by further narrowing the candidate genes in each cluster to those whose sequence suggested they would be targeted to the mitochondria, Camille isolated one and six likely genes in the two clusters, respectively.


Figure 3. Chromosome schematic of the two Restorer PPR clusters in the Mimulus guttatus genome. PPR genes are shown as rectangles. Black rectangles denote those PPR genes putatively targeted to the mitochondria (from the sick pape itself).

Camille’s results suggest that PPR genes combat selfish mitochondrial mutations across the plant kingdom. More vividly, they capture evolution in action: the presence of two closely spaced PPR clusters, each sufficient to restore pollen fertility, suggests a recent duplication of the restoring region. Such duplications create the PPR diversity essential to buffer future mitochondrial mishaps. Considering that the Mimulus genome, like other plant genomes, is chock full of PPRs, this nuclear/mitochondrial arms race is likely an ancient and powerful force in sculpting the genomic landscape of plants.

After she slayed the Mimulus Restorer, Camille decided to switch things up. Soon after we left Montana, she went to law school, and in 2012 she became a practicing attorney specializing in intellectual property law. But even as she was studying law at UC Irvine, Camille would regularly visit the populations of wildflowers she had studied during her PhD. Somehow, in the midst of becoming a high-octane, power-suit-wearing attorney, Camille discovered a new flower species, which she named Nemophila hoplandensis after one of her favorite haunts, the UC’s Hopland Research & Extension Center. Camille’s Pape describing the new species (the white one below), which includes genetic crosses with related Nemophila species and molecular phylogenetics data, will be appearing in an upcoming issue of Madroño, the journal of the California Botanical Society.


Thinking back on our days in Missoula, we remember Camille as bright and exuberant as those thousands of flowers we grew together. Today, reading our names in the compact Acknowledgements section of this Pape, it is difficult to describe the depth of experience hidden in those few sentences, and it certainly does not capture the warmth and affection Camille gave to her crew of sloppy techs.

May 09

Sick Papes (SickPapes) on Twitter -

In response to overwhelming (and legally binding) demands from fanboys and House Republicans alike, we have joined Twitter. So now all the non-Tumblr people can stay up to date on what’s really and truly hot right now. See you there!! 

May 04

Lee, H., Kim, D., Remedios, R., Anthony, T., Chang, A., Madisen, L., Zeng, H., & Anderson, D. (2014). Scalable control of mounting and attack by Esr1+ neurons in the ventromedial hypothalamus. Nature DOI: 10.1038/nature13169

One of the more terrifying sub-genres of modern neuroscience is the study of animal aggression—specifically, the manipulation of brain circuits that produce unmitigated rage. And it’s no coincidence that David Anderson’s group at Caltech, the ruthless storm trooper horde of the ivory tower, has produced another sick pape that brings us one step closer to the production of ultra-furious super mercenaries.

Humans have been provoking animals for billions of years, but it wasn’t until the pioneering (and brutal) experiments of Philip Bard in the 1920s that we realized that animal rage could also be elicited by chopping out specific chunks of the brain. Bard found that surgically removing the cortex of a cat caused it to develop an angry attitude—the kitty would hiss and snarl and attack its previously beloved caretaker/experimenter. This type of behavior was called “sham rage”, because it was not directed at a specifically aggravating stimulus, but presented as a generally foul disposition toward all things great and small.

Using brain lesions in lots of cats, Bard eventually figured out that he could abolish sham rage by disconnecting the hypothalamus and the brainstem, suggesting that the hypothalamus is important for producing aggressive behavior. Walter Hess confirmed this hypothesis a couple decades later, by demonstrating that electrically stimulating the ventromedial hypothalamus is sufficient to produce sham rage. (Walter won the 1949 Nobel Prize for discovering all the crazy things that cats will do when you electrically stimulate the diencephalon.)

this cat's got the sham rage

Now, this recent pape by Hyosang Lee and colleagues has found a (somewhat) specific group of ventromedial hypothalamus neurons that are responsible for triggering unadulterated rage in the mouse. In a Nickelodeon Guts-esque feat of experimental fortitude, the authors searched for neurons that fire during mouse battles by comparing expression of an activity-dependent transcription reporter (c-Fos) to various cell-type specific markers. This led to the discovery of a population of aggression-associated hypothalamus neurons that express the estrogen receptor (Esr1). Building a Cre knock-in mouse allowed them to virally transfect Esr1+ neurons in the hypothalamus with light-gated ion channels (ChR2 and Halo). They could then manipulate the activity of Esr1 neurons by shining light on the hypothalamus through an implanted fiber optic.

Surprisingly, the Anderson gang found that optogenetically stimulating these neurons in male mice provoked either carnal advances or attack, depending on the intensity of stimulation. At low intensities, male mice would mount other mice (of either sex), while at higher intensities, they would repeatedly sucker-punch their bewildered cage-mates. They also found that optogenetically silencing these same neurons during normal anger sessions terminated the altercation.

Together, the oppressively thorough experiments in this pape show that Esr1 neurons play a critical role in generating behaviors of passion. Of course, we still have no idea how a single population of neurons in the hypothalamus produces such complicated behavioral sequences—it is likely that they provide a gating signal to the exceedingly complex circuits in the brainstem that produce sequences of motor behavior. Working out this hierarchy is going to be a devilish task.

Let’s conclude with some deep thinking. Although most of us have come to accept the fact that our behavior is completely controlled by a goo-bag of neurons behind our eyes, there is still something implausible about the idea that driving activity in an obscure brain circuit can provoke violence. Watching these videos, you can’t help but consider how you would respond to this same manipulation—is there any amount of “self-control” that could overcome the impulse to attack? And would you feel shame or guilt while being artificially compelled to assault an innocent stranger? These are questions that will need to be answered before we can successfully engineer the hyper-wrathful super-soldiers (post-docs) that our military (the Anderson Lab) needs and deserves.

Apr 03

SickPapes Special on Suren N. Sehgal (1932-2003) and the discovery of the TOR pathway

From:

Vézina, C., Kudelski, A., Sehgal, S.N., 1975. Rapamycin (AY-22,989), a new antifungal antibiotic. I. Taxonomy of the producing streptomycete and isolation of the active principle. J. Antibiot. 28, 721–726.

To:

Laplante, M., Sabatini, D.M., 2012. mTOR signaling in growth control and disease. Cell 149, 274–293. 

In our younger and more vulnerable years, it was exciting to learn “how things work.” But, as we’ve grown older, and gotten more seriously into smoking weed, it is the discovery stories behind “how things work” - the ways that people figured it out in the first place - that we find truly spine-tingling. We’ve said it before, and we’ll say it again: there is nothing better than choosing a hot-ass research topic, strapping yourself in, and doing a psychedelic literature search all the way back to the beginning to see how it all got started. The euphoria from the resulting “PubMed High” is, truly, nature’s candy.

Case in point: TOR signaling. TOR signaling is a highly conserved pathway that cells use to respond to nutrients and other external signals, and is therefore a central focus for tons of important biomedical research on conditions that involve cell growth and/or nutrition, things like cancer, diabetes, and obesity. There are, quite literally, shit-loads of papes about TOR - trust me, I’ve smoked them all. 

Given how “mainstream” and “biomedical” this field is, I was not at all emotionally prepared to learn how the pathway was discovered. To begin, look no further than the name of the pathway: TOR, which stands for “Target of rapamycin.” This name refers to the fact that the pathway responds to (i.e. is disrupted by) a drug called “rapamycin.” Rapamycin, it turns out, is where the story gets freaky-deaky, and leads us to the trippiest place on earth, Easter Island. And Easter Island, as we know, is where all scientific discoveries ultimately begin. 

In 1964, a team of Canadian microbiologists went to Easter Island, looking for soil microbes that produce natural antibiotics. One of the soil samples they collected contained a bacterial strain which secreted a factor with potent anti-fungal activity. Dr. Suren N. Sehgal and his team named this factor rapamycin, in honor of the local name for Easter Island, Rapa Nui.  

There is a cliche of scientific discovery stories that goes like this: an unsuspecting biologist, studying some relatively obscure organism, winds up identifying a molecule that has wildly important and far-ranging applications. Penicillin is the most famous example, but similar stories are told about Green Fluorescent Protein (from jellyfish; now used to visualize proteins in vivo), thermostable Taq polymerase (from a hot-springs bacterium; now used to amplify DNA), and CRISPR-associated enzymes (from yogurt bacteria; now used to achieve GATTACA-esque dystopian fantasies about modifying babies).

Of course, by now it has also become almost cliche to point out that this Surprise-turn-NobelPrize narrative is total bullshit, and all of these discoveries were in fact made by excellent, forward-thinking scientists who knew what they were doing. Not to say that there wasn’t some element of serendipity in how revolutionary such discoveries ultimately became, but it’s important to emphasize that these discoveries were not lucky one-offs. As many mythbusters have pointed out, even the “accidental” discovery of Penicillin was actually done by a guy who had devoted his whole career to identifying anti-bacterial compounds, and involved lots of work by others who are rarely credited. As it has been written before: “Tenacity frequently precedes rather than follows serendipity.” 

Point is: Dr. Sehgal did not just “get lucky.” While it might be tempting to imagine him as an esoteric microbiologist with no idea how important rapamycin would become, this was not how it went down at all. Dr. Sehgal ran a lab at a pharmaceutical company that was set up specifically to systematically screen for anti-microbial factors produced by other microbes. Once they identified these Easter Island bacteria, they quickly isolated the active compound (rapamycin), figured out how to produce it in quantity, and then discovered that, in addition to its anti-fungal properties, rapamycin worked in mammals as a powerful immunosuppressant. (Rapamycin was eventually turned into a drug to help suppress the immune system after organ transplants.) Rapamycin was soon discovered to also suppress the proliferation of some kinds of tumors. 

In the 50 years since the discovery of rapamycin, an enormous number of researchers have worked to identify the pathway that is targeted by rapamycin (the TOR pathway), and have begun to figure out the complex ways that this pathway links external signals (like nutrition) to control of the cell cycle and other basic metabolic processes. This explains how rapamycin suppresses tumor growth (by blocking the progression of the cell cycle), and how it suppresses the immune system (by blocking the proliferation of immune cells in response to antigens). That’s what we call a “hot pathway.” 

To read more about Suren N. Sehgal, check out this moving tribute to his research and life, which celebrates “his life and his contributions to mankind.”