Tuesday, December 30, 2014

Remembering David Yue

My friend and colleague, David Yue, Professor of Biomedical Engineering at Johns Hopkins, passed away suddenly in his laboratory on Tuesday, December 23, 2014. This post is dedicated to his memory.


On Saturday my phone rang. I went to pick it up, and was surprised that it said "David Yue". Puzzled, I thought: wouldn't it be amazing if it were my dead friend calling? I answered it, tentatively saying "hi", and heard the voice of David’s wife, Nancy.

She was calling to ask if I could speak at his memorial.  I said of course and asked how she was doing.  She said that it gets better with every passing day.  Just the night before she and the three boys had opened the presents that David had placed under the Christmas tree.  One of the presents got the four of them laughing. "It was so David", she said.  "He gave me 27 years of happiness".

David gave a quarter century of happiness to us, his colleagues and students in the Biomedical Engineering Department at Hopkins. With his hands, he constructed some of the pillars of our undergraduate education: a course called Systems Bioengineering and a course called Ion Channels. What was it like to be a student in his class?

Joseph Greenstein writes:  “I go back nearly twenty years to when I was a student taking his class on ion channels. At the time he was reading a biography of Sir Isaac Newton, and David shared his favorite bits with us in class. David took to using the phrase made famous by Newton, “standing on the shoulders of giants,” when referring to Hodgkin, Huxley, and many other pioneers of quantitative electrophysiology. Beyond his extraordinary skill and passion as a scientist, he had a rare talent for transforming explanations of biological mechanism into engaging, eloquent stories. I recall him likening a CaMKII molecule hovering over a calcium channel to the massive mother ship in the movie Independence Day hovering over a city. As I teach students how to model channel gating and pursue my own lines of research, David is the giant whose shoulders I stand on.”

Gerda Brietwieser writes: “His explanations of the electrocardiogram were the clearest and most beautiful I have ever heard. He was a stellar scientist and a wonderful human being.”

He mentored over 30 PhD students and postdoctoral fellows.  What was it like to be a student in his lab?  Let’s hear David describe it, in this 2002 note that he wrote for his student Carla DeMaria:

“Carla, you have had a ride to remember the past few years in lab, forging friends, colleagues, science, and truth.  Thank you for sharing a path of self-discovery borne of waging exciting battles together, and laboring arm-on-arm with courage.  Treasure these years, as I will.  They have nurtured you with a strength that will sustain you in life and work.“

Manu Ben-Johny, a graduate student in his lab writes: “He was an incredible mentor and an exceptionally kind and generous person. He was always there to help us - whether it was b/c our electrophysiology rigs weren't air-floated properly or if it was a personal struggle we needed advice on. He spoke with eloquence and enthusiasm about science that was truly inspirational. His absence leaves behind a large void in my heart but I know his memories will continue to guide my life.”

What was it like to be one of his colleagues?  When one of his sons started taking physics at college, David began re-learning thermodynamics on his own. One morning he walked into my office, wearing his black shirt, the coffee cup from the 3rd floor cafe in his hands, the rainbow-colored band that held his badge around his neck, and started telling me about a fundamental equation. I joined him at my blackboard, handed him a piece of chalk, and asked him to start at the beginning, because I wanted to understand it too.

Together, David and I built the PhD program from 50 students to nearly 200. Just a couple of days ago, I was sitting in my office and thinking of ways to organize this year’s admission process. I had a thought, and like always, I wanted to bounce it off David. Does this idea make sense, David? If we do it this way, would it make a difference? I was about to leave my chair and go up to the 7th floor to find him in his office, but then I sat down.

One of the worst things about growing old is that you lose people that you love.  And so it is for me.

John Keats wrote: "A thing of beauty is a joy forever: its loveliness increases; it will never pass into nothingness."  And so it is with David.




Friday, June 27, 2014

Effort of movements in Parkinson’s disease

The very terms that we use to describe the motor symptoms of Parkinson’s disease (PD) imply a subjective scaling of time and space: bradykinesia (slowness of movement), tachyphemia (cluttering of speech), and micrographia (smallness of handwriting). Although these symptoms are stable features of the disease, a remarkable property of PD is that under some conditions the symptoms can spontaneously improve.

In 1965, R.S. Schwab and I. Zieper, two neurologists at the Massachusetts General Hospital, described the case of a 62-year-old male PD patient who exhibited severe tremor and severe rigidity and was totally dependent on his wife. His wife would start her day by dressing him, laying out his breakfast, and making his lunch; she would then go to work, come back in the afternoon to make his dinner, and finally get him undressed and ready for bed. One evening his wife had severe abdominal pain and had to be taken to the hospital for emergency surgery. The next day she woke worried about her husband, and was surprised when the nurse told her that he had come to visit her. He had dressed himself, made his own breakfast, and then taken a taxi to the hospital. At the hospital his neurologist noticed him and, upon examination, found that he was able to walk 50% faster than in past examinations. “All his motor tests were improved in spite of the presence of the same amount of rigidity and tremor that had been present before.”

A second case was another elderly male with advanced-stage PD and severe rigidity, who was confined to a wheelchair, unable to walk alone, and living on the first floor of his home in Providence, RI. A hurricane approached the city and his wife left to get some supplies from the drugstore. “As a result of the storm the harbor overflowed 10 feet into the street.  The patient, sitting in his wheelchair, suddenly saw the door blown in and a wall of water entered the house.  Exactly how he did it is not clear, but he managed to get out of his wheelchair and climbed the steps to safety on the second floor where he was found several hours later by his wife, the waters having subsided. She found him seated in a chair as helpless as he was before.”

While these examples are anecdotal, there are other more controlled instances in which the PD patients show marked improvements in their movements.  One example of this is in the movements that are made during sleep.  Although healthy people do not move during REM sleep, people with PD sometimes experience REM sleep behavior disorder (RBD).  Valerie Cochen De Cock and her colleagues studied movements made during sleep by PD patients and reported that the movements were “surprisingly fast, ample, coordinated and symmetrical, without obvious signs of parkinsonism”.  They found one patient singing a song with a “strong and sonorous voice, a wide smile on his face” (he used to sing before his PD), another “declaiming political speeches with a loud voice” (he used to give speeches at the town council), another “shouting and getting hold of a heavy oak table and throwing it across the room”, and another “fighting with an invisible foil, with great agility” (apparently to save his lady-love from an attacking knight).

The mechanisms by which the brain of a Parkinsonian patient produces these feats remain a complete mystery. But these observations do hint that latent in the PD brain is the ability to make fairly normal movements. Yet these movements are apparently unavailable for expression except under extraordinary circumstances. Why?

Neuroeconomics of movements

Pietro Mazzoni, Anna Hristova, and John Krakauer studied this question by asking PD patients and healthy controls to reach with their dominant (and more affected) arm to a target. Visual feedback of the hand was removed at reach onset, and at the end of each reach the volunteers were given feedback regarding the speed and accuracy of their movement. Crucially, the trial had to be repeated if the speed was outside the requested range. The authors found that for a given reach velocity, the endpoint accuracy of the movements made by the PD patients was similar to that of controls. This again illustrated the latent abilities of the patients. However, the patients required many more attempts to produce a reach that was as fast as the requested speed. That is, the patients were capable of producing movements of normal speed and accuracy, but it took them more trials to become motivated to make the fast movements. The authors proposed that under normal conditions, the patients lack the “motor motivation” that healthy people possess in generating their movements.

I have suggested that one way to view this result is to consider the possibility that in the brain, each movement reflects a balance between two factors: the reward that one expects to acquire at the end of the movement, and the effort (or motor cost) that will be spent in generating that movement (Shadmehr et al., Journal of Neuroscience, 2010). The reward that we expect to acquire represents the subjective value of the movement. For example, if you see a dear friend, the subjective value of the steps that you are about to take toward your friend is higher than if you are walking to greet someone you may not be so fond of. As a result, you will walk faster toward the dear friend. (I have often thought that to examine how my brain currently values people in my life, I should measure the speed at which I walk toward them.)

Indeed, humans and other animals tend to move faster toward things that they value more. This was first illustrated by Okihide Hikosaka and his colleagues in the saccadic eye movements of monkeys. In these experiments, thirsty monkeys were trained to move their eyes to a location in exchange for a reward (juice). In some blocks of trials the juice volume was a little larger, and in some blocks it was a little smaller. The peak velocity of the saccades was higher in the blocks in which more juice was at stake. That is, the monkey’s eye movements were faster when the subjective value of the movement was higher.

In the real world we do not make saccadic eye movements in exchange for juice. Rather, we move our eyes to place the part of the visual scene that we are interested in examining on our fovea. Do we make faster saccades to things that we value more? In humans, this idea was first illustrated by my former student Minnan Xu-Wilson. She asked people to make saccadic eye movements to spots of light, but after each saccade was completed she ‘rewarded’ them by showing them a picture of a face, an object, or simply a noisy image. She found that saccades made in anticipation of viewing a face were faster.

These experiments illustrate that one of the factors that influences the speed with which we move, that is, the vigor of our movements, is the subjective value of the reward that we expect to attain at the end of the movement. The higher this expected value, the faster the movement.

The second factor is the subjective cost of the effort that is required to make the movement.  If the subjective value of the reward associated with two potential movements is the same, people pick the movement that requires less effort.  

Now suppose that we have to move a given distance. How does the brain decide on the speed of the movement? The faster we move to cover that distance, the greater the force we have to produce. If effort is related to force (perhaps because of the metabolic cost of generating force), then the subjective cost of effort will be higher for faster movements than for slower movements that cover the same distance. So if we move more slowly, we will produce smaller forces with our muscles and incur a lower subjective cost of effort.

However, the slow movement will bring us to our goal later.  Time discounts reward.  That is, it is better to arrive at a valuable state sooner rather than later.  So the subjective value of the movement drops if we arrive later at the destination, making it better to move fast so we get to our goal sooner. 

In summary, the subjective cost of effort makes it better to move slowly so that we produce smaller forces, but the passage of time makes reward less valuable. These two factors compete, and the movement that the brain produces appears to be the best possible compromise between them. That is, the speed at which we move is one that produces the smallest possible effort (encouraging us to move slowly), while at the same time maximizing the subjective value of the reward we hope to attain (encouraging us to move fast).
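
One can make this trade-off concrete with a toy formulation (a sketch of the general idea, not the specific cost function used in any one of the papers above). Suppose a movement of amplitude \$ d \$ and duration \$ T \$ earns a reward of subjective value \$ \alpha \$ that is discounted hyperbolically with time, and incurs an effort cost that grows as the movement is made faster, for example \$ U(T) = \frac{\alpha}{1+\gamma T} - \frac{c d^2}{T} \$. Setting \$ dU/dT = 0 \$ gives the preferred duration \$ T^{*} \$, which satisfies \$ \frac{\alpha \gamma}{(1+\gamma T^{*})^{2}} = \frac{c d^{2}}{T^{*2}} \$: a larger reward value \$ \alpha \$ pulls \$ T^{*} \$ down (faster movements), while a larger effort sensitivity \$ c \$ or a larger amplitude \$ d \$ pushes \$ T^{*} \$ up (slower movements).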

Dopamine disorders alter the neuroeconomics of movements

In Parkinson’s disease, some of the neurons in the substantia nigra, a nucleus in the basal ganglia, gradually degenerate and die. These neurons provide dopamine to much of the brain, and in particular to the striatum, another region of the basal ganglia. Dopamine appears to play a critical role in regulating the two factors that control movements: the subjective value of reward and the subjective cost of effort.

Over the course of the last two decades, John Salamone and his colleagues have been investigating the effects that loss of dopamine has on the behavior of rats. When rats are offered a choice between pressing a lever a few times to obtain preferred food and eating a less preferred food that requires no lever pressing, they choose to spend the effort and press the lever to get the preferred food, but only if the lever pressing requires modest effort. When a drug that acts as a dopamine antagonist is injected into their basal ganglia, the rats become less willing to press the lever and forgo the better food, settling for the less effortful choice. On the other hand, if a drug is injected that enhances the action of dopamine, the animal becomes more willing to press the lever, even if it has to press it many times in order to earn the better food.

Therefore, it appears that when dopamine’s actions are disrupted, the balance between subjective value of reward and cost of effort shifts.  Loss of dopamine shifts the balance by increasing cost of effort and decreasing value of reward, whereas increase of dopamine shifts the balance by decreasing cost of effort and increasing the subjective value of reward.  

In this framework, loss of dopamine in PD shifts the neuroeconomics of movements toward ones that have smaller effort costs, which include movements that are slow. This speculation would not explain why certain movements of the patients are better during REM sleep, but it does provide a framework for understanding the paradoxically fast and able movements that they exhibit under extraordinary circumstances: perhaps under these conditions, a greater proportion of available dopamine is engaged, increasing the expected reward for the movement and countering the effort costs.
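
As a numerical toy illustration of this idea (my own sketch, not a model taken from any of the studies above; all parameter names and values are invented), one can scale the discounted reward in the trade-off written out earlier by a single 'dopamine-like' gain and ask how the preferred movement duration changes when that gain is reduced:

import numpy as np

def utility(T, reward_gain, reward=1.0, gamma=1.0, effort_c=0.02, distance=1.0):
    # Hyperbolically discounted reward (scaled by a dopamine-like gain) minus an
    # effort cost that grows as the movement gets faster (shorter duration T, in seconds).
    return reward_gain * reward / (1.0 + gamma * T) - effort_c * distance ** 2 / T

def preferred_duration(reward_gain):
    T = np.linspace(0.05, 5.0, 2000)   # candidate movement durations (s)
    return T[np.argmax(utility(T, reward_gain))]

print("normal reward gain (1.0):  T* = %.2f s" % preferred_duration(1.0))
print("reduced reward gain (0.4): T* = %.2f s" % preferred_duration(0.4))
# Lowering the gain lengthens the preferred duration, i.e. produces slower
# movements, which is the bradykinesia-like prediction of this framework.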

Sunday, April 20, 2014

Breaking habits and erasing memories

The summer day on Okinawa Island of Japan was so warm that those of us who were there to teach had given up on going to the beach in the afternoon and had decided instead to try some early morning tennis. On this particular early morning, a distinguished scientist and friend had joined our group, and there he was, holding the ball and warming up for his serve. As he tossed the ball up, he bent his body sideways and then back, twisted upwards, and finally had his racquet make contact with the ball, delivering a pretty good serve.

I stood there marveling at how he had learned to serve this way.  He said: “Well, I learned on my own, and despite lots of coaches who have tried to break it down and rebuild it, I haven’t been able to change it much.”




Memories, like that of how to hit a tennis serve, can become so persistent that the brain seems unable to change them. This, on the surface, may not appear that important, as the cost is merely looking a little silly and not being able to do something as efficiently as possible. But what if you are traveling with family on a peaceful day and stop at a gas station, and suddenly the smell of petroleum brings back memories of combat, paralyzing you with fear? What if you are watching a movie in which the hero is climbing the face of a rock, and when she reaches the peak she stands and looks down, and you find your knees shaking? Does the brain have a mechanism in place to rebuild or even erase unwanted habits and fear-inducing memories?

Until about 15 years ago, it was generally assumed that when the brain learns something new, the newly acquired memory is initially in a labile state and can be readily changed, but after a short period of time (hours), it becomes ‘consolidated’, meaning that it becomes resistant to change. For example, when rats were given a single pairing of a tone with a foot-shock, the next time they heard that tone they became scared and stopped moving. If a drug was given to them that disrupted the molecular pathways involved in consolidation (protein synthesis inhibitors), the next day when they heard the tone they were not scared of it. However, the drug had to be given soon after the animal’s first experience of the tone-shock pairing. If it was given even a few hours after the first experience, it did not have much of an effect; the animal still feared the tone. And so it seemed that once an emotional or fear-inducing memory was acquired, there was little that could be done to change it.

The basis for this idea was a century of work that had described how memories form. Neurons communicate with each other via their synapses, tiny junctions where one neuron sends messages to and receives messages from another neuron. Eric Kandel, a Columbia University neuroscientist, had shown that short-term memories, things that last for a few minutes, are due to transient changes that make the synapse more efficient, but these changes are sustained only for a short period of time. To make memories last, the changes at the synapse had to be sustained indefinitely, and this required the manufacture of new proteins. If the initial experience was strong enough, with the passage of time these new proteins were made by the neuron and the memory was maintained, apparently becoming permanent.

But in the year 2000, Karim Nader, an Egyptian-born neuroscientist who was raised in Canada, made a discovery that completely overturned this idea. He was working in Joseph LeDoux’s laboratory at New York University, where he took rats and gave them a single pairing of a tone with a foot-shock, and indeed, the next day he found that when they heard the tone, the rats froze in their tracks (rats express fear by ‘freezing’). However, right after they heard this tone, he injected into their amygdala (a region of the brain involved in storing fearful memories) a drug that inhibits protein synthesis. Amazingly, he found that a day later, when they heard the tone, their fear was reduced by half (measured by the time spent ‘freezing’). Interestingly, if the drug was given without reactivation of the memory (that is, if on day 2 the tone was not played), it had no effect. And if the animal heard the tone but was given the drug 6 hours later, it still feared the tone the next day. So the key idea was that the fear-inducing memory could be weakened if the drug was given right after the memory was reactivated, but not if the drug was given alone, or if the memory was reactivated without the drug.

Unfortunately, protein synthesis inhibitors cannot safely be given to humans, and so until recently it was unclear whether this new understanding could be applied to fear-inducing memories in people. In 2009, Merel Kindt and colleagues in Amsterdam asked a group of undergraduate students to look at a picture of a spider; a few seconds later they played a loud sound, followed by a mild shock to the hand. When the students heard the loud sound, they had a startle reflex, producing an eye blink. The students were also shown a picture of another spider, followed by another loud sound, but no shock to the hand. So the students learned to fear the picture of the first spider, but not the second. The amount of fear was measured by how they reacted to the loud sound. Indeed, the students feared the first spider more than the second. The students returned on day 2, and Kindt showed them the picture of the first spider, but did not shock them. Right after this, the students were given a drug called propranolol, which is often used to prevent stage fright and which works by inhibiting the actions of norepinephrine. When the students returned the next day, they did not show fear of the spider. Importantly, if the drug was given without showing the picture of the spider, the fear-inducing memory remained.

So it seems possible that in humans, certain fear-inducing memories can be weakened by a combination of reactivation of that memory and consumption of certain drugs like propranolol.  Later work from the Kindt group showed that the key step is that during recall of the memory, there must be a prediction error.  That is, during recall, the brain appears to predict that a bad thing is going to happen (a shock), and if it does not happen, and the drug is present, then the memory is weakened.  Both the prediction error and the presence of the drug seem to be required, as one without the other is much less effective. 

These approaches are now being studied for the treatment of PTSD. In a recent study, propranolol was given to people who had been involved in a serious car accident. These people were less likely to develop PTSD symptoms in the following 3 months than people who were given a placebo.

Notice, however, that all the successes have been on weakening newly formed memories.  What about the old fear-inducing memories?  The news there is less clear.  Older memories may be less likely to be affected when they are reactivated.   Which brings me to one of my favorite quotes from Margaret Thatcher, who was quoting her father when she said:

Watch your thoughts for they become words.
Watch your words for they become actions.
Watch your actions for they become habits.
Watch your habits for they become your character.
And watch your character for it becomes your destiny.

References
Kindt M, Soeter M, Vervliet B (2009) Beyond extinction: erasing human fear responses and preventing the return of fear. Nature Neurosci 12:256-258.
Nader K, Schafe GE, Le Doux JE (2000) Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. Nature 406:722-726.
Sevenster D, Beckers T, Kindt M (2013) Prediction error governs pharmacologically induced amnesia for learned fear. Science 339:830-833.


Sunday, March 16, 2014

The puzzle of menopause

Human females appear to be unique in the animal kingdom in that they live far beyond the end of their fertile period. Typically, menopause occurs in the 4th decade of life, and women can expect to live into their 8th decade. In men, however, fertility continues to near the end of life. Although there are clear age-related declines affecting the endocrine system, testicular function, and the structure of the sperm chromosomes, there appears to be no andropause; that is, men retain a significant probability of fertility, but women do not. (In 1935, three physicians reported what may be the oldest American father on record, a 94-year-old North Carolina man who married a 27-year-old widow and fathered a child.)

In contrast to humans, in chimpanzees fertility continues in both females and males until near the end of life.  That is, whereas in women menopause is a mid-life event, in chimpanzee females it is a late-life event.  Why?

A genetic wall of death beyond the fertility years
In 1966, W.D. (Bill) Hamilton, then a young PhD student in biology who would later be called "nature's oracle" for his mathematical reasoning, and whose work would lay the foundation for the "selfish gene" of Richard Dawkins, used a mathematical model of genetics to demonstrate that, from an evolutionary standpoint, genes that protect against disease and extend the lifespan beyond the age of fertility tend to be eliminated by natural selection, and so animals should not live much longer than the end of their fertility.

Bill's argument went as follows: imagine four genes that are expressed in females and give immunity against some lethal disease, but each gene is expressed only in one particular part of life. The first gene is expressed in the 1st year of life, the second gene in the 15th year, the third gene in the 30th year, and the fourth gene in the 45th. Now imagine that fertility ends before the age of 45. If so, the fourth gene confers much less advantage than the first three. This model explained the fertility-age relationship in men, but it could not explain why women lose their fertility at around the midpoint of their life.
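
A toy calculation in the spirit of Hamilton's argument (the age schedules below are invented for illustration and are not taken from his paper) makes the point explicit: if the 'benefit' of a protective gene expressed at age a is scored as the reproduction still expected after that age, the score falls to zero once the gene acts only after fertility has ended.

ages = range(0, 80)
fertility = {a: (1.0 if 15 <= a < 45 else 0.0) for a in ages}   # fertile between 15 and 45
survival  = {a: max(0.0, 1.0 - a / 80.0) for a in ages}         # crude linear survivorship

def residual_reproduction(age_of_expression):
    # Expected reproduction remaining at the age when the protective gene acts.
    return sum(survival[a] * fertility[a] for a in ages if a >= age_of_expression)

for a in (1, 15, 30, 45):
    print("gene expressed at age %2d -> remaining expected reproduction %4.1f" % (a, residual_reproduction(a)))
# The gene acting at age 45 protects only post-reproductive years, so selection
# cannot favor it; this is the 'wall of death' discussed below.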

Evolutionary biologists have been puzzled by the fact that human females have escaped this “wall of death” that, at least theoretically, looms after menopause and appears to be present in many other animals. Numerous theories have been offered. Perhaps in the past, human longevity was too short for females to experience menopause (defined as surviving for at least one year in good health beyond the last menstrual cycle), and so menopause is a byproduct of the increased longevity unique to humans. Perhaps by entering menopause, older mothers increased the survival probability of their children and grandchildren (the grandmother effect). Perhaps reproductive aging was more severe than somatic aging, and so, unlike other functions that could proceed at less than some high level of accuracy, reproduction in females could not, and therefore stopped once accuracy fell below a threshold.

The jury is still out on whether any of these theories are supported by evolutionary data.  However, the most interesting new hypothesis proposes that women experience menopause at mid-life because of behavior of men.

Male sexual preference may lead to female menopause
In 2007, Shripad Tuljapurkar and colleagues revisited Hamilton’s mathematical model of human evolution and, like Hamilton, assumed that there were genes that gave resistance to fatal diseases at certain ages of life. Unlike Hamilton, they assumed these genes existed in both males and females. They added to Hamilton’s model a matrix representing mating preference. In this matrix, \$ M_{i,j} \$ represented the probability that a male of age \$ i \$ mates with a female of age \$ j \$ and, if both are fertile, produces an offspring. They found that if there was a gene that gave resistance to a fatal disease at, say, the 45th year of life in women, and this gene did the same thing in men, then both men and women would benefit from it, because the older men would continue to be fertile and produce babies with the younger women. The interesting idea was that selection would favor survival of both males and females as long as one of the two groups could reproduce with the fertile sub-population of the other group.

But this idea was not entirely satisfactory because the same model would predict that it was better if females could extend their fertility period and like males, never experience menopause.  Sure, having one group live longer than the menopause age of another group would make both groups live longer, but why did natural selection produce menopause in females, but not males?  That is, what is the origin of female menopause in the first place?

In 2013, Richard Morton and colleagues used a similar mathematical model of genetic evolution, but started with the assumption that prolonged fertility was the ancestral state of both males and females. That is, they assumed that in the distant past, neither males nor females experienced menopause. They also assumed the existence of sex-specific, infertility-causing mutations in the genome that would produce menopause. They then asked what conditions might lead to such a mutation being expressed early in females, but not in males.

They found that if males and females had no preference for the age of their partner, then infertility-causing mutations would not become sex specific.  That is, if the age of the partner did not matter to a male or a female, reflected in the matrix \$ M_{i,j} \$, then both males and females would remain fertile into old age.  However, if males preferred younger females, then something interesting happened: female fertility declined without a loss in their longevity, resulting in female menopause, but male menopause never occurred.  The interesting idea was that a male preference for mating with a younger female would specifically affect fertility in females, limiting it and producing menopause.
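
A cartoon of this mate-choice argument (my own sketch, loosely inspired by the Tuljapurkar and Morton models; the preference matrix below is invented for illustration) shows the asymmetry. If males of every age direct their matings toward young females, no offspring are ever credited to females past the preferred age window, so a mutation that ends female fertility at that age is invisible to selection, whereas old males still father offspring and the analogous male mutation would be selected against:

import numpy as np

ages = np.arange(15, 75)   # adult ages considered

# M[i, j]: relative probability that a male of age ages[i] mates with a female of
# age ages[j].  In this cartoon, males of every age prefer 15-35 year old females
# (the row index ai is deliberately unused: preference ignores the male's own age).
M = np.array([[1.0 if 15 <= aj <= 35 else 0.0 for aj in ages] for ai in ages])
M /= M.sum()

share_old_female = M[:, ages > 45].sum()   # matings involving a female over 45
share_old_male   = M[ages > 45, :].sum()   # matings involving a male over 45

print("share of matings with a female over 45: %.2f" % share_old_female)   # 0.00
print("share of matings with a male over 45:   %.2f" % share_old_male)     # about 0.48
# Female fertility past 45 contributes nothing to offspring here, so losing it is
# selectively neutral; male fertility past 45 still matters, so it is retained.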

An amazing prediction of this model is that evolution could have proceeded in a very different path: if females had shown a preference for mating with younger males, then fertility would have declined in the older males, resulting in male menopause, while allowing females to maintain fertility into old age.


Ramajit Raghav, an Indian man who was reported to have fathered a child at 97 years of age.

References
R. Caspari and S.H. Lee (2004) Older age becomes common late in human evolution. Proceedings of the National Academy of Sciences 101:10895-10900.

W.D. Hamilton (1966) The moulding of senescence by natural selection. Journal of Theoretical Biology 12:12-45.

J.G. Herndon et al. (2012) Menopause occurs late in life in the captive chimpanzee (Pan troglodytes). Age 34:1145-1156.

R.A. Morton, J.R. Stone, R.S. Singh (2013) Mate choice and the origin of menopause. PLoS Computational Biology 9:e1003092.

F.I. Seymour, C. Duffy, and A. Koerner (1935) A case of authenticated fertility in a man aged 94. Journal of the American Medical Association 105:1423-1424.

S.D. Tuljapurkar, C.O. Puleston, and M.D. Gurven (2007) Why men matter: mating patterns drive evolution of human lifespan. PLoS One 2:e785.

Saturday, January 25, 2014

Why support curiosity-driven basic research?

A colleague, recently starting as an assistant professor, with a new laboratory and bright young graduate students, seemed unusually stressed. I pried, guessing that in the month of January the well of worry for most biomedical scientists is the looming deadline for submission of grant proposals to the National Institutes of Health. With exasperation, he said: “The funding line is now less than 10%. How do I keep my lab open?”

These days this is a common question, even in elite universities. Each year, tens of thousands of biomedical scientists send a new R01 proposal to the NIH, competing for that small piece of the US budget that has been set aside to fund ‘curiosity-driven’ basic research --- research conducted by independent, often single investigators. These proposals represent a most remarkable channel through which a small portion of the US budget is allocated: the government allows scientists, whose laboratories often house only a few students, to describe their ideas, and then has the peers of those scientists evaluate and rank these ideas, funding the top 10% or so.

In contrast to this curiosity-driven basic research is the ‘mission-driven’ research that the government funds, focusing on themes like the Human Genome Project or the Brain Mapping Project: organized efforts to answer a specific question. My young friend was facing the existential struggle faced by all small, independent laboratories: to pursue their own questions, rather than the ones that the government dictates. This struggle has a surprisingly long history.

The day after the bomb

On Tuesday, August 7, 1945, the New York Times printed in giant letters: First atomic bomb dropped on Japan.  Below the headline were reports on speeches made by Truman and Churchill: “New age ushered”, and the report that when the bomb was first tested, it had vaporized a steel tower in the New Mexico desert.  [A small advertisement on page 2 touted a Manhattan bar that had just installed air conditioning, providing a cool relief from the hot NY summer.]

 
But deep inside the newspaper, in the editorial section, there was a paragraph that more than any other foretold the struggle that was coming.  Not the struggle for liberty and the war against dictators and despots, but the struggle for funding of basic science in the United States.

In its editorial section, the NY Times used the success of the Manhattan Project to exemplify the merits of organized, mission-driven research “after the manner of industrial laboratories.” It used the success of the bomb to lambast university professors who held that “fundamental research is based on curiosity”. It concluded that the path forward was for the government to state the problem, and then solve it by “team work, by planning, by competent direction and not by a mere desire to satisfy curiosity.”


The Manhattan Project set a shining example. Why not do the same for other important problems? Why not a Manhattan Project to cure heart disease, or Parkinson’s disease?

The struggle to fund curiosity-driven research

Just two weeks before the bomb exploded, a report first commissioned by President Roosevelt (and, after his untimely death, delivered to President Truman) had expressed a different view, one that championed curiosity-driven research. In that July 1945 report, titled Science, The Endless Frontier, Vannevar Bush had written: “Basic science is performed without thought of practical ends, and basic research is the pacemaker of technological improvement.”

Vannevar Bush, dean of engineering at MIT from 1932 to 1938, convinced President Roosevelt to form the National Defense Research Committee to coordinate scientific research for national defense, and served as its chairman. By 1941, the NDRC had become part of the Office of Scientific Research and Development (OSRD), which coordinated the Manhattan Project. OSRD, under Bush’s directorship, did something revolutionary: scientists were allowed to be ‘chief investigators’ on projects related to the war effort. Rather than working in a national lab or being employed by the government, they would stay at their universities, assemble their own staff, use their own laboratories, and make periodic reports to committees at OSRD. James Conant, a member of one of these committees, would later write: “Bush’s invention insured that a great portion of the research on weapons would be carried out by men who were neither civil servants of the federal government nor soldiers.” This idea fundamentally changed research in the US, decentralizing it, moving it away from industrial and government labs, and placing it at universities.

In 1944, as the war in Europe neared its end, Bush was called into Roosevelt’s office and there, the President asked him: “What’s going to happen to science after the war?” Bush replied: “It’s going to fall flat on its face.” The President replied: “What are we going to do about it?” 

In November 1944, this question was put down formally in a letter from Roosevelt to Bush and OSRD.  The letter asked four questions: 1) How would the US make its scientific achievements of the war years “known to the world” in order to “stimulate new enterprises, provide jobs, … and make possible great strides for the improvement of the national well-being”? 2) How would medical research be encouraged? 3) How could the government aid private and public research, and how should the two be interrelated? and 4) How could the government discover and develop the talent for scientific research in America’s youth?

Bush believed that advances in fundamental science had paid off spectacularly, resulting in new weapons and new medicines.  “[He] believed that you had to stockpile basic knowledge that could be called upon ultimately for its practical applications, and that without basic knowledge, truly new technologies were unlikely to emerge.”

Bush’s ideas took hold, eventually leading to establishment of the National Science Foundation and the NIH, and the current mechanisms that fund basic science in the US.  But the question persisted: should scientists be allowed to define their own questions in basic science, or should the government organize them into teams that go after mission-driven problems?

From pond scum to the human brain

In 1979, in a Scientific American article, Francis Crick (co-discoverer of the structure of DNA) suggested that a fundamental problem in the brain sciences was to gain control over single neurons. He speculated that if single neurons could be controlled, particularly in the mammalian brain, a critical barrier would be crossed toward understanding both the function of each region of the brain and the mechanisms necessary to battle neurological disease.

Crick did not know it at the time, but basic scientists, doing curiosity-driven research, had already found the key piece of the puzzle in an unlikely place: pond scum. There, in single-celled microbes, there was evidence that light-sensitive proteins regulate the flow of electric charge across the cell membrane (allowing the microbe to respond to light and move its flagella). Thirty years later, building on these basic, seemingly useless results, Karl Deisseroth put the puzzle pieces together, showing how to use light to control single cells in the primate brain and producing a new field of neuroscience called optogenetics.

In a 2010 article, summarizing the remarkable insights gained by his work, Karl Deisseroth reflected on his findings.  He wrote: "I have occasionally heard colleagues suggest that it would be more efficient to focus tens of thousands of scientists on one massive and urgent project at a time --- for example, Alzheimer’s disease --- rather than pursue more diverse explorations.  Yet the more directed and targeted research becomes, the more likely we are to slow overall progress, and the more certain it is that the distant and untraveled realms of nature, where truly disruptive ideas can arise, will be utterly cut off from our common scientific journey." 

Sources
Jonathan R. Cole (2010) The Great American University: its rise to preeminence, its indispensable national role, why it must be protected. PublicAffairs.

Karl Deisseroth (2010) Controlling the brain with light. Scientific American, November, pages 49-55.