To what extent can we break things down and still expect to see the big picture?
By Padideh Kamali-Zare, PhD
What Freeman Dyson said in Imagined Worlds certainly holds for brain research as well: “New directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained.”
But can we really pay enough attention to concepts and ideas in a scientific world entirely structured by tools and technologies? Or can we expect new concepts to emerge automatically in a world ruled by tools? If the answer to both questions is no, what is the problem, and what should we do?
It is of course valuable to have a wide range of sophisticated tools and techniques in research. They help us test different hypotheses and address questions that could not be answered otherwise, no matter how elaborate our thinking and fundamental theories are. This is especially true in fields concerned with complex systems, such as neuroscience and in particular brain research. It is only with the help of these tools that we can access different aspects of the brain and address questions raised at different levels of abstraction.
However, there is also a harm associated with tools that is unfortunately entangled with their benefits: they break things down into very small pieces and sometimes destroy the connections between those pieces, so that it becomes almost impossible to see the big picture. And we humans cannot truly understand a system unless we can see it as a whole and carry a story about it in our minds. Tools give us useless information as well as useful information, which can muddle our overall understanding of a system, simply because they shed light on non-key elements as well as key ones. For an overall understanding to take place, we need a story in which the key players are known and their actions and interactions with each other form the main scenarios. For that story to take shape, we need a hypothesis, and to prove that hypothesis right or wrong, we then need to employ techniques. There is no point in using techniques merely to answer the questions those techniques happen to answer if we have neither a story nor a hypothesis.
As Thomas Insel correctly said: “We know much less about the brain than any other organ, and yet brain disorders, from autism to Alzheimer’s, are increasing in prevalence, creating a national public health crisis. Recognizing both the urgency and the complexity, the BRAIN report calls for a broad approach, involving a $4.5 billion investment over 10 years beginning in fiscal year 2016, to decode the language of the brain by understanding its circuits.”
Reading this, I have a concern: what are the original hypotheses about the brain that we are testing with all these resources? What are the stories we are so eager to falsify? What are the scenarios? What if we spend all our resources answering “questions that we can” instead of “questions that we should”? What if we get completely lost in the huge amount of data that such technology-driven brain research will generate? What if, even after gathering all that data, we never manage to embrace the complexity of the human brain?
Maybe before taking any step forward, we need to take a few steps back, rethink the problem of the brain from the beginning, and re-evaluate the path we are taking and why we are taking it. Maybe it is time to free our concepts from the control of tools and technologies, and instead employ tools and technologies to falsify our fresh hypotheses and novel concepts.