Research in the physical and biological sciences has traditionally used an invasive reductionist approach. In other words, the systems under study are dissected into their component parts, the parts are individually studied and an attempt is made to understand the whole as the sum of its parts.
The technical method of study, simply described as "the experiment", usually perturbs the object of study in some way and the consequences of this perturbation are noted. This reductionist and experimental approach has been, and continues to be, very successful, but it will be increasingly important in the future to supplement it with a more holistic and non-invasive approach.
Francis Bacon, the 17th-century philosopher, advocated intervention in nature - experimentation - in order to understand it. The founding fathers of modern science, including the anatomist William Harvey (1578-1657), who discovered the circulation of blood, Johannes Kepler (1571-1630), who discovered the laws of planetary motion, and Galileo Galilei (1564-1642), who first studied the heavens by telescope and initiated the general method of science, adopted the reductionist experimental approach with great success, and so it has continued to this day.
Even in Bacon's time there were dissenting voices. Some people felt the appropriate way to study nature was to observe it respectfully from a distance. They felt that to interfere and to carry out experimental manipulations could only produce a distorted picture. Probably the most forceful critic of the Baconian philosophy was Johann Wolfgang von Goethe (1749-1832), who was born 123 years after Bacon's death.
Goethe was particularly interested in morphology, the systematic study of forms as they are found unaltered in nature. The method he advocated was contemplation, and his aim was to achieve synthesis. He couldn't believe in a method for gaining knowledge that relied on taking things apart.
Few scientists were persuaded by Bacon's critics. This was most fortunate. Nature is far too complex for the early scientists, approaching it de novo as they had to, to have made significant progress using a contemplative, non-interventionist method. Science would have ground to a halt. History shows that, so far, science has progressed in direct proportion to the extent to which it has intervened experimentally in nature.
However, we are probably now entering an era where an exclusive reliance on the reductionist method will yield diminishing returns.
The limits of the reductionist approach have been discernible for some time. For example, in biochemistry you can learn a lot about the chemical functioning of an organ by removing it from the body and studying it in isolation. However, in most cases the finer details of how the organ functions in the body are too subtle and complex to be worked out on an isolated organ. You must study it in situ, in a functioning whole. Again, in zoology, you can conveniently study animal behaviour in a zoo. However, this behaviour, although valuable to study, differs in important ways from behaviour in the wild. Even physics, the queen of reductionism, has come up against seemingly impenetrable barriers to extending its observations indefinitely at the level of the very small.
The reductionist approach works best when a whole system approximates in some rough way to the sum of its parts. However, biological systems are problematic in this respect. Here we have a hierarchy of organisation - for example, from the top down we have communities of animals, individual animals, systems of organs, individual organs containing different tissues, individual tissues, cells, sub-cellular organelles, and basic biomolecules. Usually, when a number of components at one level co-operate to form a unit of organisation at a higher level, new properties emerge that cannot be predicted from the known properties of the components at the simpler level.
Again, in physics the reductionist approach has run into limits at the level of the very small, beyond which it cannot pass. Consider the uncertainty principle, first proposed in the 1920s. This states that it is impossible to know, simultaneously and with complete precision, both the position and velocity of an atomic particle. The more precisely you locate the particle's position, the less you can know about its speed. This is quite unlike the macro world that we are all familiar with, where it is simple to measure simultaneously the position and speed of a motorcar.
The uncertainty principle is often explained as a consequence of the act of measurement. In order to measure the speed and/or position of an object you must detect the object, for example by bouncing light off it. Bouncing light off a large object has no appreciable effect on either its location or its speed. However, bouncing light off an atomic particle gives it quite a jolt, changing both its speed and position. So, if you detect its position you jolt it into a new and unpredictable speed. (I have also read a deeper interpretation: that the uncertainty principle is not merely a consequence of measurement, and that in the strange world of the very small no particle possesses a precise speed and a precise position at the same time.)
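The trade-off described above can be stated compactly. In the standard textbook form of Heisenberg's relation (which the article paraphrases in words), the product of the uncertainty in position and the uncertainty in momentum can never fall below a fixed quantum limit:

```latex
% Heisenberg's uncertainty relation: \Delta x is the uncertainty in
% position, \Delta p the uncertainty in momentum, and \hbar = h/2\pi
% is the reduced Planck constant. Their product has a lower bound,
% so shrinking one uncertainty necessarily inflates the other.
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Because the constant on the right is minuscule (of the order of 10^-34 joule-seconds), the bound is utterly negligible for a motorcar but dominates completely for an atomic particle, which is why the two worlds behave so differently.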
Einstein's theory of general relativity predicts gravitational waves that interact with matter, causing it to vibrate. However, the vibrations would be extremely tiny. If they could be measured they would confirm the existence of gravitational waves, but for many years the uncertainty principle has frustrated experimental detection of these tiny vibrations. As far as I am aware, this still pertains, although physicists feel confident that gravitational waves exist, based on observations of binary neutron stars.
The reductionist experimental approach continues to be very powerful. However, it has its limitations and these are becoming increasingly obvious. To go to the next level of sophistication in science, we need to develop methods that can investigate whole working systems.
William Reville is a senior lecturer in biochemistry and director of microscopy at UCC