Misunderstanding the Progression of Big Science

In an interesting article about large-scale science projects (aka ‘Big Science’), Tim Requarth wrote this, which seems to have received a lot of support in the intertoobz (or at least the Twitterz; boldface mine):

But here may be the real conclusion from looking at the HBP and the BRAIN Initiative: Neither should be called “big science.” That’s just rhetoric. If you think about it, building a particle accelerator, sequencing the human genome, or putting a man on the moon are actually not science projects: They are engineering projects. Building a particle accelerator, for example, involves a set of engineering issues—a huge series of very complicated engineering issues. But while the accelerator may help discover lots of new particle physics, there is no new physics needed to build it. Likewise with the Human Genome Project: Determine the sequence of about three billion base pairs in DNA and you’re done. This requires a lot of clever approaches for reading gene sequences and time-intensive bench work but no new molecular biology. And putting a man on the moon, likewise, involved new rockets, navigation, and life-support systems but no new physics.

In fact, there’s actually no such thing as big science; we should really be calling it big engineering. Scientific disciplines are ready for big engineering once they have a solid, well-established theoretical structure. At the time of the Human Genome Project, scientists agreed about the theory, the “central dogma,” of how genetic information is contained in the sequence of letters in our DNA and how that information is turned into action. Then they joined in a massive engineering effort to unravel the specific sequence of letters inside human cells.

I think this is completely backwards: the first time you do big science, it’s an engineering project. The second time, it’s science–you begin to test hypotheses. The first time someone sequenced a large number of bacterial genomes, it was largely a technological challenge (we also didn’t know what sorts of questions would emerge, but more about that in a bit). But once we became pretty good at it–‘the machine was running’–we started testing interesting hypotheses and ‘doing science.’

I also think the claimed progression from theory to engineering misses two things. First, Big Science spurs new theoretical and analytical developments, since the scale of the data (e.g., sample sizes) often renders previous ‘small data’ techniques obsolete (or at least irrelevant). Second, Big Science can yield data that you simply had no chance of collecting previously. An older colleague once asked me if microbial genomics would just be larger datasets–“what I’ve been doing for thirty years, just bigger.” Ultimately, my response came in the form of a thirty-minute talk, in which I argued that phenomena like genome organization, along with the resolution provided by genomics, raised (and answered) questions that were different in kind.

To use the human genome example Requarth provides, the first human genome was largely a logistical and infrastructure exercise. But sequencing 1,000 human genomes (or many more than that), while still technologically challenging, also enables you to test hypotheses. Of course, like any science project, Big Science can be designed stupidly, but that has little to do with scale.
