Wednesday 16 October 2013

Robust science

As part of my book project on complexity I have been reading William Wimsatt's 'Re-engineering philosophy for limited beings', which is in essence a subset of his papers from the last 30 years merged into a coherent whole.



The main purpose of the book is to introduce a new philosophy of science that accounts for our limitations as human beings. Traditionally, philosophy of science has assumed that scientists are perfect beings with infinite computational power who never make mistakes, and Wimsatt's aim is to replace this view with one in which scientists are fallible and error-prone. If this is the case, how do we formulate a scientific method that accounts for and embraces these limitations?

The greater part of the book is devoted to answering this and related questions, and I will here only mention one aspect that I found particularly intriguing.

The traditional account of a scientific theory is a set of assumptions or axioms, together with some rules of deduction that dictate how novel true statements can be produced. Assuming that the axioms are true, we can generate a possibly infinite set of true statements, all connected by the rules of deduction. The picture is that of a network, in which true statements are nodes and deductions form the links.

In reality, however, scientific statements are rarely held together by truth-preserving rules of inference, but rather by experimental data, hand-waving analytical arguments and results from models. All of these may contain flaws, which risk undermining the theory. If an experimental result turns out to be wrong for some reason, then the corresponding link in the network breaks, and if that link points to a statement with only one link, then the statement has to go.

How then should we deal with this situation? Wimsatt's answer is that we are already dealing with it, by making robust inferences. In general we don't trust the result of a single model, but instead require some independent verification; if two different models provide the same answer, it is more likely to be accurate. The more links that point towards a statement, the more likely it is to be true, and even if some of the experimental results or model-based conclusions turn out to be flawed, the statement still stands.
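This robustness idea can be made concrete with a toy sketch (my own illustration, not from Wimsatt's book): model each statement as a node supported by a set of evidence links, and say a statement stands as long as at least one of its links survives. The statement names and evidence labels below are of course made up for the example.

```python
# Toy model of robust inference: statements as nodes, each supported by
# a set of independent lines of evidence (links in the network).
support = {
    "statement_A": {"experiment_1"},                        # single line of support
    "statement_B": {"experiment_1", "model_X", "model_Y"},  # robustly supported
}

def surviving(support, failed_evidence):
    """Return the statements that still have at least one intact link
    after the given pieces of evidence have been discredited."""
    return {s for s, links in support.items() if links - failed_evidence}

# Suppose experiment_1 turns out to be flawed:
print(surviving(support, {"experiment_1"}))
# statement_A falls with it, while statement_B still stands
# via the two independent models.
```

The point the sketch makes is exactly Wimsatt's: a conclusion reachable through several independent routes degrades gracefully when any one route fails.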

I think this network analogy is a useful way of illustrating how scientific knowledge is accumulated, and it has certainly helped me in thinking about my own work.
