Thursday, September 25, 2014

Will science end after the last experiment is performed?

Science is supposed to work like this: you make a theory which explains the experimental data collected up to this point, but which also proposes new experiments and predicts their results. If the experiments don't refute your theory, you are allowed to keep it (for a while).

I agree with this. On the other hand, much of the progress in science is not made like this, as a look back at history shows.

Now, to be fair, making testable predictions is something really excellent, without which there would be no science. To paraphrase Churchill:

The scientific method is the worst form of conducting science, except for all the others.

I am completely in favor of experiments, and I think we should never stop testing our theories. On the other hand, we should not be extremists about making predictions. Science advances in the absence of new experiments too.

For example, Newton had access to a lot of data already collected by his predecessors and sorted by Kepler, Galileo, and others. Newton came up with the law of universal attraction, which applies to how the planets move, in conformity with Kepler's laws, but also to how bodies fall on Earth. His equation allowed him to calculate the gravitational constant from one case, and then it applied to all the other data. Of course, later experiments were performed, and they confirmed Newton's law. But his theory was already science before these experiments were performed. Why? Because his single formula gave a quantitative and qualitative description of a huge amount of data, like the motions of the planets and gravity on Earth.
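To make this concrete, here is the standard textbook consistency check, in modern notation rather than Newton's: for a planet of mass $m$ on a roughly circular orbit of radius $r$ around the Sun of mass $M$, the inverse square force must supply the centripetal acceleration,

$$\frac{GMm}{r^2} = \frac{mv^2}{r}, \qquad v = \frac{2\pi r}{T},$$

which gives

$$T^2 = \frac{4\pi^2}{GM}\, r^3,$$

that is, Kepler's third law, with the same constant $G$ that governs how bodies fall on Earth.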

Once Newton guessed the inverse square law, and checked its validity (on paper) against the data about the motion of a planet and the data about several projectiles, he was sure that it would work for other planets, comets, etc. And he was right (up to a point, of course, corrected by general relativity, but that's a different story). For him, checking his formula for a new planet was like a new experiment, except that the data had already been collected by Tycho Brahe and analyzed by Kepler.

Suppose this data had not been available and had only been collected later: would this mean that Newton's theory would have been more justified? I don't really think so. From his viewpoint, just checking the new cases, already known, was a corroboration of his law, because he could not have come up with his formula from all the available data at once. He started with one or two cases, then guessed the law, then checked it against the others. The data for the other cases happened to be already available, but it could just as well have been obtained later, by new observations or experiments.

In this sense, the new experiments and observations performed afterwards were just redundant.

Now, think of special relativity. By the work of Lorentz, Poincaré, Einstein, and others, the incompatibility between the way electromagnetic fields and waves actually transform when one changes the reference frame and the way they were expected to transform by the formulae of classical mechanics was resolved. The old transformations of Galileo were replaced by the new ones of Lorentz and Poincaré. As a bonus, mass, energy, and momentum became unified, the electric and magnetic fields became unified, and several known phenomena gained a better and simpler explanation. Of course, new predictions were also made, and they served as new reasons to prefer special relativity over classical mechanics. But supposing these predictions had not been made, or not verified, or were already known, how would this make special relativity less scientific? This theory already explained, in a unified way, various apparently disconnected phenomena which were already known.
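For reference, in standard notation: for a boost with velocity $v$ along the $x$ axis, the Galilean transformation

$$x' = x - vt, \qquad t' = t$$

is replaced by the Lorentz transformation

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

under which Maxwell's equations keep the same form in every inertial frame.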

It is said that Maxwell unified the electric and magnetic fields with his equations. While I agree with this, the unification became even better understood in the context of special relativity. There, it became clear that the electric and magnetic fields are just parts of a four-dimensional tensor $F$. The magnetic field corresponds to the spatial components $F_{xy}$, $F_{yz}$, $F_{zx}$, and the electric field to the mixed, spatial and temporal, components $F_{tx}$, $F_{ty}$, $F_{tz}$ of that tensor. The scalar and vector potentials turned out to be unified in a four-dimensional vector potential. Moreover, the unification became clearer when the differential-form version of Maxwell's equations was found, and even clearer when the gauge theory formulation was discovered. These are simple conceptual jumps, but they are science. And if they were also accompanied by empirical predictions which were confirmed, even better.
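Written out, in one common sign convention (the signs depend on the metric signature; units with $c = 1$), the tensor collects both fields in a single antisymmetric matrix,

$$F_{\mu\nu} = \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0 \end{pmatrix}, \qquad \mu,\nu \in \{t,x,y,z\},$$

and a Lorentz boost mixes the $E$ and $B$ components, which is the precise sense in which the two fields are one object.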

Suppose for a moment that we live in a Euclidean world. Say that we performed experiments and tested the axioms of Euclid. Then we keep performing experiments to test various propositions that result from these axioms. Would this make any sense? Yes, but not as much as is usually implied. The propositions are already bound to be true by logic, because they are deduced from the axioms, which have already been tested. So why bother to make more and more experiments to test various theorems in Euclidean geometry? This would be silly, unless we want to check by this that the theorems were correctly proven.

On the other hand, in physics a lot of experiments are performed to test various predictions of quantum mechanics, or special relativity, or the standard model of particle physics, which follow logically and necessarily from postulates that were already tested decades ago. This should be done; one should never say "no more tests". But on the other hand, it gives us the feeling that we are doing new science, because we are told that science without experiment is not science, while we are just checking the same principles over and over again.

Imagine a world where all possibly conceivable experiments have been done. Suppose we even know some formulae that tell us what experimental data we would obtain if we were to do any of these experiments again. Would this mean that science had reached its end, and there was nothing more to be done?

Obviously it doesn't mean this. We can systematize the data. Tycho Brahe's tables were not the final word in the astronomy of our solar system. They could be systematized by Kepler, and then Kepler's laws could be obtained as corollaries by Newton. Of course, Kepler's laws have more content than Brahe's tables, because they would apply also to new planets and new planetary systems. Newton's theory of gravity does more than Kepler's laws, and Einstein's general relativity does more than Newton's gravity. But such predictions were out of our reach at that time. Even assuming that Tycho Brahe had had the means to make tables for all planets in the universe, this would not make Kepler's laws less scientific.

Assuming that we have all the data about the universe, science can continue to advance, to systematize, to compress this data into more general laws. To compress the data better, the laws have to be as universal as possible, as unified as possible. And this is still science. Understanding that Maxwell's four equations (two scalar and two vectorial) can be written as only two, $d F = 0$ and $\delta F = J$ (or even one, $(d + \delta)F=J$), is scientific progress, because it tells us more than we previously knew.
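Spelled out, with units in which $c = 1$ so the correspondence is transparent, the geometric identity $dF = 0$ encodes the two homogeneous equations and $\delta F = J$ the two with sources:

$$dF = 0 \;\Longleftrightarrow\; \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0,$$

$$\delta F = J \;\Longleftrightarrow\; \nabla\cdot\mathbf{E} = \rho, \qquad \nabla\times\mathbf{B} - \frac{\partial \mathbf{E}}{\partial t} = \mathbf{j}.$$

Eight scalar equations are compressed into two geometric statements about a single object $F$.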

But there is also another reason not to consider that science without experiments is dead. The idea that any theory should offer the means to be tested is misguided. Of course, it is preferable, but why would Nature give us the means to check any truth about Her? Isn't this belief a bit anthropocentric?

Another reason not to be extremist about predictions is the following. Researchers try to find better explanations of known phenomena. But because they don't want their claims to appear unscientific, they try to come up with experiments, even when there are none to be had. For example, you may want to find a better interpretation of quantum mechanics, but how would you test it? Hidden variables stay hidden, alternative worlds remain alternative, and if you believe measurement changes the past, you can't go back in time and see it changed without actually measuring it, etc. It is as if quantum mechanics were protected by a spell against various interpretations. But should we reject an alternative explanation of quantum phenomena because it doesn't make predictions that differ from the standard quantum formalism? No; so instead of calling them "alternative theories", we call them "interpretations". If there is no testable difference, they are just interpretations or reconstructions.

A couple of months ago, the physics blogosphere debated post-empirical science. This debate was ignited by Richard Dawid's book String Theory and the Scientific Method, and by an interview. His position seemed to be that, although there are no accessible means to test string theory, it still is science. Well, I did not write this post to defend string theory. I think it has, at this time, bigger problems than the absence of means to test what happens at the Planck scale. It predicts things that were not found, like supersymmetric particles, a non-positive cosmological constant, and huge particle masses, and it fails to reproduce the standard model of particle physics. Maybe these problems will be solved, but I am not interested in string theory here. I am just interested in post-empirical science. And while string theory may be a good example that post-empirical science is useful, I don't want to take advantage of the trouble this theory is in now.

The idea that science will continue to exist after we exhaust all experiments, which I am not sure fairly describes Richard Dawid's real position, was severely criticized, for example in Backreaction: Post-empirical science is an oxymoron. And the author of that article, Bee, is indeed serious about experiment. For example, she entertains a superdeterministic interpretation of quantum mechanics. I think this is fine, given that my own view can be seen as superdeterministic. In fact, if you want to reject faster-than-light communication, you have to accept superdeterminism, but this is another story. The point is that you can't make an experiment to distinguish between standard quantum mechanics and a superdeterministic interpretation, because that interpretation came from the same data as the standard one. Well, you can't in general, but for a particular type of superdeterministic theory, you can. So Bee has an experiment, which is relevant only if the superdeterministic theory is such that making a measurement A, then another one B, and then repeating A gives the same result for A both times, even if A and B are incompatible. Now, any quantum mechanics book which discusses sequences of spin measurements claims the opposite. So this is a strong prediction indeed. But how could we test superdeterminism if it is not like this? Why would Nature choose a superdeterministic mechanism behind quantum mechanics in this very special way, only to be testable? As if Nature tries to be nice to us, and gives us only puzzles that we can solve.
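To illustrate the textbook claim, here is a minimal sketch in Python of the standard quantum prediction (not of Bee's superdeterministic model): prepare a spin along $z$, measure along $x$, then along $z$ again; the second $z$ result repeats the first only half of the time.

```python
# Minimal sketch: sequential spin measurements A = sigma_z, B = sigma_x,
# then A again, in standard textbook quantum mechanics.
import numpy as np

up_z = np.array([1, 0], dtype=complex)               # eigenstate of sigma_z
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)  # eigenstate of sigma_x

def prob(state, outcome):
    """Born rule: probability that measuring `state` yields `outcome`."""
    return abs(np.vdot(outcome, state)) ** 2

# First A measurement gave "up" along z.
state = up_z

# Measure B (sigma_x): each outcome has probability 1/2; say we get "up".
p_B = prob(state, up_x)          # 0.5
state = up_x                     # the state collapses to the x eigenstate

# Repeat A (sigma_z): the original result is no longer certain.
p_A_repeats = prob(state, up_z)  # 0.5, not 1

print(p_B, p_A_repeats)          # 0.5 0.5
```

If the intermediate incompatible measurement left the repeated A result unchanged with certainty, as in the superdeterministic theory Bee's experiment targets, this last probability would be 1 instead of 1/2.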

2 comments:

Florin Moldoveanu said...

The blessing/curse of physics is that it has to describe reality.

In the absence of new experiments, there is a huge bunch of half-baked "model theories" predicting unicorns whose existence is well hidden by fine-tuning of model parameters.

Smolin was right, physics is in trouble. Deep structural trouble. When the last experiment has been performed, theoretical physicists will become the ineffectual Eloi, just like in H. G. Wells's "The Time Machine".

Cristi Stoica said...

Florin, I expect that the strong statements I made here will be rejected on the spot, but let me show you my arguments. To make what I said more specific to your own domain of expertise (and also to test the theory of science I sketched here), I have two questions:

1. When was the last relevant experiment in the foundations of quantum mechanics performed? I mean an experiment that tests new principles, not one that just verifies the same postulates of QM again and again; these were already verified many decades ago. In my humble opinion, ever since, we have just been closing very unlikely loopholes, making experiments more complex, increasing the distances between entangled particles, or testing quantum-engineering consequences. Even things like quantum teleportation are just consequences of QM as it has been known for almost 90 years.

2. In how many cases can a reconstruction of QM be tested? How would you be able, for example, to falsify the idea that QM is a consequence of invariance under tensor composition? This may work fine mathematically, but what experiment would distinguish your reconstruction from the others? If it can't be tested, would you consider that it is not science?

Earlier this year I attended a conference where the reconstruction of QM was very well represented. I remember that, whenever someone proposed principles which made predictions distinct from QM, for example different correlations, they tried to adjust them, apparently to match those of QM, which is implicitly considered to correspond to reality.

There are so many physicists; how many of them get the chance to perform an experimentum crucis?

My own view is that research in the foundations of QM not only continues, but the number of papers has increased dramatically. Yet I think that, for many decades now, the foundations of QM have been in a post-empirical stage, or at least on a break (from experiments). But I don't think this means that the foundations of QM are not science.