Science begins with evaluations of what might be true, given the existing evidence1 (a hypothesis)2,3. The implications of this idea are then compared to our existing knowledge, to see how well it fits4. Then, tests are devised to see if the new idea's predictions match experimental results. If a hypothesis cannot be tested, then it is not scientific5,6,7,8. If the hypothesis fails, then it is wrong7. If it passes, it becomes (or supports) a theory9. All science, no matter how durable, remains a theory until proven wrong2. In this way, science consists of a continual series of adjustments and improvements to theories as they are adapted to fit new evidence. Theories that cannot be adjusted are replaced by theories that fit the evidence better.
The "Scientific Method" is a set of steps taken to ensure that conclusions are reached sensibly, experiments are designed carefully, data are interpreted in accordance with the results of tests, and procedures can be verified independently. The system is designed to reduce human error and bias as much as possible10. Ideas and theories must be subject to criticism, and counter-evidence must be taken into account in order to produce new and more accurate theories11. Everything should be questioned. Most people cannot "do" science and do not have the skills to analyse data in an adequate manner12. The Scientific Method is hard and demanding, with high standards of ethical conduct expected - Daniel C. Dennett wrote that "good intentions and inspiration are simply not enough" (2007)13. The effects of science can impact all human development, changing entire societies14. Science has been responsible for a staggering increase in human knowledge, technology and capabilities over the last few centuries.15
Science begins with evaluations of what might be true, given the existing evidence1. These evaluations are called hypotheses1,3 (singular: hypothesis). The implications of a new idea (a hypothesis) are then compared to our existing knowledge, to see how well it fits - E. O. Wilson describes this as the theory's consilience with our existing body of knowledge4. If it disagrees with multiple theories, especially long-standing ones, then it is very likely to be wrong and will be avoided by most scientists.
Often, a new hypothesis comes about as a result of anomalies or oddities discovered inadvertently during experiments or observations.
“Science starts, not from large assumptions but from particular facts discovered by observation or experiment. From a number of such facts a general rule is arrived at, of which, if it is true, the facts in question are instances. This rule is not positively asserted, but is accepted, to begin with, as a working hypothesis. If it is correct, certain hitherto unobserved phenomena will take place in certain circumstances. If it is found that they do take place, that so far confirms the hypothesis; if they do not, the hypothesis must be discarded and a new one must be invented.”
Richard Feynman, a Nobel-prize winning scientist, said:
“In general, we look for a new law by the following process: First we guess it; then we compute the consequences of the guess to see what would be implied if this law that we guessed is right; then we compare the result of the computation to nature, with experiment or experience [observation of the world], compare it directly with observation, to see if it works. If it disagrees with experiment, it is wrong.”
A hypothesis becomes a theory when it has been tested experimentally without being falsified16 and it feeds into a larger logical framework. It can also gain support by making concrete predictions about the future, more accurately and more succinctly than other competing theories. If a hypothesis fails its tests, then it is often rescued by making modifications and adjustments, and is then subjected to new tests.
The astrophysicist John Gribbin tells the story of the Big Bang Theory to illustrate this:
“The weight of evidence tilted dramatically in favour of the Big Bang model in the mid-1960s, when the American astronomers Arno Penzias and Robert Wilson, testing a new radio telescope at Holmdel, New Jersey, discovered a weak hiss of radio noise, with a temperature just under 3 K, coming from all directions in space. They had no idea what it was, but it was quickly explained by theorists working at nearby Princeton University as leftover radiation from the Big Bang. And only then did everyone involved discover that this radiation had been predicted, two decades earlier, by Alpher and Herman. Nevertheless, in spite of the roundabout route, the story provides an almost perfect example of the scientific method at work. An idea, the Big Bang model, predicts a property of the Universe that has never been seen, and measurements then show that the Universe does have that property. So we can pinpoint 1965, the year the discovery of the background radiation was published, as the moment when the Big Bang model became elevated to the status of a theory – the best theory we have of how the Universe began.”
Theories and hypotheses must be disprovable. They must make it clear exactly what criteria would falsify them, and must therefore be testable5,17. Richard Dawkins defines all of science in terms of its testability: science is, he says, "defined as the set of practices which submit themselves to the ordeal of being tested"18.
The academic Karl Popper is often cited as the source of this requirement, and it has become one of the most well-known 'rules' of scientific methodology. Popper proclaimed the principle in Logik der Forschung in 1934, published in Vienna. He translated it into English as The Logic of Scientific Discovery in 1959, published in London. Professor Victor Stenger points out that Rudolf Carnap explored the same idea, in "Testability and Meaning" in Philosophy of Science (1936)19, so it appears that Popper is given undue credit by academics as the sole originator of the idea. However, the science historian Patricia Fara states that Popper first voiced his falsification criterion as long ago as 1919, after observing a lecture by Einstein20. Whatever the history, it is now a very well-established principle.
“Falsification [is] the demarcation criterion proposed [...] as a means for distinguishing legitimate scientific models from nonscientific conjectures. [...] While failure to pass a required test is sufficient to falsify a model, the passing of the test is not sufficient to verify the model. This is because we have no way of knowing a priori that other, competing models might be found someday that lead to the same empirical consequences as the one tested.”
"God, the Failed Hypothesis: How Science Shows That God Does Not Exist"
Prof. Victor J. Stenger (2007)6
“A hypothesis must be a proposed explanation that can be tested. The most straightforward approach to such testing in science is to perform an experiment. If the experiment is conducted properly, its results either will agree with the predictions of the hypothesis or they will contradict it. [...] The more experiments that agree with the hypothesis, the more likely we are to accept the hypothesis as a useful description of nature.”
Imagine a game of hangman, where a person must guess what word is being revealed but can only see some of the word's letters. With the evidence available, the person can guess a word - this is his hypothesis. The criterion by which he can be affirmed or proven wrong is the revealing of new evidence. If a letter is revealed that does not fit his theory, then the idea must instantly be discarded or adapted. So in science (where the world is almost infinitely complex), theories are much easier to deny than to ultimately confirm. To say that a theory is true you must wait until the very end of the game, until every letter is revealed. The only problem is that, as new facts are continually discovered, it is hard to be sure that future evidence won't suddenly falsify the theory; this is why some hold that all scientific models will always remain theories. To abandon this concept is to try to stop the flow of new discoveries!
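The hangman analogy can be sketched as a toy program (the word list and revealed letters here are hypothetical examples, not anything from the text): each piece of evidence either falsifies a candidate word outright, or leaves it standing without ever finally proving it.

```python
# Toy illustration of falsification via the hangman analogy.
# The candidate words and revealed letters are made-up examples.
candidates = ["theory", "thirty", "breeze", "throne"]  # competing hypotheses
revealed = {}  # position -> letter: the evidence gathered so far

def consistent(word, evidence):
    """A hypothesis survives only if it matches every revealed letter."""
    return all(len(word) > pos and word[pos] == letter
               for pos, letter in evidence.items())

# Evidence arrives one letter at a time.
for pos, letter in [(0, "t"), (1, "h"), (5, "y")]:
    revealed[pos] = letter
    # Any candidate contradicted by the new letter is discarded at once.
    candidates = [w for w in candidates if consistent(w, revealed)]
    print(revealed, "->", candidates)
```

Note that even after three letters, two candidates ("theory" and "thirty") still fit all the evidence: falsifying a hypothesis takes one contrary fact, but confirming one means waiting until every letter is revealed.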
There is a well-established rule for hypotheses that contradict existing theories: the more of our existing knowledge a hypothesis goes against, the more extraordinary it is.
“"Extraordinary claims require extraordinary evidence" was a phrase made popular by Carl Sagan [in the 1980s]. Its roots are much older, however, with the French mathematician Laplace stating that: "The weight of evidence for an extraordinary claim must be proportioned to its strangeness". Also, David Hume wrote in 1748: "A wise man ... proportions his belief to the evidence", and "No testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous than the fact which it endeavors to establish".”
Such radical hypotheses will need to be proven repeatedly, in front of various audiences, and will need to pass verification by independent groups, just as other hypotheses must. The difference is that in order to displace existing theories, a new idea must be provably better when tested. It is very rare that radically new and unexpected ideas manage to do this - in reality, most ideas that contradict the bulk of existing scientific knowledge are not only wrong, but misguided.
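Hume's and Laplace's idea of proportioning belief to evidence has a standard quantitative form in Bayes' theorem: a claim that starts with a very low prior probability barely moves on modest evidence, and only becomes credible when the evidence would be vastly more probable if the claim were true than if it were false. A minimal sketch, with illustrative made-up numbers:

```python
# Bayes' theorem: how much should evidence shift our belief in a claim?
# All numbers below are illustrative, not drawn from any real study.
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Probability the claim is true after seeing the evidence."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# An ordinary claim (prior 0.5) is made credible by modest evidence:
print(posterior(0.5, 0.9, 0.1))         # 0.9

# An extraordinary claim (prior one in a million) barely moves with
# the same modest evidence:
print(posterior(1e-6, 0.9, 0.1))        # ~9e-6: still negligible

# Only extraordinary evidence (a huge likelihood ratio) can rescue it:
print(posterior(1e-6, 0.999999, 1e-9))  # ~0.999
```

This is one standard way of reading "extraordinary claims require extraordinary evidence": the strength of the required evidence scales with the implausibility of the claim.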
Nonetheless, because science thrives on new ideas, even crazy ideas are subjected to critical analysis and testing; the body of academics that engages most readily with such ideas is the skeptics, who are willing to publicly debate even the craziest of them.
The building-block of science is the theory. New data results in new theories, and theories inspire experiments which are designed to test them... resulting in new data, which may then require new theories. This cyclic process propels science forwards. Any new theory must displace an old one, and each new theory therefore needs abundant evidence in its favour. No-one will abandon the standing theory without good reason.
“New theories are first of all necessary when we encounter new facts which cannot be "explained" by existing theories.”
The best thing about theories is that when new evidence comes to light, new theories arise to replace or modify the old ones.
You might notice that the theory is king: data without a supporting theory is all but useless. It can even be dangerous: If data leads a researcher to claim some radical new element of cause and effect, then, there has to be a valid underlying theoretical framework in addition to the data24. The lack of good theory can lead people far 'down the garden path', i.e., to false conclusions, and to have undue confidence in the data and their own interpretation of it.
A common criticism of the theories of evolution and of the big bang is that "they are only theories". However, many people misunderstand what the word "theory" means25. A scientific theory that explains the facts well is accepted; whereas one that doesn't is rejected. That something "is only a theory" does not affect whether it is accurate or not - a theory is not easy to dismiss25 unless it makes untrue predictions. The Theory of Gravity is "only a theory", as is the heliocentric theory of our solar system (i.e., that we all orbit the sun). A hypothesis supports a theory by passing particular tests, and a theory remains a theory until it comes to disagree with evidence. Then it is a failed theory.
“However many facts are found to fit the hypothesis, that does not make it certain, although in the end it may come to be thought of in a high degree probable; in that case, it is called a theory rather than a hypothesis.”
"Science is nothing without experimentation"7. But how do you test a hypothesis? Using the scientific method. The steps of the testing process, such as peer review and independent verification, are fundamental parts of how you take a serious and careful approach to truth. Every stage is designed to spot errors so that the original hypothesis can be altered.
When test results are negative and theories are undermined by evidence, they must be improved or replaced.