What is Science and the Scientific Method?

#epistemology #science #truth

Subjective opinions and feelings obscure our reason and our ability to describe the world accurately. When we attempt to deduce objective facts backed up by evidence, and proceed to test our theories rationally and adapt our ideas to fit new evidence, this process is called the scientific method (although it is more complicated than the description I have just given!). It is the best way of determining what is true because it eliminates personal opinion. Daniel C. Dennett wrote that "good intentions and inspiration are simply not enough" (2007)1 - the scientific method is hard and demanding, with high standards of ethical conduct expected, too.


1. Finding Out What is True

One key feature typifies the specialist: the scientific method that is used to objectify theories about the world. Ordinary people, up to and including government officials and ministers, are as a whole quite poor at analysing facts adequately - most people are poor at science2. Qualified scientists, and those educated in related epistemological fields such as philosophy, some social sciences and social psychology, are normally the experts in spotting errors in human thinking, and in combating those errors through the use of science. Theories, evidence and statistics combine to produce "objective" knowledge. Scientific "theories" are devices used to understand the world and predict outcomes. Theories and studies are submitted to journals for peer review and verified by other independent specialists. An important aspect of scientific theories is that they must give way in the future to more accurate theories. The nature of the scientific method lends itself to an acceptance of continual change in knowledge. Richard Dawkins points out that scientists frequently admit when their theories have been superseded or corrected by their peers; science is "defined as the set of practices which submit themselves to the ordeal of being tested"3. The scientific method is therefore inherently revolutionary.

Richard Gross opens his prominent book "Psychology: The Science of Mind and Behaviour" (1996) with some chapters on science, and offers the following as two major steps in scientific theorising:

  1. Theory Construction, "an attempt to explain observed phenomena".
  2. Hypothesis Testing, involving "making specific predictions about behaviour under certain specified conditions".

Theories must be capable of making testable predictions. Two competing theories can often both explain the behaviour of something after it has been observed, but this is the easy part. The hard part is seeing which theory most accurately predicts what will happen in future observations.
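To make the difference concrete, here is a minimal Python sketch (all numbers invented for illustration): two hypothetical "theories" are fitted to the same past observations, which both explain tolerably well, and only a prediction about a measurement not yet made can separate them.

```python
import numpy as np

# Hypothetical past observations (made-up data).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.2, 1.1, 4.2, 8.9])

# Theory A: linear law. Theory B: quadratic law.
# Both are fitted to, and roughly "explain", the same past data.
theory_a = np.polyfit(x, y, 1)
theory_b = np.polyfit(x, y, 2)

# The real test is a prediction about an observation not yet made.
x_future = 5.0
print("Theory A predicts y =", round(np.polyval(theory_a, x_future), 1))
print("Theory B predicts y =", round(np.polyval(theory_b, x_future), 1))
# A future measurement at x = 5 will favour whichever theory predicted it
# more accurately - explaining the past alone cannot separate them.
```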

One of my favourite definitions of science is that of E. O. Wilson:

Science, to put its warrant as concisely as possible, is the organized, systematic enterprise that gathers knowledge about the world and condenses the knowledge into testable laws and principles. The diagnostic features of science that distinguish it from pseudoscience are first, repeatability: The same phenomenon is sought again, preferably by independent investigation, and the interpretation given to it is confirmed or discarded by means of novel analysis and experimentation. Second, economy: Scientists attempt to abstract the information into the form that is both simplest and aesthetically most pleasing - the combination called elegance - while yielding the largest amount of information with the least amount of effort. Third, mensuration: If something can be properly measured, using universally accepted scales, generalizations about it are rendered unambiguous. Fourth, heuristics: The best science stimulates further discovery. Fifth and finally, consilience: The explanations of different phenomena most likely to survive are those that can be connected and proved consistent with one another.

"Consilience: The Unity of Knowledge" by E. O. Wilson (1998)4

2. The Scientific Method

2.1. New Theories and New Facts (Only a Theory?)

The building-block of science is the theory. New data results in new theories, and theories inspire experiments which are designed to test them... resulting in new data, which may then require new theories. This cyclic process propels science forwards. Any new theory must displace an old one, and therefore needs abundant evidence in its favour; no-one will abandon the standing theory without good reason.

New theories are first of all necessary when we encounter new facts which cannot be "explained" by existing theories.

"Ideas and Opinions" by Albert Einstein (1950)5

The best thing about theories is that when new evidence comes to light, new theories arise to replace or modify the old ones. Bertrand Russell states, "theories, if they are important, can generally be revived in a new form after being refuted as originally stated. Refutations [...] in most cases [are] only a prelude to further refinements"6. Some theories, however, are unsalvageable and are completely abandoned. This way, science continues to explain reality as accurately as it can. Theories that deny that new theories could ever replace them - such as those that religious conservatives propound - are deluded. In the search for truth, it is essential to dogmatically stick to the assumption that whatever you think you know could actually be wrong. In that sense, the only correct way to search for truth is to know that everything is a theory, and nothing is absolute fact. In this way, human error is most readily corrected.

Russell (1935) explains how science begins from initial observations and continually builds until major theories are brought to general acceptance through long periods of practical trial and error.

Science starts, not from large assumptions but from particular facts discovered by observation or experiment. From a number of such facts a general rule is arrived at, of which, if it is true, the facts in question are instances. This rule is not positively asserted, but is accepted, to begin with, as a working hypothesis. If it is correct, certain hitherto unobserved phenomena will take place in certain circumstances. If it is found that they do take place, that so far confirms the hypothesis; if they do not, the hypothesis must be discarded and a new one must be invented. However many facts are found to fit the hypothesis, that does not make it certain, although in the end it may come to be thought of as in a high degree probable; in that case, it is called a theory rather than a hypothesis.

"Religion and Science" by Bertrand Russell (1935)7

You might notice that the theory is king: data without a supporting theory is all but useless. It can even be dangerous: if data leads a researcher to claim some radical new element of cause and effect, then there has to be a valid underlying theoretical framework in addition to the data8. The lack of good theory can lead people far 'down the garden path', i.e., to false conclusions, and to undue confidence in the data and in their own interpretation of it.

Today the theory of evolution is about as much open to doubt as the theory that the earth goes round the sun.

"The Selfish Gene"
Prof. Richard Dawkins (1976)9

Only a Theory? A common criticism of the theories of evolution and of the big bang is that "they are only theories". However, such critics misunderstand what the word "theory" means. A scientific theory that explains the facts well is accepted, whereas one that doesn't is rejected. That something "is only a theory" does not affect whether it is accurate or not. Some example theories include the theory of gravity, and the theory that the Earth orbits the Sun. Clearly, the evidence is the important aspect of any theory!

2.2. Falsification: All Theories Must be Testable

Theories must be disprovable. A theory must make it clear exactly what criteria would falsify it, and therefore the theory must be testable10. The academic Karl Popper is often cited as the source of this requirement, and it has become one of the most well-known 'rules' of scientific methodology. Popper proclaimed the principle in Logik der Forschung in 1934, published in Vienna, and translated it into English as The Logic of Scientific Discovery in 1959, published in London. Professor Victor Stenger points out that Rudolf Carnap explored the same idea, in "Testability and Meaning" in Philosophy of Science (1936)11, therefore it appears that academics give Popper undue credit as the sole originator of the idea. However, the science historian Patricia Fara states that Popper first voiced his falsification criterion as long ago as 1919, after observing a lecture by Einstein12. Whatever the history, it is now a very well-established principle.

Falsification [is] the demarcation criterion proposed [...] as a means for distinguishing legitimate scientific models from nonscientific conjectures. [...] While failure to pass a required test is sufficient to falsify a model, the passing of the test is not sufficient to verify the model. This is because we have no way of knowing a priori that other, competing models might be found someday that lead to the same empirical consequences as the one tested.

Often in science, models that fail some empirical test are modified in ways that enable them to pass the test on a second or third try. While some philosophers claimed this shows that falsification does not happen in practice, the modified model can be regarded as a new model and the old version was still falsified. I saw many proposed models falsified during my forty-year research career in elementary particle physics and astrophysics; it does happen in practice.

"God, the Failed Hypothesis: How Science Shows That God Does Not Exist"
Prof. Victor J. Stenger (2007)13

Imagine a game of hangman, where a person must guess what word is being revealed but can only see some of the word's letters. With the evidence available, the person can guess a word - this is his theory. The criterion by which he can be affirmed or proven wrong is the revealing of new evidence. If a letter is revealed that does not fit his theory then the theory must instantly be discarded. So it is in science (where the world is almost infinitely complex): theories are much easier to deny than to ultimately confirm. To say that a theory is true you must wait until the very end of the game, until every letter is revealed. The only problem is, as new facts are continually discovered, it is hard to be sure that no future evidence will suddenly falsify the theory; this is why some hold that all scientific models will always remain theories. To abandon this concept is to try to stop the flow of new discoveries!
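The asymmetry between falsifying and confirming can be shown in a few lines of code. This is a toy sketch of the hangman analogy above (the words are invented): one mismatching letter destroys the theory at once, while confirming it requires every letter to be revealed.

```python
import random

# A toy model of the hangman analogy: a guessed word is a "theory", and each
# newly revealed letter is new evidence that can falsify it instantly.
secret = "falsify"   # the truth (unknown to the guesser)
theory = "factual"   # the guess: consistent with the evidence seen so far

positions = list(range(len(secret)))
random.shuffle(positions)          # evidence arrives in an unpredictable order

for pos in positions:
    evidence = secret[pos]         # a new letter is revealed
    if theory[pos] != evidence:
        print(f"Position {pos} is {evidence!r}: theory {theory!r} is falsified.")
        break
else:
    # Only once every letter is revealed could the theory be called confirmed.
    print(f"All letters revealed; {theory!r} survived every test.")
```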

2.3. Peer Review14

Peer-review is an important part of the scientific method. It is naive to believe that scientists act without passion, subconscious bias or social influences when they conduct studies - and this shortcoming is admitted first and foremost by scientists themselves15. So scientific publications are sent to a number of recognized experts in appropriate fields for review. The publishing journal will wait for the results and feedback from those experts, and decide whether it wants to publish the paper or not. Some papers get published straight away. Others might be sent back to their authors with the scientific concerns of the experts put forth, and the journal will wait for an edited version to be resubmitted before (probably) sending it off for peer review again. Some studies are found to be "fatally flawed" and so never get published.

Likewise, some papers are withdrawn from the original publication even years after the journal was printed, such as where a study is later found to be flawed, completely erroneous, or fraudulent. Sometimes things such as undisclosed funding can cause an article to be withdrawn, as when a scientist is secretly paid by an industry body to produce favourable "science" to support the industry in question. Such tactics have been employed by the oil, tobacco, drinks and alternative-therapy industries, and by lobby groups for other industries that respond to criticism from governments and scientists by attempting to "buy" scientific credence for their activities. Having a paper withdrawn from a previous publication is a serious indication that something was wrong with the study.

Peer-review looks at the methods used in the research, the strength of the statistical analysis, whether proper care is taken with the wording of the conclusion, whether the abstract does indeed reflect what the data show, etc. The idea is not to publish misleading or faulty papers. This quality control boosts both the quality of the publication itself (hence making the journal more trusted) and aids science in general: to publish in a scientific journal, you have to display the correct care and attention to detail, and avoid the many pitfalls of bad science and poor methodology.

The main strength of this approach is that the scientific methods involved, and the conclusions, are scrutinized by experts who do not have a vested interest in the quality of the study. It is well known that those who conduct studies often believe their own stated conclusions and are biased towards seeing their own work in a positive light16, and peer-review is the process that allows critical evaluation of the work from others' points of view. As in all human endeavours, a second set of eyes will often reveal problems that the original author would never spot.

2.4. Reproducibility and Independent Verification of Results

Reproducibility and independent verification are integral parts of the scientific method17. Whether the research is in physics, chemistry or psychology, the results of any experiment must be reproduced independently in another location. This checks that the results were not the product of unintentional but consistent human bias in the original experiments. There have been plenty of cases where a scientist declares results and describes his experiment in a scientific journal, but other researchers fail to reproduce the results in their own experiments. If results cannot be duplicated then the data is not accepted as valid. This is why newspaper reports on single experiments should be heeded with care: any experimenter can claim results, but if others around the world cannot verify the procedure then the chances are the experiment was flawed. Results should only be acclaimed once they have been verified, and this is why public announcements are sometimes not made for some time, especially with highly technical or long-term experiments. Always think to check who did the original experiments, and who verified the methods.
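To illustrate what independent verification guards against, here is a minimal simulation (effect size, noise level and number of "labs" are all invented): several independent laboratories rerun the same experiment, and a genuine effect should reappear, within noise, in each of them.

```python
import random
import statistics

# Illustrative only: five independent "labs" rerun the same experiment.
def run_experiment(true_effect, n=100, seed=None):
    rng = random.Random(seed)
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

estimates = [run_experiment(true_effect=0.5, seed=lab) for lab in range(5)]
print("Estimated effect per lab:", [round(e, 2) for e in estimates])

# If the labs' estimates cluster around a common value, the result replicates;
# if only the original lab sees the effect, suspect local bias or error.
```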

Science requires that a phenomenon be reliably produced in different laboratories for it to be accepted as genuine. Whoever claims to have discovered a phenomenon must describe in sufficient detail how it was produced so that other investigators, following similar steps, can reproduce it themselves. This requirement of replicability applies to all fields of science. [...]

Although the history of science contains numerous examples of an investigator's expectations clouding his or her vision and judgement, the most serious of these abuses are overcome by the discipline's insistence on replicability and the public presentation of results. Findings that rest on a shaky foundation tend not to survive in the intellectual marketplace. [...] The biggest difference between the world of science and everyday life in protecting against erroneous beliefs is that scientists utilize a set of formal procedures to guard against [...] sources of bias and error.

"How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life" by Thomas Gilovich (1991)18

2.5. Occam's Razor: Simplicity & Fewer Assumptions are Better

#atheism #theism

The aim of science is, on the one hand, a comprehension, as complete as possible, of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim by the use of a minimum of primary concepts and relations.

Albert Einstein (1936)19

A hypothesis carries assumptions which must then be backed up by evidence if the idea is to take ground. Clearly, the fewer such assumptions there are, the better. In general this has led to a principle in science that the theory with the fewest assumptions and fewest complicated side-effects is probably a better theory than the others. This is commonly called 'Occam's Razor':

Occam is best known for a maxim which is not to be found in his works, but has acquired the name of 'Occam's razor'. This maxim says: 'Entities are not to be multiplied without necessity.' Although he did not say this, he said something which has much the same effect, namely: 'It is vain to do with more what can be done with fewer'. That is to say, if everything in some science can be interpreted without assuming this or that hypothetical entity, there is no ground for assuming it. I have myself found this a most fruitful principle in logical analysis.

"History of Western Philosophy" by Bertrand Russell (1946)20

In philosophical arguments, it is frequently used to mean that if a particular belief or idea leads to the requirement for a massive amount of special explanation, other odd conclusions, and outstanding complexity, then such a belief is probably wrong.

For example, in the theological debate between atheists and theists, both attempt to account for the existence of the universe using similar ideas. Atheists believe that the universe is self-contained and had no preceding 'cause'. Theists believe that the universe was created by God, and that God is self-contained and has no 'cause'. Both theories contain a similar uncaused element, but the theistic theory contains an additional assumption that the uncaused cause is a god. By employing Occam's razor, many would guess that the simpler, atheistic, theory is more likely to be correct because it contains fewer unanswered questions (assumptions) than the theist one.
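This parsimony principle has a quantified cousin in statistics. The sketch below uses the standard Bayesian Information Criterion on invented data (a hedged illustration, not anything specific to this essay's argument): models are scored so that every extra free parameter - every extra assumption - must pay for itself with a genuinely better fit.

```python
import numpy as np

# The Bayesian Information Criterion rewards goodness of fit but
# penalises every extra free parameter a model assumes.
def bic(y, y_pred, n_params):
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 30)   # data that is truly linear

for degree in (1, 4, 8):                        # more degrees = more assumptions
    coeffs = np.polyfit(x, y, degree)
    score = bic(y, np.polyval(coeffs, x), degree + 1)
    print(f"degree {degree}: BIC = {score:6.1f}  (lower is better)")
# The over-complicated polynomials fit the noise slightly better, but the
# parameter penalty means the simplest adequate model usually wins.
```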

2.6. Double Blinded Trials21

When it comes to testing theories that involve humans, all kinds of psychological factors come into play. These can change the results of experiments, and all good designs will try to minimize indirect effects, so that the primary theory alone is being tested. One such mitigation is blinding - that is, not letting the subjects in a test know which group they are in (for example, whether they are in the control group, whose main task is just to carry on as normal even though they may think they are being subjected to a procedure). To achieve this, subjects should be randomly placed into the different groups in the experiment, so that the researcher's subconscious biases do not result in the control group being filled with people according to subtle judgements about their personality, etc. "Double-blinded" trials are those in which neither the researchers nor the subjects know who is receiving which treatment, so it is impossible for psychological effects to bias the selection process as the experiment proceeds.
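A rough sketch of how such an allocation might be coded (hypothetical subject IDs and group sizes; real trials add stratification and a genuinely independent code-holder):

```python
import random

# Minimal sketch of randomised, double-blind allocation. Subjects are
# shuffled into groups at random, and the group labels are sealed away so
# that neither subjects nor researchers see them while the trial runs.
subjects = [f"subject-{i:02d}" for i in range(1, 21)]
random.shuffle(subjects)                 # randomisation: no human picks groups

half = len(subjects) // 2
sealed_code_list = {s: "treatment" for s in subjects[:half]}
sealed_code_list.update({s: "placebo-control" for s in subjects[half:]})

# During the trial, everyone works only with anonymous IDs; doses for both
# groups look identical, so the blinding holds for subjects and staff alike.
blinded_ids = sorted(sealed_code_list)
print("IDs visible during the trial:", blinded_ids[:3], "...")

# Only after all measurements are recorded is the code list "unblinded":
print("Unblinded example:", {k: sealed_code_list[k] for k in blinded_ids[:2]})
```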

Randomisation is not a new idea. It was first proposed in the seventeenth century by John Baptista van Helmont, a Belgian radical who challenged the academics of his day to test their treatments like blood-letting and purging. [...] Does randomisation matter? As with blinding, people have studied the effect of randomisation in huge reviews of large numbers of trials, and found that the ones with dodgy methods of randomisation overestimate treatment effects by 41 per cent. [...] A review of blinding in all kinds of trials of medical drugs [in particular,] found that trials with inadequate blinding exaggerated the benefits of the treatments being studied by 17 per cent.

"Bad Science" by Ben Goldacre (2008)22

Ben Goldacre studied the evidence surrounding acupuncture and found that properly controlled studies show no advantage of acupuncture above the placebo effect, but poorly controlled studies, where psychological factors are not properly accounted for, end up showing that acupuncture is effective22. Poor experimental design can have real effects - and in this day and age, where most news outlets don't have scientifically qualified staff who examine trials before they are reported on, this means that large numbers of people can be easily duped into trusting something that ought to be debunked.

3. The Scientific Method Counteracts Human Error

We human beings, even while forming theories and conducting experiments, have only a limited capacity to think clearly and accurately. Our brains have evolved many methods of discerning what is true that are shortcuts rather than logical analysis. As such, sometimes even trains of thought that seem clear turn out to result from mild self-delusion. It is not just apophenia (recognizing patterns that aren't really there) that can lead theorists astray; many traits of human nature can subvert the search for truth. Apart from cognitive errors in general thought, all humans are also subject to subjective desires, such as the desire to find supporting evidence for one's own ideas.

Science is a human endeavor and, as such, is prone to bias, error, emotion, greed, hubris, and manipulation. All of these are human traits, and thus each researcher may be guilty of any of these failings. To be sure, these failings are present in every human activity [...]. Because science is largely self-correcting, these failings are usually rectified [...], science weeds out human errors over time in a way that no other major self-regulating institution does.

David Koepsell in Skeptical Inquirer (2006)23

Pattern recognition is the basis of all aesthetic enjoyment, whether it is music, poetry or physics. As we become more sophisticated in what we do, we learn to recognize ever more subtle patterns. Unfortunately, the brain that makes the link between the tides and the phases of the moon may also connect a comet to victory in battle. [...] There's a thin line between recognizing subtle patterns and apophenia, the experience of seeing patterns where none exist. [...] The scientific process, in short, takes account of cracks, shortcomings and changes.

New Scientist (2006)24

There are sound reasons for preferring the data from randomized, double-blind, controlled experiments to the data provided by anecdotes. Even well-educated, highly trained experts are subject to many perceptual, affective, and cognitive biases that lead us into error when evaluating personal experiences. [...] All things being equal, the more impersonal and detached we are in evaluating potential causal events, the less likely error becomes.

"Unnatural Acts: Critical Thinking, Skepticism, and Science Exposed!"
Robert Todd Carroll (2011)25

I have written at length on the psychological causes of mistaken beliefs and conclusions.

"Errors in Thinking: Cognitive Errors, Wishful Thinking and Sacred Truths" by Vexen Crabtree (2008)

Thankfully, the scientific method is all about eliminating human errors. Most of us like to learn from the things that other people tell us (anecdotes), but this isn't good enough for science, "even if the anecdotes number in the millions and even if the storytellers are Nobel Prize-winning anointed saints"26. Ideas have to be tested statistically, and in a way where systematic human error can be controlled for and eliminated. The procedures of peer review and independent verification both ensure that mistakes in theory, application or analysis of data are spotted when other scientists examine and repeat the experiment. Competing theories have proponents who actively do not want new information threatening their own theory, so they will eagerly seek out weak spots in new experiments. Many scientific publications report on the failures of data and experiments, and in the face of this, science self-regulates in a very efficient and detailed manner. "The essence of science is that it is self-correcting", says the eminent scientist Carl Sagan; "new experimental results and novel ideas are continually resolving old mysteries" (1995)27.

Sometimes, then, individual experiments are found to be faulty. Sometimes the conclusions of the scientist are questioned even if the data is good. And sometimes, rarest of all, entire scientific paradigms are questioned, such as when Newtonian physics gave way to Einstein's relativity. New evidence can cause entire theoretical frameworks to be undermined, resulting in a scientific revolution in understanding.

4. Science is Revolutionary

As any lover of science will tell you, it's precisely this dynamism that keeps science exciting.

"New York Public Library Science Desk Reference"
Patricia Barnes-Svarney (1995)28

As a scientist, I am hostile to fundamentalist religion because it [...] teaches us not to change our minds

"The God Delusion"
Prof. Richard Dawkins (2006)29

Still perhaps it may appear better, nay to be our duty where the safety of the truth is concerned, to upset if need be even our own theories, specially as we are lovers of wisdom: for since both are dear to us, we are bound to prefer the truth.

"Ethics" by Aristotle (350BCE)30

Prof. Clive Bloom notes that intellectual activity is revolutionary because knowledge can impact an entire society and change it. "Whilst the intellectual may work in a specific field, it is in the nature of his or her work to have implications for the whole field of knowledge. It is this implication which is revolutionary"31.

Science is inherently revolutionary because theories remain theories forever. Theories are always open to doubt, to further testing and to refinement. Some theories are completely abandoned after new evidence comes to light or after new theories prove themselves to be better. As such, science is continually fresh and exciting. It is a continual intellectual search for truth, inspiration and occasional revelation. Theories tend to emerge together and tie in with each other, so that often, after a certain amount of progression, a group of theories will become outdated and a new group will replace them. "Science involves an endless succession of long, peaceful periods [... and then periods of] scientific revolution" (Kuhn 1962).

Knowledge is fickle, frequently temporary and unstable: new facts can threaten so much of our worldview that it is not surprising that the general populace often struggles to accept new scientific theories. Non-scientific minds often expect science to offer definite answers and absolute truth. However, whenever a truth is declared absolute and beyond question, the search for truth becomes stagnant and eventually ludicrous, as new data continues to come to light and cannot be explained or understood. The revolutionary nature of science ensures that we do not become complacent with what we think we know. The scientific method is the best way to search for truth; anything else becomes illusion and subjectivity: personal opinion rather than studied fact.

One must be prepared to overthrow an entire theoretical framework - and this has happened often in the history of science - but there has to be strong countervailing evidence that requires it. It is clear that skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence.

Paul Kurtz in Skeptical Inquirer (2006)32

5. Shortcomings

5.1. Subjectivism

John Gribbin (1995) bemoans the problem of subjectivism. He, like many other scientists, is aware that our Human brains cannot deal with abstraction beyond a certain point and that we have to use metaphors and symbols to understand the complex maths behind quantum physics. It is this that led the groundbreaking physicist Richard Feynman to state that "no one understands quantum mechanics". Likewise, Stephen Hawking wrote in his book "A Brief History of Time" that the scientific search for a quantum theory of gravity would come to "revolutionize our understanding of space and time". These two prominent scientists are acknowledging that our understanding is culture-limited and paradigm-limited, but that sometimes mental barriers are shattered by new theories.

But to even the balance we must first admit that all thought is subject to personal analysis. And it must be remembered that the whole scientific method serves, more than any other way of searching for truth, to remove personal bias and subjectivity. In that sense, science has proved itself to be by far the superior method of gathering facts about the world. No other system of thought has produced technology, accurate communications, accurate measurements and accurate predictions. By all these criteria, the scientific method has shown itself to be the best way to counteract the shortcomings of Human thinking.

5.2. Information Overload

An over-abundance of studies and data across the board can sometimes result in information overload. The plethora of scientific investigative techniques can also lead to the results of scientific endeavours being lost in the flood of data. Contradictory information can nearly always be found, even for simple experiments. The answer is to return to basics: checking sources, thinking critically and being methodical.

Reviewing the situation regarding the derivation of dates for the Dead Sea Scrolls from both carbon-14 tests and palaeography, my overall impression is that scholars tend to play up or down the various difficulties arising from the dating methods in order to confirm their own hypotheses about the scrolls and who wrote or copied them. As one scholar has said, those dates which confirm one's theories are emphasized, those that diverge a little are relegated to footnotes, and those which are totally contradicted are discarded!

"Dead Sea Scrolls" by Stephen Hodge (2001) [Book Review]35

As above, the over-abundance of testing methods can result in a kind of accidental research-selection blindness, in which a researcher stumbles across various articles and respects (and therefore uses) the ones that he agrees with, subconsciously, because they back up his present theory or predictions. Studies that diverge or disagree are left by the wayside, not because the scientist is consciously cheating, but because, from the masses of data, he considers those that agree to be more worthwhile. Luckily, the study of such Human failings is within the bounds of sociology and is taught to scientists, but as Stephen Hodge points out above, problems still occur.

5.3. Pet Theories36

Scientists are still people, and theories can become dear projects that scientists are reluctant to give up on - Coolican states that scientists get "energised by their theory and will bitterly defend it, getting quite passionate when challenged and working overtime to try to undermine any criticism"37. This results in a lot of heated debate. There are famous instances of scientists arguing tooth-and-nail for a beloved theory and only giving it up after a long and protracted defense. But sometimes scientists cling to a theory well beyond its sell-by date. This psychological attachment to the product of one's work is hardly surprising - it was noted by the world-famous thinker Aristotle nearly 2,500 years ago, for example:

Once more: all people value most what has cost them much labour in the production; for instance, people who have themselves made their money are fonder of it than those who have inherited it: and receiving kindness is, it seems, unlaborious, but doing it is laborious. And this is the reason why the female parents are most fond of their offspring; for their part in producing them is attended with most labour, and they know more certainly that they are theirs. This feeling would seem also to belong to benefactors.

"Ethics" by Aristotle (350BCE)38

Luckily this fierce resistance often works in science's favour. The best science and the most rigorous research is done between competing theorists, and the result is often an area of science that is richly investigated, with no stone left unturned. For example, fundamentalist religionists argued fiercely against the idea of the evolution of the eye, and in the ensuing arguments scientists amassed what is now one of the best-documented accounts in evolutionary biology: how the eye gradually evolved from mere light-sensitive cells to the highly complex organ we know today.

Although science eventually moves on, driven by evidence, individual believers are often left behind. This is why any initial research, initial experimental results or new theories are best left "to settle" and to face the trial of peer review before they are trusted. Each new theory steps on the toes of believers in previous theories, and in the ensuing battle the best critical minds will reveal which side of the argument is the most logical and best evidenced.

I quoted Coolican in the first paragraph of this section talking about scientists getting heated and working long hours to undermine criticism of their theory. What type of scientist did Coolican say he was talking about? The better ones. Whatever the personal involvement of scientists, do not forget that these battles are not private affairs, but public ones. Independent verification and scientific review take place around both sides of the argument, and the final result is, in accordance with the scientific method, the arrival at some objectivity.

6. A History of Science

6.1. Ionia, 6th century BCE

2,500 years ago, there was a glorious awakening in Ionia: on Samos and the other nearby Greek colonies that grew up among the islands and inlets of the busy eastern Aegean Sea. Suddenly there were people who believed that everything was made of atoms; that human beings and other animals had sprung from simpler forms; that diseases were not caused by demons or the gods; that the Earth was only a planet going around the Sun. And that the stars were very far away. [...]

In the 6th century B.C., in Ionia, a new concept developed, one of the great ideas of the human species. The universe is knowable, the ancient Ionians argued, because it exhibits an internal order: there are regularities in Nature that permit its secrets to be uncovered. [...] This ordered and admirable character of the universe was called Cosmos. [...]

Between 600 and 400 B.C., this great revolution in human thought began. [...] The leading figures in this revolution were men with Greek names, largely unfamiliar to us today, the truest pioneers in the development of our civilization and our humanity.

"Cosmos" by Carl Sagan (1995)39

The city of Alexandria was the greatest in the ancient world. Its famous Library of Alexandria was constructed in the third century BCE by the Greek kings, the Ptolemies. It became a scientific research centre and publishing capital of the world. Ionians forged ahead in many arenas of knowledge. "Eratosthenes accurately calculated the size of the Earth [...]. Hipparchus anticipated that the stars come into being, slowly move during the course of centuries, and eventually perish; it was he who first catalogued the positions and magnitudes of the stars to detect such changes. Euclid produced a textbook on geometry from which humans learned for twenty-three centuries"40. Such astounding wisdom, backed up by studious thinking and experimentation, could have launched the world into the modern era. But it didn't.

Rising superstition, the taking of slaves and the growth of monotheistic religion led to the demise of scientific enterprise. The culture changed. The last great scientist of Alexandria, Hypatia, was born in 370CE at a time when the "growing Christian Church was consolidating its power and attempting to eradicate pagan influence and culture". Cyril, the Archbishop of Alexandria, considered Hypatia to be a symbol of the learning and science which he considered to be pagan. "In the year 415, on her way to work she was set upon by a fanatical mob of Cyril's parishioners. They dragged her from her chariot, tore off her clothes, and, armed with abalone shells, flayed her flesh from her bones. Her remains were burned, her works obliterated, her name forgotten. Cyril was made a saint"40.

The last remains of the Alexandrian Library were destroyed not long after Hypatia's death; nearly all of its books and documents were lost. The Western Dark Ages had begun, and such knowledge and science was forgotten in the West for over a thousand years.

6.2. The Rise of Science From the 17th Century

During the Middle Ages, the West had again begun to contribute to the science and learning of the world. In the interim, the Arabic lands to the East had thankfully translated Greek works and carried the torch of knowledge. Philosopher-scientists emerged from the West and East, and debated the finer points of epistemology. As the centuries went on, thought became freer, and as material life improved, the seventeenth century saw the dawning of a new age of human thought: modern scientific methods were back on the menu after a hiatus of nearly two thousand years.

Almost everything that distinguishes the modern world from earlier centuries is attributable to science, which achieved its most spectacular triumphs in the seventeenth century. The Italian Renaissance, though not medieval, is not modern; it is more akin to the best age of Greece. [...] The modern world, so far as mental outlook is concerned, begins in the seventeenth century. No Italian of the Renaissance would have been unintelligible to Plato or Aristotle; Luther would have horrified Thomas Aquinas, but would not have been difficult for him to understand. With the seventeenth century it is different: Plato and Aristotle, Aquinas and Occam, could not have made head nor tail of Newton. [...]

Four great men - Copernicus [1473-1543], Kepler, Galileo, and Newton - are pre-eminent in the creation of science. Of these, Copernicus belongs to the sixteenth century, but in his own time he had little influence.

"History of Western Philosophy" by Bertrand Russell (1946)41

7. The Battles Between Science and Religion

Further reading on Science and Religion:

8. Open Access to Research

Secrecy can impede the progress of science, and openness is a hallmark of good science.

Prof. A. Scott
In Skeptical Inquirer (2007)42

Open Access speeds up the worldwide application of scientific research and allows theories and results to be tested, checked and analysed by scientists across the world, leading to more reliable science, data and technology for everyone. As much science is funded by government, the general populace should have free access to its results.

8.1. Publishing Charges

#Australia #Belgium #Canada #Denmark #Finland #Germany #Hungary #Netherlands #Portugal #Sweden #UK #USA

A mass of valuable research comes from university researchers who are funded by national governments, costing hundreds of millions of dollars each year to support. Their results are published in peer-reviewed journals that have to be paid for; the publications are then bought by the universities where the research is done. This is highly inefficient, and the public end up paying twice to read the results: once when they pay the taxes that support the research, and again when they pay for the publications.

Prof. Michael Geist holds the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, and says “The model certainly proved lucrative for large publishers [but] the emergence of the internet dramatically changes the equation. Researchers are increasingly choosing to publish in freely available, open access journals posted on the internet, rather than in conventional, subscription-based publications”44.

Sweden leads the world in open access to research archives. A Swedish project called the "Directory of Open Access Journals" links to scientific open access journals. It now lists more than 2,500 journals worldwide, including over 127,000 articles.44

Aided by the Open Journal System, a Canadian open source software project based at Simon Fraser University in British Columbia, more than 800 journals, many in the developing world, currently use the freely available OJS to bring their publications to the internet.

For those researchers committed to traditional publication, open access principles mandate that they self-archive their work by depositing an electronic copy in freely available institutional repositories shortly after publication. This approach grants the public full access to the work, while retaining the current peer-reviewed conventional publication model.

While today this self-archiving approach is typically optional, a growing number of funding agencies are moving toward a mandatory requirement. These include the National Institutes of Health in the US, the Wellcome Trust in the United Kingdom, and the Australian Research Council. Moreover, some countries are considering legislatively mandating open access.

Prof. Michael Geist (2007)44

8.2. An EU-wide Open Access Principle

Last month five leading European research institutions launched a petition that called on the European Commission to establish a new policy that would require all government-funded research to be made available to the public shortly after publication. That requirement - called an open access principle - would leverage widespread internet connectivity with low-cost electronic publication to create a freely available virtual scientific library available to the entire globe.

Despite scant media attention, word of the petition spread quickly throughout the scientific and research communities.

Within weeks, it garnered more than 20,000 signatures, including several Nobel Prize winners and 750 education, research, and cultural organisations from around the world.

In response, the European Commission committed more than $100m (£51m) towards facilitating greater open access through support for open access journals and for the building of the infrastructure needed to house institutional repositories that can store the millions of academic articles written each year.

Prof. Michael Geist (2007)44

It seems right that such a repository of publicly-funded research should be made available for free to the public that paid for it.

9. Conclusion

All Human thought is subjective and fallible. Cultural norms and assumptions disrupt serious attempts to search for truth. The scientific method minimizes Human error. Any system of thought that proclaims itself to be "ultimate" or beyond correction is dogmatic and wrong: the best theories are the ones that happily give way to better theories. The worst are the ones that do not budge and refuse to admit new evidence that disputes them; they become stagnant and outdated. Science is revolutionary because it accepts new facts, new evidence and new thought. New knowledge acquired from the scientific method frequently changes society with technology and ideas. It is refreshing and challenging, and above all the scientific method continues to dynamically improve its description of the world.

By Vexen Crabtree 2014 Mar 12
(Last Modified: 2016 Mar 23)
Originally published 2006 May 19
http://www.humantruth.info/science.html
Parent page: Science and Truth Versus Mass Confusion

References:

New Scientist. UK-based weekly science newspaper (not subject to scientific peer-review, though). Published by Reed Business Information Ltd, London, UK.

The Guardian. Respectable and generally well researched UK broadsheet newspaper. See Which are the Best and Worst Newspapers in the UK?.

Skeptical Inquirer. Pro-science magazine published bimonthly by the Committee for Skeptical Inquiry, New York, USA.

Aristotle. (384-322BCE)
(350BCE) Ethics. Amazon's Kindle digital edition. Originally published around 340BCE. Public Domain.

Barnes-Svarney, Patricia
(1995, Ed.) New York Public Library Science Desk Reference. Published by The Stonesong Press Inc. and The New York Public Library, New York, USA.

Bloom, Clive
(2001) Literature, Politics and Intellectual Crises in Britain Today. Published by Palgrave.

Carroll, Robert Todd. (1945-2016). Taught philosophy at Sacramento City College from 1977 until retirement in 2007. Created The Skeptic's Dictionary in 1994.
(2011) Unnatural Acts: Critical Thinking, Skepticism, and Science Exposed!. Kindle edition. Published by the James Randi Educational Foundation.

Coolican, Hugh
(2004) Research Methods and Statistics in Psychology. Fourth edition. Published by Hodder Headline, London, UK.

Crabtree, Vexen
(2006) "Christianity v. Astronomy: The Earth Orbits the Sun!" (2006). Accessed 2016 Dec 24.
(2008) "Errors in Thinking: Cognitive Errors, Wishful Thinking and Sacred Truths" (2008). Accessed 2016 Dec 24.

Dawkins, Prof. Richard
(1976) The Selfish Gene. 30th Anniversary 2006 edition, published by the Oxford University Press, UK.
(2006) The God Delusion. Hardback. Published by Bantam Press, Transworld Publishers, Uxbridge Road, London, UK.

Einstein, Albert. (1879-1955)
(1954) Ideas and Opinions. Published in 1954 by Crown Publishers, New York, USA and in 1982 by Three Rivers Press. A collection of Einstein's writings and texts.

Fara, Patricia
(2009) Science: A Four Thousand Year History. Hardback. Fara has a PhD in History of Science from London University. Published by Oxford University Press.

Gilovich, Thomas
(1991) How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. 1993 paperback edition published by The Free Press, NY, USA.

Goldacre, Ben. MD.
(2008) Bad Science. Published by Fourth Estate, an imprint of HarperCollins Publishers, London, UK.

Gross, Richard
(1996) Psychology: The Science of Mind and Behaviour. 3rd edition. Published by Hodder & Stoughton, London UK.

Hawking, Stephen
A Brief History of Time.

Hodge, Stephen
(2001) Dead Sea Scrolls. Paperback first edition published by Piatkus books, London UK. [Book Review]

Kuhn, T.S.
"The Structure of Scientific Revolutions" (1962). University of Chicago Press, Chicago, USA. Via Gross (1996).

Popper, K.R.
(1959) The Logic of Scientific Discovery. Published by Hutchinson, London, UK.

Russell, Bertrand. (1872-1970)
(1935) Religion and Science. 1997 edition with introduction by Michael Ruse. Published by Oxford University Press, Oxford, UK.
(1946) History of Western Philosophy. Quotes from 2000 edition published by Routledge, London, UK.

Sagan, Carl
(1995) Cosmos. Originally published 1981 by McDonald & Co. This edition published by Abacus.

Stenger, Prof. Victor J.
(2007) God, the Failed Hypothesis: How Science Shows That God Does Not Exist. Published by Prometheus Books. Stenger is a particle physicist, and a skeptical philosopher whose research is strictly rational and evidence-based.

Wilson, E. O.
(1998) Consilience: The Unity of Knowledge. Hardback. Published by Little, Brown and Company, London, UK. Professor Wilson is one of the foremost sociobiologists.

Footnotes

  1. Dennett, Daniel C. from an essay that will be published 'later this year' in "Philosophers Without Gods", preview in Skeptical Inquirer (2007 Mar/Apr) p44.^
  2. Added to this page on 2014 Sep 10. Ben Goldacre's Bad Evidence, BBC Radio 4 programme aired on 2013 Jan 01 at 2000hrs.^
  3. Dawkins (2004) p210.^
  4. Wilson (1998) p57. Added to this page on 2010 Jul 11.^
  5. Einstein (1950) Scientific American Vol. 182, No. (1950 Apr 04). In Einstein (1954) p342.^
  6. Russell (1946) p69.^
  7. Russell (1935) p13-14.^
  8. Coolican (2004) p333. Coolican adds that 'Researchers mostly have a background of theoretical argument and previous research findings that leads them to a reasonable argument for the effect they are expecting'. Added to this page on 2014 Mar 10.^
  9. Dawkins (1976) p1.^
  10. Coolican (2004) p15. Added to this page on 2014 Mar 10.^
  11. Stenger (2007) p26, cites Philosophy of Science B 3 (1936): 19-21; B 4 (1937): 1-40.^
  12. Fara (2009) p301.^
  13. Stenger (2007) p26.^
  14. Added to this page on 2014 Mar 11.^
  15. Coolican (2004) p16. Author cites Mitroff, I.I. (1974) Studying the lunar rock scientist. Saturday Review World, 2 November, 64-5. Added to this page on 2015 Nov 20.^
  16. Coolican (2004) p16. Added to this page on 2015 Nov 20.^^
  17. Coolican (2004) p17, 333. Added to this page on 2014 Mar 09.^
  18. Gilovich (1991) p168, 56-57.^
  19. Einstein (1936). From the Journal of the Franklin Institute, Vol. 221, No. 3, March 1936. Via Einstein (1954) p293.^
  20. Russell (1946) p462-463.^
  21. Added to this page on 2015 Sep 06.^
  22. Goldacre (2008) digital location 725-730,748, 773. Added to this page on 2015 Sep 06.^
  23. D. Koepsell in Skeptical Inquirer (2006 Sep/Oct) Vol 30:Issue 5. David Koepsell, a philosopher and lawyer, is the executive director of the Council for Secular Humanism, an associate editor of Free Inquiry, and an adjunct associate professor in the Department of Philosophy at the University of Buffalo, USA.^
  24. Bob Park, professor of physics at the University of Maryland, writing in New Scientist (2006 Dec 09) p48-49. Added to this page on 2007 Jan 17.^
  25. Carroll (2011) p123. Added to this page on 2014 Jan 11.^
  26. Carroll (2011) p131. Added to this page on 2014 Jan 11.^
  27. Sagan (1995) p16.^
  28. Barnes-Svarney (1995) p. xix.^
  29. Dawkins (2006) p284.^
  30. Aristotle (350BCE) Digital location 412-13. Added to this page on 2013 Mar 04.^
  31. Bloom (2001) p165.^
  32. Skeptical Inquirer (2006 Sep/Oct) vol 30:issue 5.^
  33. Gross (1996) p25.^
  34. Goldacre (2008).^
  35. Hodge (2001) p78.^
  36. Added to this page on 2013 Mar 04.^
  37. Coolican (2004) p16. Added to this page on 2014 Mar 10.^
  38. Aristotle (350BCE) Book IX chapter IV. Added to this page on 2013 Mar 04.^
  39. Sagan (1995) p194-5.^
  40. Sagan (1995) p364,366.^
  41. Russell (1946) p512.^
  42. Skeptical Inquirer (2007 May/Jun) p42. Alan Scott is professor of physics at the University of Wisconsin-Stout in Menomonie, Wisconsin, 54751. He received his PhD in 1995 from Kent State University in experimental nuclear physics. His quote referenced the American Physical Society Council (1999) What is Science Policy Statement.^
  43. The Guardian (2005 May 17) "Britain at forefront of move to freely available research".^
  44. Prof. Michael Geist BBC News article "Push for open access to research" (2007 Feb 28). Article accessed 2007 Apr 12.^^

© 2016 Vexen Crabtree. All rights reserved.