

Experimental error – a fact of scientific life.

            Experimental error is always with us; it is in the nature of scientific measurement that uncertainty is associated with every quantitative result. This may be due to inherent limitations in the measuring equipment, in the measuring techniques, or in the experience and skill of the experimenter. Mistakes, however, do not count as part of the analysis, though it has to be said that the accounts given by students dwell too often on mistakes – blunders, let's not be coy – and too seldom on the quantitative assessment of error. Dwelling on blunders may be easier, but it is not quantitative and does not present much of a test of the quality of the results.
            The development of the skill of error assessment is the purpose of these pages. They are not intended as a course in statistics, so there is nothing concerning the analysis of large amounts of data.

The Origins of Error

            Errors – or uncertainties in experimental data – can arise in numerous ways. Their quantitative assessment is necessary since only then can a hypothesis be tested properly. The modern theory of atomic structure is believed because it quantitatively predicted all sorts of atomic properties; yet the experiments used to determine them were inevitably subject to uncertainty, so there has to be some set of criteria that can be used to decide whether two compared quantities are the same or not, or whether a particular reading truly belongs to a set of readings. Melting point results from a given set of trials are an example of the latter.

Blunders (mistakes).
            Mistakes (or the much stronger 'blunder'), such as dropping a small amount of solid on the balance pan, are not errors in the sense meant in these pages.
Unfortunately many critiques of investigations written by students are fond of quoting blunders as a source of error, probably because they're easy to think of. They are neither quantitative nor helpful; experimental error in the true sense of uncertainty cannot be assessed if the experimenter was simply unskilled.

Human error.
            This is often confused with blunders, but is rather different – though one person's human error is another's blunder, no doubt. Really it hinges on the experimenter doing the experiment truly to the best of their ability, but being let down by inexperience. Such errors lessen with practice. They also do not help in the quantitative assessment of error. An example of this would be losing a little solid while transferring it from a weighing boat to a test tube.
          Only if the human error has a significant impact on the experiment should the student mention it.

 Instrumental limitations.
            Uncertainties are inherent in any measuring instrument. A ruler, even if as well-made as is technologically possible, has calibrations of finite width; a 25.0 cm3 pipette of grade B accuracy delivers this volume to within 0.06 cm3 if used correctly. A digital balance showing three decimal places can only weigh to within 0.0005 g by its very nature and even then only if it rounds the figures to those three places.
            Calibrations are made under certain conditions, which have to be reproduced if the calibrations are to be true within the specified limits. Volumetric apparatus is usually calibrated for 20 °C, for example; the laboratory is usually at some other temperature.
            Analogue devices such as thermometers or pipettes often require the observer to interpolate between graduations on the scale. Some people will be better at this than others.
            These limitations exist, but they are unlikely to be significant sources of error in your experiment.
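To see why such limitations are rarely the dominant error, it helps to express them as fractions of the reading. The sketch below (not part of the original text; the 1.500 g sample mass is an invented illustration) uses the pipette and balance tolerances quoted above.

```python
# Minimal sketch: turning the instrument tolerances quoted in the text
# into relative (fractional) uncertainties.

def relative_uncertainty(value, absolute_uncertainty):
    """Return the relative (fractional) uncertainty of a reading."""
    return absolute_uncertainty / value

# Grade B 25.0 cm3 pipette: +/- 0.06 cm3 (figure from the text)
pipette = relative_uncertainty(25.0, 0.06)

# Three-decimal-place balance: +/- 0.0005 g on a hypothetical 1.500 g sample
balance = relative_uncertainty(1.500, 0.0005)

print(f"pipette: {pipette:.4%}")   # about 0.24%
print(f"balance: {balance:.4%}")   # about 0.03%
```

Both figures are well under one percent, which is usually small compared with other sources of uncertainty in a teaching-lab experiment.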

Observing the system may cause errors.
            If you have a hot liquid and you need to measure its temperature, you will dip a thermometer into it. This will inevitably cool the liquid slightly. The amount of cooling is unlikely to be a source of major error, but it is there nevertheless.
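The size of this observer effect can be estimated with a simple heat balance. The numbers below are invented for illustration (100 g of water at 80 °C, a thermometer of assumed total heat capacity 2 J/K starting at room temperature); they are not from the original text.

```python
# Rough estimate of how much a thermometer cools a hot liquid,
# using a simple heat balance at thermal equilibrium:
#   T_final = (m*c*T_liquid + C_t*T_thermo) / (m*c + C_t)

def final_temperature(m, c, t_liquid, c_thermo, t_thermo):
    """Equilibrium temperature after dipping a thermometer into a liquid.

    m, c     -- mass (g) and specific heat (J/g.K) of the liquid
    c_thermo -- total heat capacity of the thermometer (J/K), an assumed value
    t_thermo -- initial temperature of the thermometer (degC)
    """
    return (m * c * t_liquid + c_thermo * t_thermo) / (m * c + c_thermo)

t = final_temperature(m=100.0, c=4.18, t_liquid=80.0, c_thermo=2.0, t_thermo=20.0)
print(f"liquid cools by about {80.0 - t:.2f} K")  # roughly 0.3 K
```

A drop of a few tenths of a kelvin supports the point above: the effect is real but unlikely to be a major source of error.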

Errors due to external influences.
            Such errors may come from draughts on the balance pan, for example (though this seems pretty close to a blunder), or maybe from impurity in the chemicals used. Again such things are unlikely to be significant in a carefully-designed and executed experiment, but are often discussed by students, again because they are fairly obvious things.

Not all measurements have well-defined values.
            The temperature of a system, or its mass, for example, has particular values which can be determined to acceptable degrees of uncertainty with suitable care. Other properties do not; the diameter of a planet, for example, although quoted in tables of data, is a mean value. The same is true for the thickness of a piece of paper or the diameter of a wire. These measurements will vary somewhat at different places. It is important to realize what sort of data you are dealing with.

            Many scientific measurements are made on populations. The final value that you report for a melting point is drawn from a population, albeit rather a small one. It is intuitively understood that the more samples you have from a given population, the smaller the error is likely to be. That is why students shouldn't be satisfied with one melting point of a substance, but should obtain at least two.
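The intuition that more samples mean less error can be made quantitative: the standard error of the mean shrinks in proportion to one over the square root of the number of readings. A short sketch, using invented melting-point data chosen to have the same spread in both sets:

```python
# Sketch (invented melting-point readings, degC) of why repeat
# measurements help: the standard error of the mean falls as 1/sqrt(n).
import math
import statistics

def standard_error(readings):
    """Estimated standard error of the mean of a set of readings."""
    return statistics.stdev(readings) / math.sqrt(len(readings))

two = [121.5, 122.3]                          # two trials
five = [121.5, 122.3, 121.9, 122.1, 121.7]    # five trials, similar spread

print(f"n=2: +/-{standard_error(two):.2f} degC")
print(f"n=5: +/-{standard_error(five):.2f} degC")
```

With these numbers the uncertainty in the mean drops from about 0.40 to about 0.14 degrees in going from two trials to five, which is the point of repeating the measurement.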
            Related to this are errors arising from unrepresentative samples. Suppose that a chemist wishes to time a particular reaction in a certain hood that is situated near a drafty vent in the lab. The rate of this reaction will depend on how drafty that area is, whether the heating or cooling is on, the ambient temperature of the lab during busy and slow periods, etc. So a measurement made at 3 o'clock on a Friday afternoon may be utterly unrepresentative of the mean rate of the reaction at some other location in the lab or at some other time. It doesn't matter how many samples one takes – if the sampling method is this biased, a true picture cannot be obtained. A large sampling does not of itself ensure greater accuracy.
            The bias in this example is fairly obvious. This is not always so, even to experienced investigators. Sir Ronald Fisher's famous text 'The Design of Experiments' deals with the difficulties of removing bias in biological investigations, and is a seminal work on statistical methods. Although this degree of analysis may seem beyond the scope of our experimental work, it will not be so if you go on to do research in many fields of science.

            In conclusion, when assessing possible errors in your experiment, try to determine the effect of each error on your final result, and list only those errors that have a significant impact on your experimental data.

* Adapted from : http://home.clara.net/rod.beavon/err_orig.htm