Having read Chapter 1, you should have some idea of the importance of experimental uncertainties and how they arise. You should also understand how uncertainties can be estimated in a few simple situations. In this chapter, you will learn some basic notations and rules of error analysis and study examples of their use in typical experiments in a physics laboratory. The aim is to familiarize you with the basic vocabulary of error analysis and its use in the introductory laboratory. Chapter 3 begins a systematic study of how uncertainties are actually evaluated.
Sections 2.1 to 2.3 define several basic concepts in error analysis and discuss general rules for stating uncertainties. Sections 2.4 to 2.6 discuss how these ideas can be used in typical experiments in an introductory physics laboratory. Finally, Sections 2.7 to 2.9 introduce fractional uncertainty and discuss its significance.
2.1 Best Estimate ± Uncertainty
We have seen that the correct way to state the result of a measurement is to give a best estimate of the quantity and the range within which you are confident the quantity lies. For example, the result of the timings discussed in Section 1.6 was reported as
best estimate of time = 2.4 s,
probable range: 2.3 to 2.5 s. (2.1)
Here, the best estimate, 2.4 s, lies at the midpoint of the estimated range of probable values, 2.3 to 2.5 s, as it has in all the examples. This relationship is obviously natural and pertains in most measurements. It allows the results of the measurement to be expressed in compact form. For example, the measurement of the time recorded in (2.1) is usually stated as follows:
measured value of time = 2.4 ± 0.1 s. (2.2)
This single equation is equivalent to the two statements in (2.1).
In general, the result of any measurement of a quantity x is stated as
(measured value of x) = x_best ± δx. (2.3)
This statement means, first, that the experimenter's best estimate for the quantity concerned is the number x_best, and second, that he or she is reasonably confident the quantity lies somewhere between x_best - δx and x_best + δx. The number δx is called the uncertainty, or error, or margin of error in the measurement of x. For convenience, the uncertainty δx is always defined to be positive, so that x_best + δx is always the highest probable value of the measured quantity and x_best - δx the lowest.
I have intentionally left the meaning of the range x_best - δx to x_best + δx somewhat vague, but it can sometimes be made more precise. In a simple measurement such as that of the height of a doorway, we can easily state a range within which we are absolutely certain the measured quantity lies. Unfortunately, in most scientific measurements, such a statement is hard to make. In particular, to be completely certain that the measured quantity lies between x_best - δx and x_best + δx, we usually have to choose a value for δx that is too large to be useful. To avoid this situation, we can sometimes choose a value for δx that lets us state with a certain percent confidence that the actual quantity lies within the range x_best ± δx. For instance, the public opinion polls conducted during elections are traditionally stated with margins of error that represent 95% confidence limits. The statement that 60% of the electorate favor Candidate A, with a margin of error of 3 percentage points (60 ± 3), means that the pollsters are 95% confident that the percentage of voters favoring Candidate A is between 57 and 63; in other words, after many elections, we should expect the correct answer to have been inside the stated margins of error 95% of the time and outside these margins only 5% of the time.
Obviously, we cannot state a percent confidence in our margins of error until we understand the statistical laws that govern the process of measurement. I return to this point in Chapter 4. For now, let us be content with defining the uncertainty δx so that we are "reasonably certain" the measured quantity lies between x_best - δx and x_best + δx.
2.2 Significant Figures
Several basic rules for stating uncertainties are worth emphasizing. First, because the quantity δx is an estimate of an uncertainty, obviously it should not be stated with too much precision. If we measure the acceleration of gravity g, it would be absurd to state a result like
(measured g) = 9.82 ± 0.02385 m/s². (2.4)
The uncertainty in the measurement cannot conceivably be known to four significant figures. In high-precision work, uncertainties are sometimes stated with two significant figures, but for our purposes we can state the following rule:
Rule for Stating Uncertainties
Experimental uncertainties should almost always be rounded to one significant figure. (2.5)
Thus, if some calculation yields the uncertainty δg = 0.02385 m/s², this answer should be rounded to δg = 0.02 m/s², and the conclusion (2.4) should be rewritten as
(measured g) = 9.82 ± 0.02 m/s². (2.6)
An important practical consequence of this rule is that many error calculations can be carried out mentally without using a calculator or even pencil and paper.
The rule (2.5) has only one significant exception. If the leading digit in the uncertainty δx is a 1, then keeping two significant figures in δx may be better. For example, suppose that some calculation gave the uncertainty δx = 0.14. Rounding this number to δx = 0.1 would be a substantial proportional reduction, so we could argue that retaining two figures might be less misleading, and quote δx = 0.14. The same argument could perhaps be applied if the leading digit is a 2 but certainly not if it is any larger.
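The rounding rule (2.5), together with the leading-digit exception just described, is simple enough to capture in a few lines of code. The sketch below is illustrative only (the function name is my own, and it treats a leading 2 like any larger digit):

```python
import math

def round_uncertainty(delta):
    """Round an uncertainty to one significant figure, keeping two
    significant figures when the leading digit is a 1 (rule 2.5 and
    its exception)."""
    if delta <= 0:
        raise ValueError("uncertainty must be positive")
    exponent = math.floor(math.log10(delta))   # decimal position of leading digit
    leading = delta / 10 ** exponent           # leading digit, in [1, 10)
    figures = 2 if leading < 2 else 1          # keep two figures if leading digit is 1
    return round(delta, -exponent + figures - 1)
```

For example, `round_uncertainty(0.02385)` gives 0.02, while `round_uncertainty(0.14)` keeps both figures and returns 0.14.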
Once the uncertainty in a measurement has been estimated, the significant figures in the measured value must be considered. A statement such as
measured speed = 6051.78 ± 30 m/s (2.7)
is obviously ridiculous. The uncertainty of 30 means that the digit 5 might really be as small as 2 or as large as 8. Clearly the trailing digits 1, 7, and 8 have no significance at all and should be rounded. That is, the correct statement of (2.7) is
measured speed = 6050 ± 30 m/s. (2.8)
The general rule is this:
Rule for Stating Answers
The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty. (2.9)
For example, the answer 92.81 with an uncertainty of 0.3 should be rounded as
92.8 ± 0.3.
If its uncertainty is 3, then the same answer should be rounded as
93 ± 3,
and if the uncertainty is 30, then the answer should be
90 ± 30.
An important qualification to rules (2.5) and (2.9) is as follows: To reduce inaccuracies caused by rounding, any numbers to be used in subsequent calculations should normally retain at least one significant figure more than is finally justified. At the end of the calculations, the final answer should be rounded to remove these extra, insignificant figures. An electronic calculator will happily carry numbers with far more digits than are likely to be significant in any calculation you make in a laboratory. Obviously, these numbers do not need to be rounded in the middle of a calculation but certainly must be rounded appropriately for the final answers.
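The two rules (2.5) and (2.9) can be combined into one small formatting helper. The following sketch is hypothetical (it always keeps exactly one significant figure in the uncertainty, ignoring the leading-1 exception, and handles only ordinary decimal notation, not scientific notation):

```python
import math

def format_measurement(value, delta):
    """Format 'value ± delta' with the uncertainty rounded to one
    significant figure (rule 2.5) and the value rounded to the same
    decimal position (rule 2.9)."""
    exponent = math.floor(math.log10(delta))   # decimal position of delta's leading digit
    delta_r = round(delta, -exponent)
    value_r = round(value, -exponent)
    places = max(0, -exponent)                 # decimal places to display
    return f"{value_r:.{places}f} ± {delta_r:.{places}f}"
```

For example, this helper turns the absurd statement (2.7) into the sensible (2.8): `format_measurement(6051.78, 30)` returns "6050 ± 30".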
Note that the uncertainty in any measured quantity has the same dimensions as the measured quantity itself. Therefore, writing the units after both the answer and the uncertainty is clearer and more economical, as in Equations (2.6) and (2.8). By the same token, if a measured number is so large or small that it calls for scientific notation (the use of the form 3 × 10³ instead of 3000, for example), then it is simpler and clearer to put the answer and uncertainty in the same form. For example, the result
measured charge = (1.61 ± 0.05) × 10⁻¹⁹ coulombs
is much easier to read and understand in this form than it would be in the form
measured charge = 1.61 × 10⁻¹⁹ ± 5 × 10⁻²¹ coulombs.
2.3 Discrepancy
Before I address the question of how to use uncertainties in experimental reports, a few important terms should be introduced and defined. First, if two measurements of the same quantity disagree, we say there is a discrepancy. Numerically, we define the discrepancy between two measurements as their difference:
discrepancy = difference between two measured values of the same quantity. (2.10)
More specifically, each of the two measurements consists of a best estimate and an uncertainty, and we define the discrepancy as the difference between the two best estimates. For example, if two students measure the same resistance as follows,
Student A: 15 ± 1 ohms
and
Student B: 25 ± 2 ohms,
their discrepancy is
discrepancy = 25 - 15 = 10 ohms.
Recognize that a discrepancy may or may not be significant. The two measurements just discussed are illustrated in Figure 2.1(a), which shows clearly that the discrepancy of 10 ohms is significant because no single value of the resistance is compatible with both measurements. Obviously, at least one measurement is incorrect, and some careful checking is needed to find out what went wrong.
Figure 2.1. (a) Two measurements of the same resistance. Each measurement includes a best estimate, shown by a black dot, and a range of probable values, shown by a vertical error bar. The discrepancy (difference between the two best estimates) is 10 ohms and is significant because it is much larger than the combined uncertainty in the two measurements. Almost certainly, at least one of the experimenters made a mistake. (b) Two different measurements of the same resistance. The discrepancy is again 10 ohms, but in this case it is insignificant because the stated margins of error overlap. There is no reason to doubt either measurement (although they could be criticized for being rather imprecise).
Suppose, on the other hand, two other students had reported these results:
Student C: 16 ± 8 ohms
and
Student D: 26 ± 9 ohms.
Here again, the discrepancy is 10 ohms, but in this case the discrepancy is insignificant because, as shown in Figure 2.1(b), the two students' margins of error overlap comfortably and both measurements could well be correct. The discrepancy between two measurements of the same quantity should be assessed not just by its size but, more importantly, by how big it is compared with the uncertainties in the measurements.
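The comparison just described - judging a discrepancy against the combined uncertainties rather than against its absolute size - can be sketched as code. The function name and return convention here are my own; overlap of the two probable ranges is taken as the criterion for an insignificant discrepancy:

```python
def discrepancy(best1, delta1, best2, delta2):
    """Return the discrepancy between two measurements (best ± delta)
    and whether it is significant, i.e. larger than the combined
    uncertainties, so that the two probable ranges do not overlap."""
    disc = abs(best1 - best2)
    significant = disc > delta1 + delta2
    return disc, significant
```

For instance, two measurements such as 15 ± 1 and 25 ± 2 ohms show a significant 10-ohm discrepancy, whereas 16 ± 8 and 26 ± 9 ohms show an insignificant one.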
In the teaching laboratory, you may be asked to measure a quantity that has been measured carefully many times before, and for which an accurate accepted value is known and published, for example, the electron's charge or the universal gas constant. This accepted value is not exact, of course; it is the result of measurements and, like all measurements, has some uncertainty. Nonetheless, in many cases the accepted value is much more accurate than you could possibly achieve yourself. For example, the currently accepted value of the universal gas constant R is
(accepted R) = 8.31451 ± 0.00007 J/(mol·K). (2.11)
As expected, this value is uncertain, but the uncertainty is extremely small by the standards of most teaching laboratories. Thus, when you compare your measured value of such a constant with the accepted value, you can usually treat the accepted value as exact.
Although many experiments call for measurement of a quantity whose accepted value is known, few require measurement of a quantity whose true value is known. In fact, the true value of a measured quantity can almost never be known exactly and is, in fact, hard to define. Nevertheless, discussing the difference between a measured value and the corresponding true value is sometimes useful. Some authors call this difference the true error.
2.4 Comparison of Measured and Accepted Values
Performing an experiment without drawing some sort of conclusion has little merit. A few experiments may have mainly qualitative results - the appearance of an interference pattern on a ripple tank or the color of light transmitted by some optical system - but the vast majority of experiments lead to quantitative conclusions, that is, to a statement of numerical results. It is important to recognize that the statement of a single measured number is completely uninteresting. Statements that the density of some metal was measured as such-and-such, or that the momentum of a cart was measured as so-and-so, are, by themselves, of no interest. An interesting conclusion must compare two or more numbers: a measurement with the accepted value, a measurement with a theoretically predicted value, or several measurements, to show that they are related to one another in accordance with some physical law. It is in such comparisons of numbers that error analysis is so important. This and the next two sections discuss three typical experiments to illustrate how the estimated uncertainties are used to draw a conclusion.
Perhaps the simplest type of experiment is a measurement of a quantity whose accepted value is known. As discussed, this exercise is a somewhat artificial experiment peculiar to the teaching laboratory. The procedure is to measure the quantity, estimate the experimental uncertainty, and compare these values with the accepted value. Thus, in an experiment to measure the speed of sound in air (at standard temperature and pressure), Student A might arrive at the conclusion
A's measured speed = 329 ± 5 m/s (2.12)
compared with the
accepted speed = 331 m/s. (2.13)
Student A might choose to display this result graphically as in Figure 2.2. She should certainly include in her report both Equations (2.12) and (2.13) next to each other, so her readers can clearly appreciate her result. She should probably add an explicit statement that because the accepted value lies inside her margins of error, her measurement seems satisfactory.
Figure 2.2. Three measurements of the speed of sound at standard temperature and pressure. Because the accepted value (331 m/s) is within Student A's margins of error, her result is satisfactory. The accepted value is just outside Student B's margin of error, but his measurement is nevertheless acceptable. The accepted value is far outside Student C's stated margins, and his measurement is definitely unsatisfactory.
The meaning of the uncertainty δx is that the correct value of x probably lies between x_best - δx and x_best + δx; it is certainly possible that the correct value lies slightly outside this range. Therefore, a measurement can be regarded as satisfactory even if the accepted value lies slightly outside the estimated range of the measured value. For example, if Student B found the value
B's measured speed = 325 ± 5 m/s,
he could certainly claim that his measurement is consistent with the accepted value of 331 m/s.
On the other hand, if the accepted value is well outside the margins of error (the discrepancy is appreciably more than twice the uncertainty, say), there is reason to think something has gone wrong. For example, suppose the unlucky Student C finds
C's measured speed = 345 ± 2 m/s (2.14)
compared with the
accepted speed = 331 m/s (2.15)
Student C's discrepancy is 14 m/s, which is seven times bigger than his stated uncertainty (see Figure 2.2). He will need to check his measurements and calculations to find out what has gone wrong.
Unfortunately, the tracing of C's mistake may be a tedious business because of the numerous possibilities. He may have made a mistake in the measurements or calculations that led to the answer 345 m/s. He may have estimated his uncertainty incorrectly. (The answer 345 ± 15 m/s would have been acceptable.) He also might be comparing his measurement with the wrong accepted value. For example, the accepted value 331 m/s is the speed of sound at standard temperature and pressure. Because standard temperature is 0°C, there is a good chance the measured speed in (2.14) was not taken at standard temperature. In fact, if the measurement was made at 20°C (that is, normal room temperature), the correct accepted value for the speed of sound is 343 m/s, and the measurement would be entirely acceptable.
Finally, and perhaps most likely, a discrepancy such as that between (2.14) and (2.15) may indicate some undetected source of systematic error (such as a clock that runs consistently slow, as discussed in Chapter 1). Detection of such systematic errors (ones that consistently push the result in one direction) requires careful checking of the calibration of all instruments and detailed review of all procedures.
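A crude version of this section's criterion - regard a result as acceptable unless its discrepancy from the accepted value is appreciably more than about twice the stated uncertainty - can be written as follows. This is only a rough rule of thumb, not a rigorous statistical test (Chapter 4 treats confidence properly), and the function name and default factor are my own:

```python
def consistent_with_accepted(measured, delta, accepted, factor=2.0):
    """Rough rule of thumb: is the discrepancy from the accepted value
    no more than `factor` times the stated uncertainty?"""
    return abs(measured - accepted) <= factor * delta
```

Applied to the speed-of-sound example of Figure 2.2, Students A and B pass this check, while Student C (discrepancy 14 m/s against a stated uncertainty of 2 m/s) fails it.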
2.5 Comparison of Two Measured Numbers
Many experiments involve measuring two numbers that theory predicts should be equal. For example, the law of conservation of momentum states that the total momentum of an isolated system is constant. To test it, we might perform a series of experiments with two carts that collide as they move along a frictionless track. We could measure the total momentum of the two carts before (p) and after (q) they collide and check whether p = q within experimental uncertainties. For a single pair of measurements, our results could be
initial momentum p = 1.49 ± 0.03 kg·m/s
and
final momentum q = 1.56 ± 0.06 kg·m/s.
Figure 2.3. Measured values of the total momentum of two carts before (p) and after (q) a collision. Because the margins of error for p and q overlap, these measurements are certainly consistent with conservation of momentum (which implies that p and q should be equal).
Here, the range in which p probably lies (1.46 to 1.52) overlaps the range in which q probably lies (1.50 to 1.62). (See Figure 2.3.) Therefore, these measurements are consistent with conservation of momentum. If, on the other hand, the two probable ranges were not even close to overlapping, the measurements would be inconsistent with conservation of momentum, and we would have to check for mistakes in our measurements or calculations, for possible systematic errors, and for the possibility that some external forces (such as gravity or friction) are causing the momentum of the system to change.
If we repeat similar pairs of measurements several times, what is the best way to display our results? First, using a table to record a sequence of similar measurements is usually better than listing the results as several distinct statements. Second, the uncertainties often differ little from one measurement to the next. For example, we might convince ourselves that the uncertainties in all measurements of the initial momentum p are about 0.03 kg·m/s and that the uncertainties in the final momentum q are all about 0.06 kg·m/s. If so, a good way to display our measurements would be as shown in Table 2.1.
For each pair of measurements, the probable range of values for p overlaps (or nearly overlaps) the range of values for q. If this overlap continues for all measurements, our results can be pronounced consistent with conservation of momentum. Note that our experiment does not prove conservation of momentum; no experiment can. The best you can hope for is to conduct many more trials with progressively smaller uncertainties and to find that all the results remain consistent with conservation of momentum.
In a real experiment, Table 2.1 might contain a dozen or more entries, and checking that each final momentum q is consistent with the corresponding initial momentum p could be tedious. A better way to display the results would be to add a fourth column that lists the differences p - q. If momentum is conserved, these values should be consistent with zero. The only difficulty with this method is that we must now compute the uncertainty in the difference p - q. This computation is performed as follows. Suppose we have made measurements
(measured p) = p_best ± δp
and
(measured q) = q_best ± δq.
The numbers p_best and q_best are our best estimates for p and q. Therefore, the best estimate for the difference (p - q) is p_best - q_best. To find the uncertainty in (p - q), we must decide on the highest and lowest probable values of (p - q). The highest value for (p - q) would result if p had its largest probable value, p_best + δp, at the same time that q had its smallest value, q_best - δq. Thus, the highest probable value for p - q is
highest probable value = p_best - q_best + (δp + δq). (2.16)
Similarly, the lowest probable value arises when p is smallest (p_best - δp) but q is largest (q_best + δq). Thus,
lowest probable value = p_best - q_best - (δp + δq). (2.17)
Combining Equations (2.16) and (2.17), we see that the uncertainty in the difference (p - q) is the sum of the original uncertainties. For example, if
δp = 0.03 kg·m/s and δq = 0.06 kg·m/s,
then
δ(p - q) = δp + δq = 0.09 kg·m/s.
We can now add an extra column for p - q to Table 2.1 and arrive at Table 2.2.
Whether our results are consistent with conservation of momentum can now be seen at a glance by checking whether the numbers in the final column are consistent with zero (that is, are less than, or comparable with, the uncertainty 0.09). Alternatively, and perhaps even better, we could plot the results as in Figure 2.4 and check visually. Yet another way to achieve the same effect would be to calculate the ratios q/p, which should all be consistent with the expected value q/p = 1. (Here, we would need to calculate the uncertainty in q/p, a problem discussed in Chapter 3.)
Figure 2.4. Three trials in a test of the conservation of momentum. The student has measured the total momentum of two carts before and after they collide (p and q, respectively). If momentum is conserved, the differences p - q should all be zero. The plot shows the value of p - q with its error bar for each trial. The expected value 0 is inside the margins of error in trials 1 and 2 and only slightly outside the margins in trial 3. Therefore, these results are consistent with the conservation of momentum.
Our discussion of the uncertainty in p - q applies to the difference of any two measured numbers. If we had measured any two numbers x and y and used our measured values to compute the difference x - y, then, by the argument just given, the resulting uncertainty in the difference would be the sum of the separate uncertainties in x and y. We have, therefore, established the following provisional rule:
Uncertainty in a Difference (Provisional Rule)
If two quantities x and y are measured with uncertainties δx and δy, and if the measured values x and y are used to calculate the difference q = x - y, then the uncertainty in q is the sum of the uncertainties in x and y:
δq ≈ δx + δy. (2.18)
I call this rule "provisional" because we will find in Chapter 3 that the uncertainty in the quantity q = x - y is often somewhat smaller than that given by Equation (2.18). Thus, we will be replacing the provisional rule (2.18) by an "improved" rule, in which the uncertainty in q = x - y is given by the so-called quadratic sum of δx and δy, as defined in Equation (3.13). Because this improved rule gives a somewhat smaller uncertainty for q, you will want to use it when appropriate. For now, however, let us be content with the provisional rule (2.18) for three reasons: (1) The rule (2.18) is easy to understand - much more so than the improved rule of Chapter 3. (2) In most cases, the difference between the two rules is small. (3) The rule (2.18) always gives an upper bound on the uncertainty in q = x - y; thus, we know at least that the uncertainty in x - y is never worse than the answer given in (2.18).
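Both the provisional rule (2.18) and the improved quadratic-sum rule previewed from Chapter 3 are one-line calculations. A sketch (the function names are my own):

```python
import math

def diff_uncertainty_provisional(dx, dy):
    """Provisional rule (2.18): uncertainties in a difference add directly."""
    return dx + dy

def diff_uncertainty_quadrature(dx, dy):
    """Improved rule of Chapter 3: the quadratic sum, which is never
    larger than the direct sum."""
    return math.sqrt(dx**2 + dy**2)
```

For the momentum example, δp = 0.03 and δq = 0.06 give a provisional uncertainty of 0.09 kg·m/s, while the quadratic sum gives about 0.067 kg·m/s.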
The result (2.18) is the first in a series of rules for the propagation of errors. To calculate a quantity q in terms of measured quantities x and y, we need to know how the uncertainties in x and y "propagate" to cause uncertainty in q. A complete discussion of error propagation appears in Chapter 3.
2.6 Checking Relationships with a Graph
Many physical laws imply that one quantity should be proportional to another. For example, Hooke's law states that the extension of a spring is proportional to the force stretching it, and Newton's law says that the acceleration of a body is proportional to the total applied force. Many experiments in a teaching laboratory are designed to check this kind of proportionality.
If one quantity y is proportional to some other quantity x, a graph of y against x is a straight line through the origin. Thus, to test whether y is proportional to x, you can plot the measured values of y against those of x and note whether the resulting points do lie on a straight line through the origin. Because a straight line is so easily recognizable, this method is a simple, effective way to check for proportionality.
To illustrate this use of graphs, let us imagine an experiment to test Hooke's law. This law, usually written F = kx, asserts that the extension x of a spring is proportional to the force F stretching it, so x = F/k, where k is the "force constant" of the spring. A simple way to test this law is to hang the spring vertically and suspend various masses m from it. Here, the force F is the weight mg of the load, so the extension should be
x = mg/k = (g/k)m. (2.19)
The extension x should be proportional to the load m, and a graph of x against m should be a straight line through the origin.
If we measure x for a variety of different loads m and plot our measured values of x and m, the resulting points almost certainly will not lie exactly on a straight line. Suppose, for example, we measure the extension x for eight different loads m and get the results shown in Table 2.3. These values are plotted in Figure 2.5(a), which also shows a possible straight line that passes through the origin and is reasonably close to all eight points. As we should have expected, the eight points do not lie exactly on any line. The question is whether this result stems from experimental uncertainties (as we would hope), from mistakes we have made, or even from the possibility the extension x is not proportional to m. To answer this question, we must consider our uncertainties.
As usual, the measured quantities, the extension x and the masses m, are subject to uncertainty. For simplicity, let us suppose that the masses used are known very accurately, so that the uncertainty in m is negligible. Suppose, on the other hand, that all measurements of x have an uncertainty of approximately 0.3 cm (as indicated in Table 2.3). For a load of 200 grams, for example, the extension would probably be in the range 1.1 ± 0.3 cm. Our first experimental point on the graph thus lies on the vertical line m = 200 grams, somewhere between x = 0.8 and x = 1.4 cm. This range is indicated in Figure 2.5(b), which shows an error bar through each point to indicate the range in which it probably lies. Obviously, we should expect to find a straight line that goes through the origin and passes through or close to all the error bars. Figure 2.5(b) has such a line, so we conclude that the data on which Figure 2.5(b) is based are consistent with x being proportional to m.
Figure 2.5. Three plots of extension x of a spring against the load m. (a) The data of Table 2.3 without error bars. (b) The same data with error bars to show the uncertainties in x. (The uncertainties in m are assumed to be negligible.) These data are consistent with the expected proportionality of x and m. (c) A different set of data, which are inconsistent with x being proportional to m.
We saw in Equation (2.19) that the slope of the graph of x against m is g/k. By measuring the slope of the line in Figure 2.5(b), we can therefore find the force constant k of the spring. By drawing the steepest and least steep lines that fit the data reasonably well, we could also find the uncertainty in this value for k.
If the best straight line misses a high proportion of the error bars, or if it misses any by a large distance (compared with the length of the error bars), our results would be inconsistent with x being proportional to m. This situation is illustrated in Figure 2.5(c). With the results shown there, we would have to recheck our measurements and calculations (including the calculation of the uncertainties) and consider whether x is not proportional to m for some reason. [In Figure 2.5(c), for instance, the first five points can be fitted to a straight line through the origin. This situation suggests that x may be proportional to m up to approximately 600 grams, but that Hooke's law breaks down at that point and the spring starts to stretch more rapidly.]
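The graphical test described above can also be made numerical. Each point with its error bar confines the slope of a line through the origin to the interval from (x - δx)/m to (x + δx)/m; the data are consistent with proportionality exactly when all these intervals share a common slope, and that shared interval also bounds the slope g/k (and hence k). The sketch below assumes, as in this section, negligible uncertainty in m; the sample values in the usage note are illustrative, not the data of Table 2.3:

```python
def slope_range(loads, extensions, delta_x):
    """For each measured point (m, x ± delta_x), a straight line through
    the origin passing through its error bar has slope between
    (x - delta_x)/m and (x + delta_x)/m.  Return the interval of slopes
    compatible with every point, or None if no single line fits."""
    lows = [(x - delta_x) / m for m, x in zip(loads, extensions)]
    highs = [(x + delta_x) / m for m, x in zip(loads, extensions)]
    lo, hi = max(lows), min(highs)
    return (lo, hi) if lo <= hi else None
```

For example, `slope_range([100, 200, 300], [0.6, 1.1, 1.8], 0.3)` (loads in grams, extensions in cm) returns a nonempty interval, so those points are consistent with proportionality.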
Thus far, we have supposed that the uncertainties are in x, as shown in the vertical error bars. If both x and m are subject to appreciable uncertainties the simplest way to display them is to draw vertical and horizontal error bars, whose lengths show the uncertainties in x and m respectively, as in Figure 2.6.
Figure 2.6. Measurements that have uncertainties in both variables can be shown by crosses made up of one error bar for each variable.
Each cross in this plot corresponds to one measurement of x and m, in which x probably lies in the interval defined by the vertical bar of the cross and m probably in that defined by the horizontal bar.
A slightly more complicated possibility is that one quantity may be expected to be proportional to a power of another. (For example, the distance traveled by a freely falling object in a time t is d = (1/2)gt², which is proportional to the square of t.) Let us suppose that y is expected to be proportional to x². Then
y = Ax², (2.20)
where A is some constant, and a graph of y against x should be a parabola with the general shape of Figure 2.7(a). If we were to measure a series of values for y and x and plot y against x, we might get a graph something like that in Figure 2.7(b). Unfortunately, visually judging whether a set of points such as these fit a parabola (or any other curve, except a straight line) is very hard. A better approach is to plot y against x². From Equation (2.20), we see that such a plot should be a straight line, which we can check easily, as in Figure 2.7(c).
Figure 2.7. (a) If y is proportional to x², a graph of y against x should be a parabola. (b) A plot of measured values of y against x is hard to judge against a parabola by eye. (c) The same data plotted as y against x²; if y is proportional to x², this plot should be a straight line.
In the same way, if y = Axⁿ (where n is any power), a graph of y against xⁿ should be a straight line, and by plotting the observed values of y against xⁿ, we can check easily for such a fit. There are various other situations in which a nonlinear relation (that is, one that gives a curved, nonlinear graph) can be converted into a linear one by a clever choice of variables to plot. Section 8.6 discusses an important example of such "linearization," which is worth mentioning briefly here. Often one variable y depends exponentially on another variable x:
y = A e^(Bx).
(For example, the activity of a radioactive sample depends exponentially on time.) For such relations, the natural logarithm of y is easily shown to be linear in x; that is, a graph of ln y against x should be a straight line for an exponential relationship.
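Numerically, this linearization amounts to taking ln y and fitting a straight line. A minimal least-squares sketch in pure Python (the function name is my own; treating the uncertainties of the fitted slope is deferred to Chapter 8):

```python
import math

def linearize_exponential(xs, ys):
    """If y = A * exp(B*x), then ln y = ln A + B*x is linear in x.
    Return the least-squares slope B and intercept ln A of ln y vs x."""
    n = len(xs)
    ls = [math.log(y) for y in ys]              # ln y for each data point
    mean_x = sum(xs) / n
    mean_l = sum(ls) / n
    slope = (sum((x - mean_x) * (l - mean_l) for x, l in zip(xs, ls))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_l - slope * mean_x
    return slope, intercept
```

For exact exponential data, the recovered slope is B and the intercept is ln A; for real measurements, the straightness of ln y against x is what confirms the exponential relationship.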
Many other, nongraphical ways are available to check the proportionality of two quantities. For example, if y = Ax, the ratio y/x should be constant. Thus, having tabulated the measured values of y and x, you could simply add a column to the table that shows the ratios y/x and check that these ratios are constant within their experimental uncertainties. Many calculators have a built-in function (called the correlation coefficient) to show how well a set of measurements fits a straight line. (This function is discussed in Section 9.3.) Even when another method is used to check that y = Ax, making the graphical check as well is an excellent practice. Graphs such as those in Figures 2.5(b) and (c) show clearly how well (or badly) the measurements verify the predictions; drawing such graphs helps you understand the experiment and the physical laws involved.