
Error Analysis

Prepared by Tom Mastin PLS

When we discuss Error Theory in surveying, we consider only the effect of random errors on our measurements. It is assumed that blunders and systematic errors have been removed from the measurements.

Error Theory in surveying has two components: first, the statistical precision associated with any measurement or position (usually we only look at position); and second, an appropriate way to distribute the error within our measurements.

Before looking at the statistics of measurement, we need to define a few words.

  • Accuracy – The relation of a measurement to the true value
  • Precision – The quality of the repeatability of a measurement

Accuracy is what we want to know in all measurements, but because there are always errors in measurement (except for counts), we never know the true value. We therefore use precision as an indicator of accuracy.

The three types of errors mentioned above are:

Blunders – sometimes called personal errors, these are mistakes: a misreading, writing down the wrong number, entering the wrong HI in the data collector, setting up over the wrong point. Everyone who has surveyed has made a blunder, and everyone who has made a blunder understands how to correct for them. Blunders are corrected by re-measurement. This is the only proper way to correct a blunder.

Systematic – sometimes called instrumental errors, these are errors whose direction and magnitude can be approximately determined based on knowledge of the equipment and procedures being used. The ACSM “Definitions of Surveying and Associated Terms” defines a systematic error as “An error whose algebraic sign and, to some extent, magnitude bear a fixed relation to some condition or set of conditions. They always follow some definite mathematical or physical law, and they are generally eliminated from a series of observations by computation or by systematic field methods. Also called regular error.”

Systematic errors can be caused by things such as minor misalignment of equipment or a change in temperature. Systematic errors will not show themselves in precision calculations. This means that a survey which ignores systematic errors may still show high precision, yet actually have low accuracy!

Random – These are the errors we cannot avoid. Error theory tells us that random errors tend to be small and can be in any direction. Under normal measurement processes, random errors are minimized by averaging, or taking the “Most Probable Value” of a measurement.

Random errors are caused by the fact that humans are not perfect. Estimating rod readings, sighting on targets, setting up over a point or leveling can all introduce minor errors into the measurement.

Errors tend to propagate, which means that a small error in one place, if ignored, can generate larger errors later on. Say you backsight a target 200 feet away and mis-sight it by 0° 00’ 10” with your total station; that means you are off center by only 0.01 foot. You turn to a foresight 600 feet away, creating a 0.03 foot error in position. You continue on for 1 mile further, now creating an error of 0.29 feet, just from that one minor mis-sight.
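These offsets are simply the sight distance multiplied by the tangent of the angular error (the 5,880 feet being the 600-foot foresight plus one more mile):

\text{offset} = \text{distance} \times \tan(\text{angular error})

200 \times \tan(0^\circ 00' 10'') \approx 0.01 \text{ ft} \qquad 600 \times \tan(0^\circ 00' 10'') \approx 0.03 \text{ ft} \qquad 5880 \times \tan(0^\circ 00' 10'') \approx 0.29 \text{ ft}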

Somehow we need to understand what error we might have for any given measurement or any given position, so that we have some understanding of the quality of our measurement. Everyone who has worked in surveying for a while will run a traverse or level loop that closes perfectly, which would lead you to assume that for that survey there are no errors. Yet when you go back and use those points, you do not get the exact same values. This is why we need to really understand the quality of our measurements.

Statistically Speaking

[Figure: the normal distribution (bell) curve]

Statistics is the discipline within mathematics that deals with error probability, or “how well can we estimate how good our measurement is”. There are two primary ways to generate good statistical values. The first, and best, is to take many repeat measurements. This would mean measuring the distance between two points 1,000 or 2,000 times. Not only would you measure the distance that many times, you would need to re-set up over each point before each measurement, and note all the atmospheric conditions at the time of each measurement. In Latin this is called a posteriori (derived from facts). Because this is not feasible if we want to get paid for our work, in surveying, as well as many other disciplines, we use what is often called a priori statistics, which means that we come up with statistics based on some previous knowledge. The previous knowledge is based on standardization of our equipment and procedures. The standardization tells us what the anticipated error is going to be under certain conditions.

The most basic way we express the quality of our measurement is the precision we show in our final answer. A distance of 1234.56 feet would indicate that we did not pace that distance or use a distance wheel, but that we measured it with a method that gave us an accuracy of ±0.005 feet. This is the theory of “Significant Figures”, which says that measurements should be expressed to the first digit of estimation, or first doubtful figure. The reality is that our general procedures for precise distance measurement in surveying do not give us an accuracy of 0.005 feet, but they are better than 0.05 feet; therefore our first doubtful figure would be 0.01 feet. Along with this theory come a number of rules for determining the “Least Significant Figure” in calculations.

Significant Figures

There are a number of chapters in surveying and error theory books dealing with how to determine the least significant figure of a calculation, or to what digit you should show the final answer. It is important to remember that the rules are not absolute, but are methods for a quick determination of the significance of the answer. You can directly determine the significance of a calculation if you are ever uncertain. A simple example: you measured a rectangular parcel as 123.45 feet by 234.56 feet. You are indicating that you are measuring to ±0.005 feet, but let’s say 0.01 foot for this example. The area of the parcel would then be 123.45 x 234.56 = 28,956.43 sq. ft., but to what significance? If we were 0.01 foot short on both measurements, then the area would be 123.46 x 234.57 = 28,960.01 sq. ft. The change in area is roughly 4 sq. ft., so it would be most appropriate to show the area to the nearest 10 sq. ft.
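Rather than memorizing rules, the significance can always be checked directly with this perturbation test. A minimal Python sketch of the example above:

a, b = 123.45, 234.56                        # measured sides, ft
shift = 0.01                                 # the assumed doubtful figure, ft
area = a * b                                 # 28,956.43 sq ft
area_shifted = (a + shift) * (b + shift)     # both sides off by one doubtful figure
print(f"{area:.2f} vs {area_shifted:.2f}: a change of {area_shifted - area:.2f} sq ft")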

The basic rules in determining the “Least Significant Figure” are:

Significant figures only relate to measurements, so if you are converting measurements using constants (1 acre = 43,560 sq ft; 1 meter = 3.28083333 survey feet), the constant has no impact on the significance.

When adding, the least significant figure of the final answer is determined by the position of the least significant value among the measurements. Suppose you measure a long line with a steel tape (they still exist), so that you have a series of distances of 100.23 feet, 100.27 feet, 100.48 feet and 100.52 feet, and then you break the tape, so you pace the last distance of 55 feet. It would not be appropriate to show the answer as 456.50 feet, but instead 456 feet, assuming you can pace to the nearest foot.

When multiplying, you don’t look at the position but at the count of significant figures. If we look at the area calculation previously done, each side had 5 significant figures (123.45 and 234.56), so the answer should be significant to the fifth digit from the left: 28,956 sq. ft. As we saw, that indicates a little more precision than we actually have, but it is close.

The significance of a change in an equation’s variable can be tested by changing the value of the variable and observing the change in the result. A zenith angle of 72°30’ and a slope distance of 425.57 feet are measured, and the horizontal distance is determined to be 405.87 feet. By changing the zenith angle to 72°31’, the horizontal distance changes to 405.91 feet, a change of 0.04 feet. If the zenith angle were larger (closer to 90°), the effect of the same angle change on the horizontal distance would be smaller.
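A minimal Python sketch of the same test, assuming the horizontal distance is computed as the slope distance times the sine of the zenith angle:

import math

sd = 425.57                                  # slope distance, ft
for minutes in (30, 31):                     # zenith angles 72°30' and 72°31'
    za = math.radians(72 + minutes / 60)
    print(f"zenith 72°{minutes}': HD = {sd * math.sin(za):.2f} ft")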

As mentioned before, there is much more to determining significant figures but it is only an estimation of the significance.

STATISTICAL TERMS

Statistics allows us to be more rigorous in how we express the precision of our measurements. In order to discuss how we express our error, we must first establish some common terms.

ERROR – The difference between our measurement and the true value. Since we cannot determine the true value, we cannot determine the true error of our measurement.

MEAN – The average value of a series of measurements.

RESIDUAL – This is the difference between our measurement and the most probable value. Generally our most probable value is the mean. It is generally expressed using a lowercase v.

NORMAL DISTRIBUTION CURVE – This is a plot of the truly random errors in a very large set of repetitions of a single measurement. It is this plot that statistics uses to determine the probable error.

STANDARD DEVIATION – This, in general terms, is an expression of the precision of any one measurement in a series of measurements. It is often called the mean square error and is commonly expressed using the Greek letter sigma (σ), so it is sometimes called 1-sigma. Under the normal distribution curve, 68.3% of the measurements will fall within ± the standard deviation. The formula for the Standard Deviation is:

\large \sigma_s = \sqrt{\frac{\sum v^2}{n-1}}  

where n = number of measurements

STANDARD ERROR OF THE MEAN – This is really the standard deviation of the mean, or of the whole set of measurements. Think of it this way: if two surveyors went out and measured between two points, surveyor 1 getting a distance of 12,345.56 feet with a standard deviation of 0.02’ and surveyor 2 getting a distance of 12,345.65 feet with a standard deviation of 0.02’, which answer would be better? Well, if surveyor 1 measured the distance twice and surveyor 2 measured the distance 10 times, it would seem more appropriate to use surveyor 2’s measurement. This is what the standard error of the mean gives us. The formula for the Standard Error of the Mean is:

\large\sigma_m = \frac{\sigma_s}{\sqrt{n}}
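Both formulas are straightforward to apply. A minimal Python sketch, using a hypothetical set of repeated distance measurements:

import math

measurements = [1234.56, 1234.52, 1234.59, 1234.55, 1234.54]   # hypothetical, ft

n = len(measurements)
mean = sum(measurements) / n
v = [m - mean for m in measurements]                 # residuals
sigma_s = math.sqrt(sum(r * r for r in v) / (n - 1))
sigma_m = sigma_s / math.sqrt(n)                     # standard error of the mean
print(f"mean = {mean:.3f} ft, sigma_s = {sigma_s:.4f} ft, sigma_m = {sigma_m:.4f} ft")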

Using Statistics in Surveying

Based on the definition of the standard deviation, if we measured a distance 100 times and had a standard deviation of 0.05 feet, about 32 of those measurements would fall outside of ±0.05 feet. In surveying, saying our precision is 0.05 feet generally conveys a higher confidence in our measurement than that. So often we express our precision by doubling the standard deviation. If we said our measurement is ±0.10 feet, the normal distribution curve would say that 95.5% of our measurements fall within that range. Doubling the standard deviation is often called 2-sigma (2σ); sometimes it is referred to as the 95% error. Of course you can increase or decrease the confidence level, but 2-sigma is a good confidence level for general surveying practices.

Again, in surveying we generally do not take enough measurements to create significant statistics, so we rely on procedure and equipment statistics to provide us with the confidence level in our measurement. The important issue in land surveying is dealing with the propagation of errors in our measurements. Say we have a total station that has the following specification for its laser EDM: accuracy (2-sigma) of 5 mm + 2 ppm. In performing a large open traverse we obtain the following distances:

12,457.68 feet ±0.041 ft
8,256.34 feet ±0.033 ft
25,897.89 feet ±0.068 ft
17,895.11 feet ±0.052 ft
1,232.27 feet ±0.019 ft

The precisions shown are calculated by converting 5 mm to feet (0.016 ft), then computing the 2 parts per million by dividing the distance by 500,000, and adding the two numbers. It is important to understand that in this example we are ignoring all the other random errors of setting up over a point, sighting and such.
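The same arithmetic in a minimal Python sketch (5 mm converted at 304.8 mm per foot):

distances = [12457.68, 8256.34, 25897.89, 17895.11, 1232.27]   # ft
constant = 5 / 304.8                         # 5 mm in feet, about 0.016
for d in distances:
    ppm = d * 2 / 1_000_000                  # 2 ppm = distance / 500,000
    print(f"{d:>10.2f} ft  ±{constant + ppm:.3f} ft")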

We want to know the length of our traverse and how certain we are of that length. The length is determined by adding the distances, which equals 65,739.29 feet. To determine the 2-sigma precision, the rules of statistics tell us to take the square root of the sum of the squares of the errors. The formula looks like this:

\large2\sigma_{Sum} = \pm\sqrt{0.041^2+0.033^2+0.068^2+0.052^2+0.019^2}=0.102

Note the sum of the errors is 0.213 feet, but our value takes into account that these are random errors, which can be positive as well as negative. So our final value should look like:

65,739.29 feet ±0.10 feet
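The root-sum-square combination in a minimal Python sketch:

import math

errors = [0.041, 0.033, 0.068, 0.052, 0.019]     # 2-sigma of each distance, ft
print(f"2-sigma of the sum: ±{math.sqrt(sum(e * e for e in errors)):.3f} ft")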

Calculating the precision of a product requires you to determine the effect of each measurement’s error on the other measurement. The general formula is:

\large2\sigma_{Product} = \pm\sqrt{(2\sigma_A \times B)^2 + (2\sigma_B \times A)^2}

If we look at the parcel area problem we did previously, adding in some made-up 2-sigma precisions, we have a distance of 123.45 ± 0.015 feet and 234.56 ± 0.020 feet. The area is still 123.45 x 234.56 = 28,956.43 sq. ft.; the 2-sigma error would be

\large 2\sigma_{Product} = \pm\sqrt{(0.015 \times 234.56)^2 + (0.020 \times123.45)^2} = 4.298

Therefore our answer would be 28,956.43 ± 4.30 square feet.
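A minimal Python sketch of the product formula, using the same made-up precisions:

import math

a, sig_a = 123.45, 0.015                     # side and its 2-sigma, ft
b, sig_b = 234.56, 0.020
sig_area = math.hypot(sig_a * b, sig_b * a)  # root-sum-square of the two effects
print(f"area = {a * b:.2f} ± {sig_area:.2f} sq ft")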

In conventional surveying, we determine positions by measuring the angle and distance between points. What we are looking for is not so much the precision of our distance and direction, but the precision of our position. Positions are determined by adding latitudes and departures to existing coordinate values. Therefore, if we want to determine the precision of a position, we must look at the precision of the latitude and departure. If we had a direction going almost due east, it is easy to see that the error in the departure (change in easting) would depend primarily on the distance, while the error in the latitude (change in northing) would depend primarily on the direction. To show this, let’s use our previous total station, which has an EDM with an accuracy (2-sigma) of 5 mm + 2 ppm and an angular accuracy (2-sigma) of 7 seconds. We set up on one point and determine the following distances and directions.

3456.78 feet @ Azimuth of 89° 30’ 45”

3456.78 feet @ Azimuth of 23° 30’ 45”

First, determine the distance and angular accuracy for a line with a distance of 3456.78 feet. The distance accuracy is 5 mm + 2 ppm, which comes out to be ±0.023 feet. To express the angular accuracy of 7 seconds as a distance, multiply the distance by the tangent of 7 seconds: 3456.78 x tan(00° 00’ 07”) = ±0.117 feet.
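The same two accuracies in a minimal Python sketch:

import math

dist = 3456.78                               # ft
sig_dist = 5 / 304.8 + dist * 2 / 1_000_000  # 5 mm + 2 ppm, about ±0.023 ft
sig_ang = dist * math.tan(math.radians(7 / 3600))   # 7 seconds, about ±0.117 ft
print(f"±{sig_dist:.3f} ft along the line, ±{sig_ang:.3f} ft across it")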

The latitudes of the lines are determined using the following formula:

Latitude = Distance \times \cos(Azimuth)
Latitude = 3456.78 \times \cos(89^\circ 30' 45'') = 29.41
Latitude = 3456.78 \times \cos(23^\circ 30' 45'') = 3169.77

The departures of the lines are determined using the following formula:

Departure = Distance \times \sin(Azimuth)
Departure = 3456.78 \times \sin(89^\circ 30' 45'') = 3456.65
Departure = 3456.78 \times \sin(23^\circ 30' 45'') = 1379.08
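The same latitudes and departures in a minimal Python sketch:

import math

dist = 3456.78                               # ft
for d, m, s in [(89, 30, 45), (23, 30, 45)]:
    az = math.radians(d + m / 60 + s / 3600)
    lat = dist * math.cos(az)                # change in northing
    dep = dist * math.sin(az)                # change in easting
    print(f"Az {d}°{m}'{s}\": lat = {lat:.2f} ft, dep = {dep:.2f} ft")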

The confidence in the latitude will approximately equal

2\sigma_{Latitude} = \sqrt{(\sin(Az) \times 2\sigma_{angle})^2 + (\cos(Az) \times 2\sigma_{dist})^2}
2\sigma_{Latitude} = \sqrt{(\sin(89^\circ 30' 45'') \times 0.117)^2 + (\cos(89^\circ 30' 45'') \times 0.023)^2} = 0.117
2\sigma_{Latitude} = \sqrt{(\sin(23^\circ 30' 45'') \times 0.117)^2 + (\cos(23^\circ 30' 45'') \times 0.023)^2} = 0.051

The departure confidence will approximately equal

2\sigma_{Departure} = \sqrt{(\cos(Az) \times 2\sigma_{angle})^2 + (\sin(Az) \times 2\sigma_{dist})^2}
2\sigma_{Departure} = \sqrt{(\cos(89^\circ 30' 45'') \times 0.117)^2 + (\sin(89^\circ 30' 45'') \times 0.023)^2} = 0.023
2\sigma_{Departure} = \sqrt{(\cos(23^\circ 30' 45'') \times 0.117)^2 + (\sin(23^\circ 30' 45'') \times 0.023)^2} = 0.108
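A minimal Python sketch of both confidence formulas, confirming that the larger (angular) error falls across the nearly east-west line and the smaller (distance) error along it:

import math

sig_dist, sig_ang = 0.023, 0.117             # 2-sigma values from above, ft
for d, m, s in [(89, 30, 45), (23, 30, 45)]:
    az = math.radians(d + m / 60 + s / 3600)
    sig_lat = math.hypot(math.sin(az) * sig_ang, math.cos(az) * sig_dist)
    sig_dep = math.hypot(math.cos(az) * sig_ang, math.sin(az) * sig_dist)
    print(f"Az {d}°{m}'{s}\": 2-sigma lat = ±{sig_lat:.3f} ft, dep = ±{sig_dep:.3f} ft")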

In the above example, we again ignored all the other random errors associated with the measurements. It is clear that trying to resolve the position error for all our measurements would be tedious and blunder-prone. We also make additional redundant measurements in order to “strengthen” our measurements. We have a number of simple adjustments we can apply to our measurements in order to adjust for our errors, but the most mathematically rigorous is the “Least Squares” adjustment, which mathematically determines the solution that provides the smallest sum of squares of the residuals of all our measurements. The process of a least squares adjustment requires an understanding of matrix mathematics as well as statistics. It is beyond the scope of this short discussion on error theory, as well as the Land Surveyor Exam, to cover the concepts and process of least squares adjustment.

There are a couple of concepts in least squares adjustments that should be discussed. The first is weighting measurements. Weighting measurements is nothing more than applying a factor to a measurement based on the quality of the measurement. On a long traverse, locating some of the points with static GPS in addition to conventional means would provide multiple measurements to those points. Current practice indicates that static GPS is more precise than conventional survey methods on long traverses; therefore we would want the positions determined by GPS to have more influence, or “weight”, in the final solution. Weighting gives relative value to measurements. Often, though it is not required, a weight of 1 is given to all standard measurements; other weights will then be a little over or under 1 as measurements vary from standard. An angle measurement with short sights would have less weight than an angle with long sights.
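As a minimal sketch of the idea, with hypothetical values and assuming the common convention that a weight is inversely proportional to the variance of the measurement (w = 1/σ²):

values = [(1234.56, 0.02), (1234.62, 0.05)]  # hypothetical (measurement ft, sigma ft)
weights = [1 / s ** 2 for _, s in values]
wmean = sum(w * v for w, (v, _) in zip(weights, values)) / sum(weights)
print(f"weighted mean = {wmean:.3f} ft")     # pulled toward the more precise value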

The other concept that occurs in least squares is error ellipses. An error ellipse is an ellipse drawn around a point to show the positional error. It is an ellipse because the error in the northing does not necessarily match the error in the easting. Looking at the previous example of determining the errors of the latitudes and departures, the error ellipse would have one semi-axis along the azimuth of the line equal to the 2-sigma of the distance (0.023 feet) and the other semi-axis perpendicular to the line equal to the 2-sigma of the angle (0.117 feet). Since the angular error is the larger of the two here, the major axis of the ellipse is perpendicular to the line.

Questions

Once you have tried answering the questions you can go HERE to see what answers I came up with

Question 1

Six different crews were assigned to measure a portion of a single township line. All the section corners were found along the line, and it was determined that the corners were all within 0° 00’ 05” of being on a straight line. Each crew performed a series of measurements for their portion of the line. They all used the same equipment and provided the office with the following information:

Crew    Mean Distance    Standard Deviation
1       5280.12’         ±0.05’
2       5279.58’         ±0.08’
3       5281.07’         ±0.12’
4       5280.79’         ±0.02’
5       5282.03’         ±0.15’
6       5279.68’         ±0.03’

Determine the most probable value for the distance of the township line, and the 2-sigma precision of that distance.

Question 2

For a triangulation survey, 12 measurements of an angle were taken between point “A” and point “B”; for clarity, only the seconds are listed. They are 52.4”, 52.8”, 51.6”, 51.2”, 50.6”, 52.7”, 49.8”, 50.3”, 51.8”, 50.0”, 53.0” and 51.7”. The FGCC standards for a second order, class II triangulation survey require the standard deviation of the mean to be within 0.8”. Determine if this angle meets that requirement.

Question 3

The area of a rectangular field is determined using a 100-foot steel tape that has an accuracy of 0.03’ (2-sigma). One side of the field is measured as 2304.56 feet, and the other side is 996.32 feet. Assuming it is a perfect rectangle, determine the most probable value for the area and its error (2-sigma).

