Measurement Theory

Introduction to measurement theory

Measurement is the process of associating numbers with physical quantities and phenomena. The process is accomplished by comparing a measured value with some known quantity (a standard) of the same kind. The subject has become of vital importance in science, engineering and much everyday activity.

While measurement theory began with the Greeks in the 4th century BC, the first useful work appeared in the 18th century, when the English mathematician Thomas Simpson wrote on observation error - perhaps the most important single aspect of measurement theory.

Standards

The acceptance of measurement standards is vital if measurements are to be meaningful to others. The first known widely used standard was the Egyptian cubit. It was a standard of length (0.4511 meters) that came into use around 3000 BC and was maintained as a length of black granite. This was effectively the primary standard against which other cubit sticks (secondary standards) were regularly compared. The definition of the cubit changed from time to time as a result of political and national issues.

In principle the cubit standard was similar to the meter standard up until 1960. The meter was adopted by France in 1795, and for a long time was a length of platinum-iridium bar. From 1960 the meter was redefined in terms of the wavelength of krypton light (the meter was set to be 1,650,763.73 wavelengths in vacuum of the orange-red line of the spectrum of krypton-86).

Système International

Similarly, other standards have evolved, with the Système International (SI) now having by far the widest acceptance, especially amongst the scientific community. Interestingly, the unit of mass - the kilogram - is the only base standard that is still represented by a physical object. An international effort is underway to establish a new and absolute definition of the kilogram in terms of a number of silicon atoms. Physically, a very precisely manufactured sphere of pure single-crystal silicon (as used in the microelectronics industry) will become the practical standard, but it will be defined in terms of the number of silicon atoms it contains. Known as the "Avogadro Project", the aim is to characterize the silicon density with an error of less than 20 parts per billion.

The advantage of linking standards to fundamental atomic characteristics is that they may be reproduced from a communicated description without the need to transport physical objects. The atomic characteristics are believed to be the same across the universe, so in theory the standards could be passed to a civilization in another galaxy by a radio message.

The seven base units of SI

Seven base units are defined under the Système International: six were adopted in 1960, and the mole was added in 1971. All other units are derived mathematically from these units.

length: meter (m), dimension L. Equal to 1,650,763.73 wavelengths in vacuum of the orange-red line of the krypton-86 spectrum.

mass: kilogram (kg), dimension M. A cylinder of platinum-iridium alloy kept in France, along with a number of copies. May be replaced by an atomic standard within the next ten years.

time: second (s), dimension T. The time taken for 9,192,631,770 cycles of the resonance vibration of the caesium-133 atom.

temperature: kelvin (K), dimension K. Absolute zero is defined as 0 kelvin and the triple point of water as 273.16 kelvins. See ITS-90 for how the scale is interpolated and extended.

luminosity: candela (cd), dimension J. The intensity of a light source (frequency 5.40 x 10^14 Hz) that gives a radiant intensity of 1/683 watts per steradian in a given direction.

electric current: ampere (A), dimension I. The current that produces a force of 2 x 10^-7 newtons per meter between a pair of infinitely long parallel wires 1 meter apart in a vacuum.

amount of substance: mole (mol), dimension N. The number of elementary entities of a substance equal to the number of atoms in 0.012 kg of carbon-12.

Two supplementary units of SI

In addition to the seven base SI units, two supplementary units have been defined. These are both associated with angle.

angle: radian (rad), dimensionless. The angle subtended at the center of a circle by an arc of the same length as the radius.

solid angle: steradian (sr), dimensionless. The solid angle subtended at the center of a sphere by an area on its surface equal to the square of its radius.

Derived standards

A selection of other units derived from the base SI units is included in the table below. There are 17 units with internationally accepted special names, and many others whose names are based on their base units.

acceleration: meter per second squared (m.s^-2), dimension LT^-2. The rate of change of velocity of 1 meter per second per second.

area: square meter (m^2), dimension L^2. The product of two orthogonal (right-angle) lengths in meters.

volume: cubic meter (m^3), dimension L^3. The product of three mutually orthogonal (right-angle) lengths in meters.

force: newton (N), dimension MLT^-2. The force required to accelerate a 1 kilogram mass at 1 meter per second per second.

charge: coulomb (C), dimension IT. The quantity of electricity carried by a current of 1 ampere for 1 second.

energy: joule (J), dimension ML^2T^-2. The work done by a force of 1 newton moving through a distance of 1 meter in the direction of the force.

power: watt (W), dimension ML^2T^-3. Energy expenditure at a rate of 1 joule per second.

resistance: ohm (Ω), dimension ML^2T^-3I^-2. The resistance that produces a 1 volt drop with a 1 ampere current.

frequency: hertz (Hz), dimension T^-1. The number of cycles in 1 second.

pressure: pascal (Pa), dimension ML^-1T^-2. The pressure due to a force of 1 newton applied over an area of 1 square meter.

velocity: meter per second (m.s^-1), dimension LT^-1. The rate of movement in a given direction of 1 meter in 1 second.

potential (emf): volt (V), dimension ML^2T^-3I^-1. The potential when 1 joule of work is done in making 1 coulomb of electricity flow.

Measurement error

Practically all measurements of continuous quantities involve errors. Understanding the nature and source of these errors can help in reducing their impact and in many instances prevent the drawing of incorrect conclusions.

In earlier times it was thought that errors in measurement could be eliminated by improvements in technique and equipment; however, most scientists now accept that this is not the case. Today, nearly all scientific and engineering results are routinely reported with likely error bounds. The types of errors that must be understood include instrumental errors, systematic errors, random errors, sampling errors and indirect errors.

Systematic error

An error that can be predicted, and hence eventually removed from the data, is a systematic error. Systematic errors may change with time, so it is important that sufficient reference data be collected with the data set to allow the systematic errors to be quantified and subtracted from it.
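As a minimal sketch of this idea in Python (the sensor readings, the reference value and the 5-unit offset below are all made up for illustration), a constant systematic offset can be estimated from repeated readings of a known reference standard and subtracted from the data set:

```python
# Sketch: removing a constant systematic (offset) error using interleaved
# readings of a known reference standard. All numbers are illustrative.

def correct_systematic(readings, reference_readings, reference_true):
    """Estimate the mean offset seen on the reference and subtract it."""
    offset = sum(r - reference_true for r in reference_readings) / len(reference_readings)
    return [x - offset for x in readings]

# Suppose the instrument reads about 5.0 units high on everything.
raw = [25.0, 26.0, 27.0]           # measurements of the unknown quantity
ref = [105.1, 104.9, 105.0]        # repeated readings of a 100.0 standard

corrected = correct_systematic(raw, ref, reference_true=100.0)
print(corrected)  # ≈ [20.0, 21.0, 22.0]
```

Note that this only works if the offset is stable over the measurement period; a drifting systematic error needs reference readings spread through the data set, as the paragraph above suggests.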

Instrumental errors

Examples of systematic errors in measuring equipment include calibration errors, input zero drift and gain drift. Measuring equipment can also introduce nonsystematic errors. Instrument errors are considered in more detail in the Measurement Methods pages.
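Zero and gain errors of this kind can often be corrected together with a two-point calibration. The Python sketch below is illustrative only, with hypothetical calibration readings rather than any particular instrument's behavior:

```python
# Sketch: two-point calibration correcting both zero (offset) drift and
# gain drift. Calibration readings are hypothetical.

def two_point_calibration(read_lo, read_hi, true_lo, true_hi):
    """Return a function mapping raw readings to corrected values."""
    gain = (true_hi - true_lo) / (read_hi - read_lo)
    offset = true_lo - gain * read_lo
    return lambda raw: gain * raw + offset

# Instrument reads 2.0 at a 0.0 standard and 102.0 at a 100.0 standard,
# i.e. a +2.0 zero error (the gain in this example happens to be exact).
correct = two_point_calibration(2.0, 102.0, 0.0, 100.0)
print(correct(52.0))  # → 50.0
```

Two points only correct a linear error; a nonlinear instrument response needs more calibration points and a curve fit.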

Sensor placement errors

An often overlooked source of systematic error is the location of the sensor. Errors can be caused by gradients in the measured parameter, or by the impact of other parameters on the sensor. For example, in precision air temperature measurement it is likely that temperature gradients exist, or that radiant energy is heating the sensor directly and giving an erroneous reading - so just what is being measured?

Indirect errors

These errors are associated with calibration and conversions. Generally they are small, but they can become significant in some types of measurement, for example light intensity.

Nonsystematic errors

A nonsystematic error is one that cannot be predicted, due to randomness in its nature. Nonsystematic errors limit the ultimate accuracy of a measurement process through a masking effect that leads to information loss. They include:

Quantizing error

All measuring equipment has a resolution limit; input variations below this limit cannot be detected or measured, leading to unrecoverable information loss. In systems with evenly spaced quantization boundaries, quantization errors can be reduced by adding noise to the input and averaging many samples.
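The dither-and-average effect can be sketched in Python. The quantizer step size, the input value and the uniform dither below are all illustrative assumptions:

```python
import random

# Sketch: recovering sub-resolution information by adding noise (dither)
# before quantization and averaging many samples. Values are illustrative.

STEP = 1.0          # quantizer resolution: one unit per step
TRUE_VALUE = 3.3    # lies between two quantization levels

def quantize(x, step=STEP):
    return round(x / step) * step

# Without dither, every sample quantizes to the same level.
plain = quantize(TRUE_VALUE)

# With uniform dither spanning one step, the long-run average of the
# quantized samples converges on the true value.
random.seed(1)
n = 100_000
dithered = sum(quantize(TRUE_VALUE + random.uniform(-STEP / 2, STEP / 2))
               for _ in range(n)) / n

print(plain)               # 3.0  (information below 1 step is lost)
print(round(dithered, 1))  # ≈ 3.3
```

This is exactly why some data acquisition systems deliberately inject noise ahead of the analog-to-digital converter when the input is very stable.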

Rounding and truncation errors

In processing the measuring system's readings, the precision of calculation (the number of significant digits carried) can compromise results.
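A classic Python illustration: repeatedly adding a value that has no exact binary floating-point representation accumulates a small truncation error, even though each individual step looks harmless:

```python
# Sketch: rounding/truncation error in finite-precision arithmetic.
# 0.1 cannot be represented exactly in binary floating point, so the
# representation error accumulates with each addition.

total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)  # False
print(total)         # 0.9999999999999999
```

The error here is tiny, but in long processing chains (integrations, running totals over millions of samples) such errors can grow large enough to matter.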

Sampling errors

The sampling frequency and the sample window time can affect accuracy, especially for changing or noisy quantities.
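Undersampling a changing quantity is one such error. In the Python sketch below (frequencies chosen purely for illustration), a 9 Hz sine sampled at only 10 Hz produces sample values identical, apart from sign, to those of a 1 Hz sine, so the two signals cannot be told apart from the samples:

```python
import math

# Sketch: aliasing caused by too low a sampling rate. Sampling a 9 Hz
# sine at 10 Hz (below the 18 Hz Nyquist rate) yields samples that match
# those of a 1 Hz sine, apart from sign.

signal_hz, sample_hz = 9.0, 10.0

nine_hz = [math.sin(2 * math.pi * signal_hz * n / sample_hz) for n in range(20)]
one_hz = [math.sin(2 * math.pi * (sample_hz - signal_hz) * n / sample_hz) for n in range(20)]

# Every 9 Hz sample is the negative of the corresponding 1 Hz sample.
worst = max(abs(a + b) for a, b in zip(nine_hz, one_hz))
print(worst)  # ~0 (floating-point rounding only)
```

To avoid this class of error, the sampling rate must exceed twice the highest frequency present in the measured signal, or the signal must be low-pass filtered before sampling.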

Random or noise errors

Random noise is always present in measurement, and is sometimes the dominant source of error. Depending on the noise spectrum, the noise error can generally be reduced by averaging many readings.
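For uncorrelated noise, the averaging benefit goes roughly as the square root of the number of readings. The Python sketch below simulates this with made-up numbers (a true value of 20.0 and Gaussian noise of standard deviation 1.0):

```python
import random
import statistics

# Sketch: averaging N noisy readings shrinks the random error roughly
# as 1/sqrt(N), for uncorrelated (white) noise. Values are illustrative.

random.seed(42)
TRUE = 20.0
NOISE_SD = 1.0

def mean_of_n(n):
    """One averaged measurement built from n noisy readings."""
    return sum(TRUE + random.gauss(0, NOISE_SD) for _ in range(n)) / n

# Spread of the averaged estimate over many trials, for two sample sizes.
spread_1 = statistics.stdev(mean_of_n(1) for _ in range(2000))
spread_100 = statistics.stdev(mean_of_n(100) for _ in range(2000))

print(round(spread_1, 1))               # ≈ 1.0
print(round(spread_1 / spread_100, 1))  # ≈ 10, i.e. sqrt(100)
```

The caveat about the noise spectrum matters: correlated (e.g. 1/f) noise does not average down this quickly.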

Sensor cross sensitivity errors

Few measuring systems respond only to the parameter being measured. All sensors have a degree of sensitivity to other parameters. For example a temperature sensor's output may change with pressure, humidity and/or ionizing radiation.

Dimensions

Dimensions can be a useful means of analyzing measurement units. A full description is beyond the scope of this document, although it will be added at a later date.
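As a small illustration of the idea, dimensions can be represented as maps from base-dimension symbols to exponents, so that derived units combine by adding or subtracting exponents. The Python below is an ad-hoc sketch of this bookkeeping, not an SI library:

```python
# Sketch: dimensional analysis with dimensions stored as exponent maps
# over the base dimensions M (mass), L (length), T (time), I (current).

def combine(a, b, sign=1):
    """Multiply (sign=+1) or divide (sign=-1) two dimension maps."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + sign * v
    return {k: v for k, v in out.items() if v != 0}

METER, KILOGRAM, SECOND = {"L": 1}, {"M": 1}, {"T": 1}

# Build up derived dimensions step by step, as in the tables above.
velocity = combine(METER, SECOND, sign=-1)         # LT^-1
acceleration = combine(velocity, SECOND, sign=-1)  # LT^-2
force = combine(KILOGRAM, acceleration)            # MLT^-2  (newton)
energy = combine(force, METER)                     # ML^2T^-2 (joule)
power = combine(energy, SECOND, sign=-1)           # ML^2T^-3 (watt)

print(power == {"M": 1, "L": 2, "T": -3})  # True
```

Checks like this catch unit mistakes early: an equation whose two sides reduce to different exponent maps cannot be physically correct.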