Amount of Substances

Upload: aida-jun

Post on 14-Apr-2018


  • 7/30/2019 Amount of Substances


    AMOUNT OF SUBSTANCES

    Definition:

    Amount of substance is a standards-defined quantity that measures the size of an ensemble of
    elementary entities, such as atoms, molecules, electrons, and other particles. It is sometimes
    referred to as chemical amount. The International System of Units (SI) defines the amount of
    substance to be proportional to the number of elementary entities present. The SI unit for
    amount of substance is the mole, with the unit symbol mol. The mole is defined as the amount
    of substance that contains as many elementary entities as there are atoms in 12 g of the
    isotope carbon-12. This number is called Avogadro's number and has the value
    6.02214179(30) × 10²³.[2] It is the numerical value of the Avogadro constant, which has the
    unit 1/mol and relates the molar mass of an amount of substance to its mass. Amount of
    substance appears in thermodynamic relations such as the ideal gas law, and in stoichiometric
    relations between reacting molecules as in the law of multiple proportions. The only other unit
    of amount of substance in current use is the pound-mole (symbol lb-mol), which is sometimes
    used in chemical engineering in the United States.[3][4] One pound-mole is exactly
    453.59237 mol.
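The pound-mole figure quoted above is a fixed conversion factor, which can be expressed as a one-line helper; this is a minimal sketch, with the constant and function names chosen for illustration:

```python
# Conversion between pound-moles and moles, using the exact factor
# quoted in the text: 1 lb-mol = 453.59237 mol.
MOL_PER_LBMOL = 453.59237

def lbmol_to_mol(n_lbmol):
    """Convert an amount of substance from lb-mol to mol."""
    return n_lbmol * MOL_PER_LBMOL

print(lbmol_to_mol(1.0))  # 453.59237
```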

    Historical Development:

    The alchemists, and especially the early metallurgists, probably had some notion of amount of

    substance, but there are no surviving records of any generalization of the idea beyond a set of

    recipes. In 1758, Mikhail Lomonosov questioned the idea that mass was the only measure of

    the quantity of matter, but he did so only in relation to his theories on gravitation. The

    development of the concept of amount of substance was coincidental with, and vital to, the

    birth of modern chemistry.

    1777: Wenzel publishes Lessons on Affinity, in which he demonstrates that the proportions of

    the "base component" and the "acid component" (cation and anion in modern terminology)

    remain the same during reactions between two neutral salts.

    1789: Lavoisier publishes Treatise of Elementary Chemistry, proposing the concept of a

    chemical element and clarifying the Law of conservation of mass for chemical reactions.

    1792: Richter publishes the first volume of Stoichiometry or the Art of Measuring the Chemical

    Elements (publication of subsequent volumes continues until 1802). The term "stoichiometry"

    used for the first time. The first tables of equivalent weights are published for acid-base

    reactions. Richter also notes that, for a given acid, the equivalent mass of the acid is

    proportional to the mass of oxygen in the base.

    1794: Proust's Law of definite proportions generalizes the concept of equivalent weights to all

    types of chemical reaction, not simply acid-base reactions.


    1805: Dalton publishes his first paper on modern atomic theory, including a "Table of the

    relative weights of the ultimate particles of gaseous and other bodies".

    The concept of atoms raised the question of their weight. While many were skeptical about the

    reality of atoms, chemists quickly found atomic weights to be an invaluable tool in expressing

    stoichiometric relationships.

    1808: Publication of Dalton's A New System of Chemical Philosophy, containing the first table of atomic weights (based on H = 1).

    1809: Gay-Lussac's Law of combining volumes, stating an integer relationship between the

    volumes of reactants and products in the chemical reactions of gases.

    1811: Avogadro hypothesizes that equal volumes of different gases contain equal numbers of

    particles, now known as Avogadro's law.

    1813/1814: Berzelius publishes the first of several tables of atomic weights based on the scale

    of O = 100.

    1815: Prout publishes his hypothesis that all atomic weights are integer multiples of the atomic

    weight of hydrogen. The hypothesis is later abandoned given the observed atomic weight of

    chlorine (approx. 35.5 relative to hydrogen).

    1819: Dulong-Petit law relating the atomic weight of a solid element to its specific heat

    capacity.

    1819: Mitscherlich's work on crystal isomorphism allows many chemical formulae to be

    clarified, resolving several ambiguities in the calculation of atomic weights.

    1834: Clapeyron states the ideal gas law.

    The ideal gas law was the first to be discovered of many relationships between the number of

    atoms or molecules in a system and other physical properties of the system, apart from its

    mass. However, this was not sufficient to convince all scientists of the existence of atoms and

    molecules; many considered it simply a useful tool for calculation.

    1834: Faraday states his Laws of electrolysis, in particular that "the chemical decomposing

    action of a current is constant for a constant quantity of electricity".

    1856: Krönig derives the ideal gas law from kinetic theory. Clausius publishes an independent

    derivation the following year.

    1860: The Karlsruhe Congress debates the relation between "physical molecules", "chemical

    molecules" and atoms, without reaching consensus.


    1865: Loschmidt makes the first estimate of the size of gas molecules and hence of number of

    molecules in a given volume of gas, now known as the Loschmidt constant.

    1886: Van't Hoff demonstrates the similarities in behaviour between dilute solutions and ideal

    gases.

    1886: Eugen Goldstein observed discrete particle rays in gas discharges that laid the foundation

    of mass spectrometry, a tool later used to establish the masses of atoms and molecules.

    1887: Arrhenius describes the dissociation of electrolyte in solution, resolving one of the

    problems in the study of colligative properties.

    1893: First recorded use of the term mole to describe a unit of amount of substance by Ostwald

    in a university textbook.

    1897: First recorded use of the term mole in English.

    By the turn of the twentieth century, the concept of atomic and molecular entities was

    generally accepted, but many questions remained, not least the size of atoms and their number in a given sample. The concurrent development of mass spectrometry, starting in 1886,

    supported the concept of atomic and molecular mass and provided a tool of direct relative

    measurement.

    1905: Einstein's paper on Brownian motion dispels any last doubts on the physical reality of

    atoms, and opens the way for an accurate determination of their mass.

    1909: Perrin coins the name Avogadro constant and estimates its value.

    1913: Discovery of isotopes of non-radioactive elements by Soddy and Thomson.

    1914: Richards receives the Nobel Prize in Chemistry for "his determinations of the atomic

    weight of a large number of elements".

    1920: Aston proposes the whole number rule, an updated version of Prout's hypothesis.

    1921: Soddy receives the Nobel Prize in Chemistry "for his work on the chemistry of radioactive

    substances and investigations into isotopes".

    1922: Aston receives the Nobel Prize in Chemistry "for his discovery of isotopes in a large

    number of non-radioactive elements, and for his whole-number rule".

    1926: Perrin receives the Nobel Prize in Physics, in part for his work in measuring Avogadro's

    constant.

    1959/1960: Unified atomic weight scale based on ¹²C = 12 adopted by IUPAP and IUPAC.


    1968: The mole is recommended for inclusion in the International System of Units (SI) by the

    International Committee for Weights and Measures (CIPM).

    1972: The mole is approved as the SI base unit of amount of substance.

    Quantity Type:

    Amount of substance is one of the base quantities of the SI. It appears in thermodynamic
    relations, such as the ideal gas law, and in stoichiometric relations between reacting
    molecules, as in the Law of Multiple Proportions. Familiar equations involving n are thus

    pV = nRT (1)

    for an ideal gas, and the equation

    c = n/V (2)

    for the amount-of-substance concentration (usually called simply the concentration, the molar
    concentration, or the amount concentration) of a solution. Here, V is the volume of a solution
    containing the amount of solute n. Another important relation is that between amount of
    substance n and mass m for a pure sample,

    n = m/M (3)

    where M is the mass per amount of substance, usually called the molar mass. Similarly,
    amount concentration c (SI unit mol/dm³) may be related to mass concentration ρ (SI unit
    g/dm³) by the equation

    c = ρ/M (4)

    An important application of the quantity n in chemistry is to the way in which molecules react

    in a titration (or more generally in any chemical reaction); molecules (or ions or entities) of X

    always react with molecules (or ions or entities) of Y in a simple ratio, such as 1:1, or 1:2, or 2:1,

    and so forth. This is the most fundamental concept of chemical reactions.
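Equations (1)-(3) can be checked numerically. The following sketch uses illustrative values (58.44 g of NaCl with an assumed molar mass of 58.44 g/mol, and roughly molar-volume gas conditions), none of which come from the text:

```python
# Worked numeric example of equations (1)-(3) from the text.
R = 8.314  # molar gas constant, J/(mol K)

def amount_from_mass(m_grams, molar_mass):
    """Equation (3): n = m/M, in mol."""
    return m_grams / molar_mass

def concentration(n_mol, volume_dm3):
    """Equation (2): c = n/V, in mol/dm^3."""
    return n_mol / volume_dm3

def pressure_ideal_gas(n_mol, volume_m3, temp_k):
    """Equation (1) rearranged: p = nRT/V, in Pa."""
    return n_mol * R * temp_k / volume_m3

n = amount_from_mass(58.44, 58.44)   # 1 mol of NaCl (illustrative)
print(n)                             # 1.0
print(concentration(n, 2.0))         # 0.5 mol/dm^3
print(pressure_ideal_gas(1.0, 0.0224, 273.15))  # close to 1 atm (~101 kPa)
```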



    Dimension of the Quantity:

    All other quantities are derived quantities, which may be written in terms of the base quantities

    by the equations of physics. The dimensions of the derived quantities are written as products of

    powers of the dimensions of the base quantities using the equations that relate the derived

    quantities to the base quantities. In general the dimension of any quantity Q is written in the
    form of a dimensional product,

    dim Q = L^α M^β T^γ I^δ Θ^ε N^ζ J^η

    where the exponents α, β, γ, δ, ε, ζ and η, which are generally small integers which can

    be positive, negative or zero, are called the dimensional exponents. The dimension of a derived


    quantity provides the same information about the relation of that quantity to the base

    quantities as is provided by the SI unit of the derived quantity as a product of powers of the SI

    base units.
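The dimensional-product rule above can be illustrated by representing dim Q as a tuple of the seven exponents over the base dimensions (L, M, T, I, Θ, N, J); multiplying quantities then adds exponents and dividing subtracts them. This representation and the named quantities are illustrative, not from the text:

```python
# Dimensions as exponent tuples over the SI base dimensions
# (L, M, T, I, Theta, N, J).
BASE = ("L", "M", "T", "I", "Theta", "N", "J")

def dim_product(a, b):
    """dim(Q1 * Q2): dimensional exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def dim_ratio(a, b):
    """dim(Q1 / Q2): dimensional exponents subtract."""
    return tuple(x - y for x, y in zip(a, b))

ENERGY = (2, 1, -2, 0, 0, 0, 0)   # joule = kg m^2 s^-2
AMOUNT = (0, 0, 0, 0, 0, 1, 0)    # mole
TEMP   = (0, 0, 0, 0, 1, 0, 0)    # kelvin

# Molar gas constant R = energy / (amount * temperature):
dim_R = dim_ratio(ENERGY, dim_product(AMOUNT, TEMP))
print(dim_R)  # (2, 1, -2, 0, -1, -1, 0), i.e. J/(mol K)
```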

    Primary, Secondary and Working Standards/References Material:

    Instrument Calibration Chain

    The calibration facilities provided within the instrumentation department of a company provide

    the first link in the calibration chain. Instruments used for calibration at this level are known as

    working standards. As such working standard instruments are kept by the instrumentation

    department of a company solely for calibration duties, and for no other purpose, then it can be

    assumed that they will maintain their accuracy over a reasonable period of time because use-

    related deterioration in accuracy is largely eliminated. However, over the longer term, the

    characteristics of even such standard instruments will drift, mainly due to ageing effects in

    components within them. Therefore, over this longer term, a programme must be instituted for

    calibrating working standard instruments at appropriate intervals of time against instruments

    of yet higher accuracy. The instrument used for calibrating working standard instruments is

    known as a secondary reference standard. This must obviously be a very well-engineered

    instrument that gives high accuracy and is stabilized against drift in its performance with time.

    This implies that it will be an expensive instrument to buy. It also requires that the

    environmental conditions in which it is used be carefully controlled in respect of ambient

    temperature, humidity etc.

    A calibration certificate normally records:

    - the identification of the equipment calibrated
    - the calibration results obtained
    - the measurement uncertainty
    - any use limitations on the equipment calibrated


    - the date of calibration
    - the authority under which the certificate is issued.

    The establishment of a company Standards Laboratory to provide a calibration facility

    of the required quality is economically viable only in the case of very large companies

    where large numbers of instruments need to be calibrated across several factories. In

    the case of small to medium size companies, the cost of buying and maintaining such

    equipment is not justified. Instead, they would normally use the calibration service

    provided by various companies that specialize in offering the services of a Standards Laboratory.

    What these specialist calibration companies effectively do is to share out the high cost

    of providing this highly accurate but infrequently used calibration service over a large

    number of companies. Such Standards Laboratories are closely monitored by National

    Standards Organizations.

    Primary reference standards describe the highest level of accuracy that is achievable in the

    measurement of any particular physical quantity. All items of equipment used in Standards

    Laboratories as secondary reference standards have to be calibrated themselves against

    primary reference standards at appropriate intervals of time. This procedure is acknowledged

    by the issue of a calibration certificate in the standard way. National Standards Organizations maintain suitable facilities for this calibration, which in the case of the United Kingdom are at

    the National Physical Laboratory. The equivalent National Standards Organization in the United

    States of America is the National Bureau of Standards (now NIST). In certain cases, such primary reference

    standards can be located outside National Standards Organizations. For instance, the primary

    reference standard for dimension measurement is defined by the wavelength of the orange-red

    line of krypton light, and it can therefore be realized in any laboratory equipped with an

    interferometer. In certain cases (e.g. the measurement of viscosity), such primary reference

    standards are not available and reference standards for calibration are achieved by

    collaboration between several National Standards Organizations who perform measurements

    on identical samples under controlled conditions (ISO 5725, 1998).

    Linearity

    It is normally desirable that the output reading of an instrument is linearly proportional to the

    quantity being measured. The Xs in Figure 2.6 mark the typical output readings of an

    instrument when a sequence of input quantities is applied to it. Normal procedure is to draw

    a good-fit straight line through the Xs, as shown in the figure. (Whilst this can often be done

    with reasonable accuracy by eye, it is always preferable to apply a mathematical least-squares

    line-fitting technique, as described in

    Chapter 11.) The non-linearity is then defined as the maximum deviation of any of the output

    readings marked X from this straight line. Non-linearity is usually expressed as a percentage of

    full-scale reading.
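The least-squares procedure described above can be sketched as follows; the calibration readings are made-up illustration data, and the helper name is an assumption:

```python
# Fit a least-squares straight line y = a*x + b to calibration points
# (the "Xs"), then report non-linearity as the maximum deviation of any
# reading from the line, expressed as a percentage of full-scale reading.
def non_linearity_percent_fs(inputs, outputs):
    n = len(inputs)
    mean_x = sum(inputs) / n
    mean_y = sum(outputs) / n
    # Least-squares slope and intercept.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs)) \
        / sum((x - mean_x) ** 2 for x in inputs)
    b = mean_y - a * mean_x
    max_dev = max(abs(y - (a * x + b)) for x, y in zip(inputs, outputs))
    return 100.0 * max_dev / max(outputs)  # percent of full-scale reading

x = [0, 1, 2, 3, 4, 5]
y = [0.0, 1.0, 2.1, 2.9, 4.0, 5.0]
print(round(non_linearity_percent_fs(x, y), 2))  # 1.94 (% f.s.)
```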

    Hysteresis

    The output characteristics of an instrument that exhibits hysteresis are shown below. If the input

    measured quantity to the instrument is steadily increased from a negative value, the output


    reading varies in the manner shown in curve (a). If the input variable is then steadily decreased,

    the output varies in the manner shown in curve (b). The non-coincidence between these

    loading and unloading curves is known as hysteresis. Two quantities are defined, maximum

    input hysteresis and maximum output hysteresis. These are normally expressed as a percentage

    of the full-scale input or output reading respectively.

    Instrument characteristics with hysteresis.
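The output-hysteresis figure defined above can be computed from loading and unloading readings taken at the same input points; the sample curves below are made-up illustration data:

```python
# Maximum output hysteresis as a percentage of full-scale output:
# the largest gap between the loading and unloading curves.
def max_output_hysteresis_percent_fs(loading, unloading, full_scale_output):
    max_diff = max(abs(a - b) for a, b in zip(loading, unloading))
    return 100.0 * max_diff / full_scale_output

load = [0.0, 1.8, 4.0, 6.2, 8.5, 10.0]    # readings while increasing input
unload = [0.5, 2.4, 4.7, 6.9, 9.0, 10.0]  # readings while decreasing input
print(round(max_output_hysteresis_percent_fs(load, unload, 10.0), 2))  # 7.0
```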

    Sensitivity of measurement

    The sensitivity of measurement is a measure of the change in instrument output that

    occurs when the quantity being measured changes by a given amount. Thus, sensitivity

    is the ratio:

    sensitivity = scale deflection / value of measurand producing deflection

    The sensitivity of measurement is therefore the slope of the straight line drawn. If, for example,

    a pressure of 2 bar produces a deflection of 10 degrees in a pressure transducer, the sensitivity

    of the instrument is 5 degrees/bar (assuming that the deflection is zero with zero pressure

    applied).
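The worked example above in code form, using the text's own figures (10 degrees of deflection for 2 bar):

```python
# Sensitivity as defined above: scale deflection divided by the value of
# the measurand producing that deflection.
def sensitivity(deflection, measurand_value):
    return deflection / measurand_value

print(sensitivity(10.0, 2.0))  # 5.0 degrees/bar, as in the text
```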


    Instrument output characteristics

    Resolution

    When an instrument is showing a particular output reading, there is a lower limit on the

    magnitude of the change in the input measured quantity that produces an observable

    change in the instrument output. Like threshold, resolution is sometimes specified as an

    absolute value and sometimes as a percentage of f.s. deflection. One of the major factors

    influencing the resolution of an instrument is how finely its output scale is divided into

    subdivisions. Using a car speedometer as an example again, this has subdivisions of

    typically 20 km/h. This means that when the needle is between the scale markings,

    we cannot estimate speed more accurately than to the nearest 5 km/h. This figure of 5 km/h thus represents the resolution of the instrument.

    Accuracy and inaccuracy (measurement uncertainty)

    The accuracy of an instrument is a measure of how close the output reading of the instrument

    is to the correct value. In practice, it is more usual to quote the inaccuracy figure rather than

    the accuracy figure for an instrument. Inaccuracy is the extent to which a reading might be

    wrong, and is often quoted as a percentage of the full-scale

    (f.s.) reading of an instrument. If, for example, a pressure gauge of range 0 to 10 bar has a

    quoted inaccuracy of ±1.0% f.s. (±1% of full-scale reading), then the maximum error to be

    expected in any reading is 0.1 bar. This means that when the instrument is reading 1.0 bar, the

    possible error is 10% of this value. For this reason, it is an important system design rule that

    instruments are chosen such that their range is appropriate to the spread of values being

    measured, in order that the best possible accuracy is maintained in instrument readings. Thus,

    if we were measuring pressures with expected values between 0 and 1 bar, we would not use


    an instrument with a range of 0 to 10 bar. The term measurement uncertainty is frequently used

    in place of inaccuracy.
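The gauge example above reduces to two small formulas; this sketch uses the text's own figures (0 to 10 bar range, ±1.0% f.s.), with function names chosen for illustration:

```python
# Maximum error implied by an inaccuracy quoted as a percentage of
# full-scale (f.s.) reading, and that error as a percentage of the
# actual reading.
def max_error(full_scale, inaccuracy_percent_fs):
    return full_scale * inaccuracy_percent_fs / 100.0

def error_percent_of_reading(reading, full_scale, inaccuracy_percent_fs):
    return 100.0 * max_error(full_scale, inaccuracy_percent_fs) / reading

print(max_error(10.0, 1.0))                      # 0.1 bar
print(error_percent_of_reading(1.0, 10.0, 1.0))  # 10.0 percent of reading
```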

    Traceability to national standards

    Measurement today is more valuable than ever. We depend on measurement for almost

    everything - from time keeping to weather forecasts, from DIY work at home to heavy-duty

    manufacturing, industrial research and medical science.

    Since measurement plays such a fundamental part in our lives, it is important that the accuracy

    of the measurement is fit for purpose, i.e. it fully meets the requirements of the application.

    Every measurement is inexact and therefore requires a statement of uncertainty to quantify

    that inexactness. The uncertainty of a measurement is the doubt that exists about the result of

    any measurement.

    One way of ensuring that your measurements are accurate is by tracing them back to national

    standards. This method of guaranteeing a measurement's accuracy through an unbroken chain

    of reference is called traceability.

    Accurate Measurement

    Accurate measurement enables us to:

    - Maintain quality control during production processes
    - Comply with and enforce laws and regulations
    - Undertake research and development
    - Calibrate instruments and achieve traceability to a national measurement standard
    - Develop, maintain and compare national and international measurement standards

    Successful measurement depends on the following:

    - Accurate instruments
    - Traceability to national standards
    - An understanding of uncertainty
    - Application of good measurement practice


    There are many factors that can cause inaccuracy:

    - Environmental effects
    - Inferior measuring equipment
    - Poor measuring techniques

    Measuring devices:

    1. Mass spectrometer

    Mass Spectrometer

    The mass spectrometer is an instrument that can measure the masses and relative
    concentrations of atoms and molecules. The design of a mass spectrometer has three essential
    modules: an ion source, which transforms the molecules in a sample into ionized fragments; a
    mass analyzer, which sorts the ions by their masses by applying electric and magnetic fields;
    and a detector, which measures the value of some indicator quantity and thus provides data
    for calculating the abundance of each ion fragment present.

    The mass spectrometer technique has both qualitative and quantitative uses, such as:

    1. Identification of unknown compounds.

    2. Determining the isotopic composition of elements in a compound.

    3. Determining the structure of a compound by observing its fragmentation.

    4. Quantifying the amount of a compound in a sample using carefully designed methods.

    5. Studying the fundamentals of gas-phase ion chemistry.

    6. Determining other physical, chemical or biological properties of compounds.


    Mass spectrometers are sensitive detectors of isotopes based on their masses. They are used in

    carbon dating and other radioactive dating processes. The combination of a mass spectrometer

    and a gas chromatograph makes a powerful tool for the detection of trace quantities of

    contaminants or toxins. A number of satellites and spacecraft have mass spectrometers for the

    identification of the small numbers of particles intercepted in space.

    Mass spectrometers are also widely used in space missions to measure the composition of

    plasmas. For example, the Cassini spacecraft carries the Cassini Plasma Spectrometer (CAPS),

    which measures the mass of ions in Saturn's magnetosphere.

    Mass spectrometry is an important method for the characterization of proteins.

    Pharmacokinetics is often studied using mass spectrometry because of the complex nature of

    the matrix (often blood or urine).

    Mass spectrometers are used for the analysis of residual gases in high vacuum systems.

    An atom probe is an instrument that combines time-of-flight mass spectrometry and field ion

    microscopy to map the location of individual atoms.


    There are numerous different kinds of mass spectrometers, all working in slightly different

    ways, but the basic process involves broadly the same stages.

    1. You place the substance you want to study in a vacuum chamber inside the machine.

    2. The substance is bombarded with a beam of electrons so the atoms or molecules it

    contains are turned into ions. This process is called ionization.


    3. The ions shoot out from the vacuum chamber into a powerful electric field (the region that develops between two metal plates charged to high voltages), which makes them

    accelerate. Ions of different atoms have different amounts of electric charge, and the

    more highly charged ones are accelerated most, so the ions separate out according to

    the amount of charge they have. (This stage is a bit like the way electrons are

    accelerated inside an old-style, cathode-ray television.)

    4. The ion beam shoots into a magnetic field (the invisible, magnetically active region between the poles of a magnet). When moving particles with an electric charge enter a

    magnetic field, they bend into an arc, with lighter particles (and more positively charged

    ones) bending more than heavier ones (and more negatively charged ones). The ions

    split into a spectrum, with each different type of ion bent a different amount according

    to its mass and its electrical charge.

    5. A computerized, electrical detector records a spectrum pattern showing how many ions arrive for each mass/charge. This can be used to identify the atoms or molecules in the

    original sample. In early spectrometers, photographic detectors were used instead,

    producing a chart of peaked lines called a mass spectrograph. In modern spectrometers,

    you slowly vary the magnetic field so each separate ion beam hits the detector in turn.
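Stage 4 can also be illustrated numerically. For an ion of mass m, charge q and speed v in a uniform field B, the arc radius is r = mv/(|q|B), so lighter or more highly charged ions bend more. This is a sketch of that relation, not the text's procedure, and the speed and field values below are assumptions:

```python
# Radius of the circular arc an ion follows in a magnetic field:
# r = m*v / (|q|*B). Lighter ions have a smaller radius, i.e. bend more.
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def bend_radius(mass_amu, charge_e, speed, b_field):
    """Arc radius in metres for an ion of the given mass and charge."""
    return (mass_amu * AMU * speed) / (charge_e * E_CHARGE * b_field)

# Two singly charged ions at the same speed in the same field:
r_co2 = bend_radius(44.0, 1, 1.0e5, 0.5)  # CO2+ (heavier)
r_n2 = bend_radius(28.0, 1, 1.0e5, 0.5)   # N2+ (lighter, bends more)
print(r_n2 < r_co2)  # True: the lighter ion follows the tighter arc
```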

    2. Bomb Calorimeter

    Bomb Calorimeter

    A bomb calorimeter is used to determine the enthalpy of combustion of hydrocarbons, or the
    calorific value of a material. Because the reaction being measured generates a large pressure,
    the vessel has to be able to withstand it. Electrical energy is used to ignite the fuel. The
    surrounding air is heated as it expands and escapes through a copper tube, which carries the
    air out of the calorimeter. It also heats the water outside the copper tube of the calorimeter,
    and the temperature rise of this water is used to calculate the calorific value of the fuel. A
    bomb calorimeter consists of a strong cylindrical stainless steel bomb in which combustion of
    the fuel occurs; a lid which is screwed onto the body of the bomb to form a perfectly gas-tight
    seal; two stainless steel electrodes and an oxygen inlet valve; a copper tube surrounded by air
    and water to prevent heat loss due to radiation; an electrically operated stirrer to maintain a
    uniform distribution of heat; and a Beckmann thermometer to measure the temperature
    increase.


    Since the bomb is a sealed stainless steel vessel, combustion occurs at constant volume and no
    expansion work is done; because the insulated calorimeter also exchanges no heat with its
    surroundings, the change in internal energy of the calorimeter as a whole is zero, and the
    energy released by combustion appears as a temperature rise. Estimating the heat capacity is
    one way of calibrating the bomb calorimeter. The heat capacity can be estimated by
    considering the calorimeter to consist of 450 g of water and 750 g of stainless steel. For a more
    accurate result, the heat capacity of the bomb calorimeter must be measured by depositing a
    known amount of energy into it and observing the increase in temperature. This can be done
    by burning a standard with a known internal energy change, or by doing electrical work by
    passing a current through a resistor. The working steps of the bomb calorimeter are as follows.
    A weighed mass of fuel, between 0.5 and 1.0 g, is placed in a crucible supported over a ring. A
    fine magnesium wire touching the fuel sample is stretched across the electrodes. The lid is
    closed tightly and the bomb is filled with oxygen to a pressure of 25 atm. The bomb is then
    lowered into a copper calorimeter containing a known mass of water. The water is stirred and
    the initial temperature is recorded. The circuit is completed by connecting the electrodes to a
    6 V battery. The fuel sample burns and heat is released. Uniform stirring of the water is
    continued and the final temperature is noted.
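The heat-capacity estimate described above (450 g of water plus 750 g of stainless steel) can be sketched numerically. The specific heat values are standard figures, and the fuel mass and temperature rise are made-up illustration numbers:

```python
# Estimated heat capacity of the calorimeter, and the calorific value of
# a fuel from the observed temperature rise: Q = C * dT, value = Q / m.
C_WATER = 4.18   # specific heat of water, J/(g K)
C_STEEL = 0.50   # typical specific heat of stainless steel, J/(g K)

def calorimeter_heat_capacity(m_water=450.0, m_steel=750.0):
    """Heat capacity of the calorimeter, J/K."""
    return m_water * C_WATER + m_steel * C_STEEL

def calorific_value(fuel_mass_g, delta_t_k, heat_capacity=None):
    """Heat released per gram of fuel, J/g."""
    c = heat_capacity if heat_capacity is not None else calorimeter_heat_capacity()
    return c * delta_t_k / fuel_mass_g

print(round(calorimeter_heat_capacity(), 1))   # about 2256 J/K
print(round(calorific_value(0.75, 3.2), 1))    # J/g for the sample values
```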


    Primary standard

    In order to prepare a standard solution, an accurately weighed amount of a highly pure
    substance is dissolved. This substance is known as a primary standard. A solution cannot be
    prepared with an accurately known concentration if the substance to be dissolved is not a
    primary standard; such a solution is instead standardized by titrating it against another
    solution prepared by dissolving a weighed quantity of a primary standard substance. A primary
    standard solution can be prepared directly by weighing the substance and then making up a
    definite volume of solution. It is used in volumetry as a reference solution for the
    standardization process. A primary standard must be highly pure, soluble in the desired
    solvent, stable and unaffected by the atmosphere, must not decompose in the presence of the
    solvent, should have a large equivalent weight so that weighing errors are minimized, and its
    solution should not deteriorate on keeping. Concentrations based on a primary standard can
    be determined to a high level of precision and reliability. For example, a typical acid-base
    titration can be used to determine the concentration of an unknown hydrochloric acid
    solution; however, if it were titrated against sodium hydroxide, some uncertainty would
    remain because of the limited reliability of the sodium hydroxide solution. Primary standard
    substances are not always used in standardization.

    Secondary standard

    Substances that do not fulfil the requirements of a primary standard, and whose solutions
    therefore cannot be prepared by direct weighing, need to be standardized against a primary
    standard; once standardized they are stable enough for titrimetric work, and they are known
    as secondary standards. A secondary standard does not have all the properties of a primary
    standard, but it must undergo a stoichiometric reaction with a primary standard that has a
    well-defined end point. A secondary standard solution is prepared from a pure substance that
    is not very stable at room temperature because it is hygroscopic or undergoes decomposition,
    oxidation or reduction. Such a solution cannot be prepared at an exact concentration simply
    by weighing the substance and diluting it to a definite volume; to know the exact
    concentration, standardization against a primary standard solution must be carried out. An
    example of a secondary standard is potassium permanganate, which has to be standardized
    first and then used for quantitative analysis. Substances such as oxalic acid are also used as
    secondary standards: they contain water of crystallization and are therefore difficult to dry, so
    they serve as secondary rather than primary standards.

    Working standard

    A working standard is a solution with an exactly known concentration of an element, prepared
    by using a primary or secondary standard.


    3. Triple beam balance

    The measurement principle of the triple beam balance is that it measures mass. The mass you
    measure is the same on the Moon as on Earth: gravity is taken out of the equation, unlike a
    spring balance, which measures weight and, because it relies on gravity, would show an article
    as 1/6 of its Earth weight on the Moon. The principle is the moment (turning force, or torque),
    calculated as force times distance. The force is gravity acting on each side of the fulcrum of
    the balance, and the distance is the distance from the fulcrum; gravity is the same on both
    sides wherever you are. If the moments on the two sides of the fulcrum are equal, the beam
    will be horizontal. The balance beam thus compares torques, which are equal on each side
    when the beam is balanced, and torque is a function of arm length and applied force. The
    conclusion is that a balance beam measures true mass, not weight, which changes with
    changing gravitational force.

    The measurement uncertainty of a triple beam balance is 0.02 g; its sensitivity is 0.1 g.

    Method & procedure used:


    To measure masses very precisely, use a triple beam balance. The reading error is 0.05 gram.

    When the pan is empty, move the three sliders on the three beams to their leftmost positions,
    and the balance should read zero. If the indicator on the right does not line up with the fixed
    mark, calibrate the balance by turning the set screw on the left under the pan.

    When the balance has been calibrated, place the object on the pan.

    Move the 100-gram slider to the right until the indicator drops below the fixed mark. The
    notch to the left of this point gives the number of hundreds of grams.

    Next, move the 10-gram slider to the right until the indicator drops below the fixed mark. The
    notch to the left of this point gives the number of tens of grams.

    The beam in front is not notched so the slider can move anywhere along the beam. The units

    used for this beam are grams and the tick marks between the boldface numbers indicate tenths

    of grams.

    The total mass of the object is found by adding the readings from the three beams.
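The reading procedure amounts to summing the three beam positions; a trivial sketch with made-up slider values:

```python
# Total mass from a triple beam balance: the sum of the readings on the
# hundreds, tens and units beams (illustrative values below).
def total_mass(hundreds_g, tens_g, units_g):
    return hundreds_g + tens_g + units_g

print(total_mass(100.0, 30.0, 2.7))  # 132.7 g
```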

    Calibrating here means bringing the balance to its zero state. A triple beam balance can be
    calibrated as follows:

    - Slide all riders back to the zero point.
    - Make sure that the pointer rests at the zero mark on the indicator line. If not, turn the
      adjustment knob until they are in line.

    How the traceability to national standards can be maintained:

    Measuring equipment should be tested against a standard of higher accuracy. National
    standards laboratories such as NPL undertake international comparisons in order to establish
    worldwide agreement on the accepted values of fundamental measurement units. Seventeen
    nations signed the Convention of the Metre in Paris on 20 May 1875. This provided the
    foundations for the establishment of the International System of Units (international
    abbreviation SI) in 1960. Since then, national standards laboratories have cooperated in the
    development of measurement standards that are traceable to the SI. Any organization can
    achieve traceability to national standards through the correct use of an appropriate traceable
    standard from NPL.
