MOVE
Performance based model for mesopic photometry
Mesopic Optimisation of Visual Efficiency


AB HELSINKI UNIVERSITY OF TECHNOLOGY

Lighting Laboratory


Helsinki University of Technology, Lighting Laboratory
P.O. Box 3000, FIN-02015 HUT, Finland
www.lightinglab.fi
Report No. 35
ISBN 951-22-7566-X
ISSN 1455-7541
Espoo, Finland 2005
Editors: Marjukka Eloholma, Liisa Halonen


Project: MOVE – Mesopic Optimisation of Visual Efficiency

PROJECT CO-ORDINATOR:

Helsinki University of Technology (HUT) FIN

PARTNERS:
City University (CU), UK
National Physical Laboratory (NPL), UK
TNO Human Factors (TNO), NL
Darmstadt University of Technology (TUD), D
University of Veszprém (UV), HU

Project duration: 2002-2004

Project funded by the European Community under the ‘Competitive and Sustainable Growth’ Programme (1998-2002)


Performance based model for mesopic photometry

Table of Contents

1 Introduction
2 New approach for developing mesopic photometry
   2.1. Night-time driving
   2.2. Multi-technique experimental method
   2.3. Common set of parameters
3 Performance based methods in establishing visibility data
   3.1. Achromatic contrast threshold
   3.2. Reaction time
   3.3. Recognition threshold
4 Visual performance modelling
   4.1. Linear i.e. practical model
   4.2. Chromatic model
5 Model for mesopic photometry
   5.1. The recommended model
   5.2. Applicability of the model
References
Contact information of MOVE project partners


1 Introduction

Mesopic photometry has a long history, but most mesopic photometric models have concentrated on brightness evaluation. More recently, in traffic lighting applications, the attention of illuminating engineering has turned towards the question of the most efficient lighting for early recognition of obstacles and dangerous situations. It is well known from other investigations that photopic visibility of details is not a brightness contrast recognition task but is better described by luminance contrast. At present there are no internationally recognised luminous efficiency functions for the mesopic region. As a result, there are no products, policies or standards dealing with mesopic light levels. New luminous efficiency functions are needed to describe mesopic visibility correctly, which is the key issue for applications such as traffic lighting.

A consortium of European institutions prepared a project proposal for the European Commission to establish mesopic luminous efficiency functions and to set up adequate working practices accepted throughout the European Community and world-wide. The EC project MOVE (Mesopic Optimisation of Visual Efficiency) was carried out during 2002-2004 within the EC Fifth Framework Programme (G6RD-CT-2001-00598). The project consortium brought together multi-disciplinary expertise in the areas of lighting engineering, vision science, metrology, human behaviour and image processing. The outcome of the project is a mesopic model based on visual task performance.

The objective of the project was to define relevant spectral sensitivity functions for the luminance range of 0.01-10 cd/m², where standardisation is most urgently needed. To find out how the eye's spectral sensitivity, lighting conditions and visual stimuli interact, a large number of experiments with the human eye as the detector were needed. Earlier research has shown that different spectral responsivity functions can be measured for different visual tasks. Therefore the MOVE consortium decided first to consider the different measuring methods that could lead to the determination of the visibility function, and it was decided that different techniques should be used at the different laboratories. The vision experiments of the project generated data for mesopic visibility functions using several visual criteria.

The majority of spectral luminous efficiency curves obtained to date in the mesopic range have been acquired using heterochromatic brightness matching. In MOVE, the emphasis was placed on night-time driving performance and on describing luminous efficiency in a realistic way, using methods such as contrast threshold, reaction time and recognition, which are closely linked to driving performance. A common set of parameters was used in the experiments to ensure that, although the partners were using different equipment and techniques, there was a degree of compatibility between them. This assisted with the comparison of results and their use in the development of a model for mesopic photometry.

The vision experiments carried out in the project split the task of night-time driving into three sub-tasks, each of which was investigated separately. These sub-tasks can be characterised by the questions ‘Can it be seen?’ (contrast threshold), ‘How quickly?’ (reaction time) and ‘What is it?’ (recognition threshold). Based on the experimental data a practical model for mesopic photometry for night-time driving was developed. The practical model is applicable for all three sub-tasks studied within this project, and for the general task of night-time driving, in situations where the background and target both have fairly broad spectral power distributions. This encompasses most situations that will be encountered in practice, including situations involving LED sources. It should therefore be considered for implementation by highways agencies, road lighting designers, lighting manufacturers, regulatory authorities and all other organisations and users who may be working in the mesopic domain.

The practical model is proposed to the CIE (Commission Internationale de l’Éclairage) for consideration as the basis for a task-based system of mesopic photometry. The results of the project are integrated in the CIE TC1-58 work to form the basis of a new standard on performance based mesopic photometry.


2 New approach for developing mesopic photometry

2.1. Night-time driving

The measurement of mesopic spectral luminous efficiency depends on the experimental method used. Earlier work on mesopic photometry has used experimental techniques that consider only one aspect of visual performance, such as recognition of brightness differences or measurement of thresholds. In general, the process of seeing consists of searching, seeing, perceiving and recognising. Flicker photometry and direct brightness matching are not representative of the tasks undertaken whilst driving at night-time; rather, visual performance during driving consists of the steps mentioned above. Therefore, these aspects should be considered in a new approach to developing a model for mesopic vision. In MOVE the emphasis was placed on night-time driving and on describing luminous efficiency in a realistic way.

The visual performance of night-time driving was divided into three visual subtasks characterised by the questions:

1. Can it be seen?
2. How quickly?
3. What is it?

The first subtask – Can it be seen? – is related to the detection threshold, i.e. the minimum luminance contrast of a target against its surroundings that is necessary for observers to become aware of objects in their visual field.

The second subtask – How quickly? – is related to reaction times, i.e. the time between the onset of a visual stimulus and the detection response to that stimulus under conditions where the observer is instructed to respond manually by pressing a button as quickly as possible. In night-time driving conditions reaction times play an important role in safe driving.

The third subtask – What is it? – is related to recognition and identification of the target, i.e. the perception of fine details. This subtask describes how the target, after being detected, is recognised from its visual details so that a more conscious and deliberate action can be initiated.

Figure 1. Visual performance of night-time driving characterised by three visual subtasks.


2.2. Multi-technique experimental method

The idea of MOVE was to generate data for mesopic visibility functions using several visual criteria. No investigation at this depth had been undertaken previously. The project steered away from conventional techniques, in which only one aspect of visual performance is considered at a time, and developed a multi-technique system.

Many test methods are needed to generate the required visibility data. Not all test parameters and test methods can be covered with one experimental technique and one test apparatus. The only way to obtain enough data for building a comprehensive mesopic model is to generate visibility data from various experiments using different experimental techniques. This was implemented by dividing the work between the partner laboratories so that the tests covered all the required methods and parameters. After careful consideration of the required visibility data, the consortium developed experimental techniques to quantify the visibility of targets when performing each of the three visual tasks. Work between the parallel experiments was distributed and linked in a way that allowed the exchange of data between test locations and the input of data from one test to another.

For each visual subtask, data were generated simultaneously in two to four laboratories, using a different experimental method in each location. This approach differs from earlier techniques: the consortium developed an alternative multi-technique system in which the visual performance of driving is described by three different subtasks.

The parallel tests complemented each other in two ways. Firstly, the tests were carried out using different experimental techniques based on different visual criteria. Secondly, information was exchanged between the different test locations, so the same test parameters and parameter combinations could be examined in different locations. During the project a combined analysis of all the test data was made by all partners. Thus information was constantly exchanged between test locations, and joint decisions on further work and parameter adjustments could be made during the course of the project.

Figure 2. In MOVE, vision experiments were conducted at five partner laboratories using a common set of parameter values, but different experimental methods and equipment.


2.3. Common set of parameters

The vision experiments to generate the visibility data for the new model included investigations using quasi-monochromatic light (hbw = 10 nm), narrow-band (hbw = 15-40 nm) light and light with broadband spectra. A decision was taken early in the project to define a common set of conditions to be used in the experiments, to ensure that although all the partners were using different equipment and techniques there was a degree of compatibility between them, which would assist with the comparison of results and their use in the development of a model for mesopic photometry. The selected conditions were:

• Background luminance: 0.01, 0.1, 1, 10 cd/m² (photopic)
• Target eccentricity: 0° and 10°
• Target size: 2° and 0.3°
• Nearly steady presentation, with ∆t ≥ 3 s (or ∆t ≤ 500 ms for part of the RT experiments)

The aim of the foveal (0°) investigations was to examine whether V(λ) can be used to adequately describe the spectral luminous efficiency of small centrally viewed targets in the mesopic range.

It was foreseen that a proper connection with the present photopic function would be needed, and thus the highest luminance level of 10 cd/m² was also included. In addition to the luminance levels given above, experiments were also conducted at the intermediate levels 0.03, 0.3 and 3 cd/m² to provide data for validating the new model.

The total number of observers in the visibility experiments was 109. In addition, experiments to provide supportive data for describing mesopic vision were conducted with 14 observers, but these were not included in the actual modelling process.

All the optical radiation measurement equipment used to conduct these vision experiments was calibrated by NPL against the UK photometric and spectroradiometric scales. Linking all the research measurements to a common scale in this way helps to ensure that the results from all the partner laboratories are compatible and can be combined into a single model, and also aids in gaining international acceptance for recommendations made as a result of the work.


3 Performance based methods in establishing visibility data

3.1. Achromatic contrast threshold

Achromatic threshold, i.e. the increment and/or decrement of a visual target's intensity around the threshold for detecting the target, was the method used to generate data for the first subtask. Achromatic threshold data were generated using sustained presentation (> 3 s) with three different experimental settings: a uniform hemisphere (modified Goldman perimeter), a large homogeneous screen, and a screen with a computer-controlled projector.

Modified Goldman perimeter

The perimeter at HUT comprised a 600-mm diameter white-painted hemisphere, which gave a uniform background luminance over the entire visual field. The 1.5° × 2° elliptical stimulus was reflected onto the surface of the hemisphere. The hemisphere surface luminance and the stimulus luminance could be controlled independently.

The visual stimuli were produced using blue, green and red coloured filters, which transmitted in the wavelength bands 400-500 nm, 500-600 nm and 600-700 nm, respectively. The hemisphere surface spectrum could be either white light produced by a daylight metal halide lamp (correlated colour temperature 4000 K) or coloured red, green or blue. The stimulus was superimposed on the background, so the spectrum of the stimulus was a combination of the background and stimulus spectra, and the stimulus luminance was always higher than the background luminance. The target was presented on-axis or at 10° eccentricity from the foveal fixation. Viewing was binocular.

The subject was asked to adjust the stimulus luminance manually to minimise its contrast relative to the background. Two threshold measurements were then made in each direction: the subject found the point at which he/she could just detect the target by increasing the target luminance (increase value) and by decreasing the target luminance (decrease value). The threshold contrast was determined as the average of two increase and two decrease threshold values.
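As an illustration of how such settings translate into a single threshold value, a minimal Python sketch is given below. It assumes the Weber definition of contrast, C = (Lt − Lb)/Lb; the report itself states only that the threshold contrast is the average of two increase and two decrease settings, so the contrast definition and the function name are illustrative assumptions.

    def threshold_contrast(increase_settings, decrease_settings, background_luminance):
        # Weber contrast C = (Lt - Lb) / Lb is assumed here; the report does not
        # spell out which contrast definition was used in the averaging.
        settings = list(increase_settings) + list(decrease_settings)
        contrasts = [(lt - background_luminance) / background_luminance for lt in settings]
        return sum(contrasts) / len(contrasts)

    # Example with two increase and two decrease settings around a 0.1 cd/m2 background:
    # threshold_contrast([0.118, 0.121], [0.112, 0.115], 0.1) returns approximately 0.165.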

Figure 3. The modified Goldman perimeter (HUT).


Large homogeneous screen

For the large homogeneous screen at TUD, a slide projector was used to provide an adapting background with a colour temperature of approximately 2856 K. The slide projector illuminated a screen that was painted matt white. To avoid desaturation of the stimuli, the target was not superimposed on the background but was presented through a hole in the screen. The aperture of the hole defined the size of the target, which was a 2°-diameter circle.

The light of a metal halide lamp passed through several lenses and a Pockels cell to produce the pulses of light that formed the target (a rectangular pulse of 3 s duration). The luminance of the target could be adjusted by changing the output level of the Pockels cell using a 12-bit digital input controlled by a computer. A polariser was placed ahead of the Pockels cell and an analyser behind it. After passing through the Pockels cell the light beam was widened and focused to the observer's eye.

Monochromatic filters were used to modify the spectral power distribution of the target; the half-bandwidth (HBW) of the filters was ≤ 10 nm. The target was presented on-axis or at 10° eccentricity. To determine the achromatic threshold for a coloured stimulus, the photopic luminance of the stimulus was increased manually until the observer just perceived the pulse. Each trial started with three presentations of the λ = 550 nm stimulus; the wavelength was then decreased from λ = 700 nm to λ = 380 nm in 10 nm steps. Observations were made monocularly with the right eye.

Figure 4. Test set-up for measuring modified absolute thresholds (TUD): a slide projector provides the background on the screen; the target beam is formed by a light source, polariser, Pockels cell, analyser and filter, under a control unit; the observer views the screen from a chin rest.


Screen with computer-controlled projector

The experimental set-up for the increment threshold measurements at UV consisted of two projectors: a slide projector, which provided the adapting background, and a data projector (video projector), which superimposed the target stimulus on the background. For the background the projector was set to a correlated colour temperature of approximately 2856 K, and different background luminance levels were achieved by use of neutral filters in front of the projector.

Metal interference filters were placed in front of the data projector to produce the target spectra. The wavelength of nominal maximum transmittance of the interference filters varied between λ = 420 nm and λ = 700 nm in 10 nm steps, and the half-bandwidth (HBW) of the filters was 10 nm. The data projector was computer-controlled, enabling the luminance of the target to be adjusted by the operator and controlling the position at which the target was displayed (either centrally or at 10° eccentricity). The target was a 2° diameter disk presented for 3 seconds (quasi-static observation). The observer had to respond to every presentation, indicating whether he/she could see the target or not. The luminance of the target was adjusted at each wavelength by the operator. Viewing was binocular.

Figure 5. Test set-up for measuring increment thresholds (UV): a slide projector with an achromatic filter package provides the background on the screen, a computer-controlled data projector with interference filters projects the target disk, and the observer responds via a keyboard connected to the PC.

Table 1 summarises the experimental conditions and parameters of the achromatic contrast threshold measurements in the three laboratories.

Table 1. Conditions and parameters used in achromatic threshold experiments.

Subtask 1. Can it be seen?

Partner | Method | Target | LB (cd/m²) | Background spectrum | Target spectrum | Ecc. | Subjects
HUT | Achromatic contrast threshold | 1.5° × 2° | 0.01, 0.1, 1, 10 | blue, green, red (100 nm) / white 4000 K | blue, green, red (100 nm) | 0°, 10° | 19
TUD | Absolute threshold method | 2° | 0.01, 0.1, 1, 10 | similar to Illuminant A | 380-700 nm in 10 nm steps, hbw = 10 nm | 0°, 10° | 6
UV | Increment threshold method | 2° | 0.01, 0.1, 1 | similar to Illuminant A | 450-700 nm, hbw = 10 nm | 0°, 10° | 10


3.2. Reaction time

Reaction times were measured for a number of coloured targets with different spectral characteristics. Reaction time data were generated with four different experimental settings: large uniform hemisphere, computer controlled CRT display, driving simulator and large homogeneous screen.

Large uniform hemisphere

A large uniformly illuminated hemispherical surface was used as the background in the reaction time measurements at HUT. The surface provided a background field of 180° × 180°, which subjects viewed at a distance of 99 cm. The interior was painted white and illuminated with daylight fluorescent lamps (correlated colour temperature 5400 K), which were located diagonally near the edges of the hemisphere. The fluorescent lamps were dimmed with neutral density filters, and fine adjustment of the background luminance was made with dimmable electronic ballasts. Due to the filtering and light output adjustment of the lamps, the correlated colour temperatures of the background were reduced to approximately 4360 K, 4820 K and 5020 K, respectively.

The stimuli were produced with light emitting diodes (LEDs). The target was a rectangular pulse of duration 3000 ms or 500 ms. Five stimulus colours were used in the measurements: blue, cyan, green, amber and red. The peak wavelengths of the LEDs were 466 nm, 503 nm, 522 nm, 594 nm and 639 nm, respectively, and their half-bandwidths were between 16 nm and 37 nm. The LEDs were positioned on the outside of the hemisphere in holes pointing towards the observer. The visual size of the stimulus was 0.29° (17.4′). The target was presented on-axis or at 10° eccentricity.

The subject fixated on the centre of the hemisphere (the foveal stimulus location) with his/her chin and forehead supported by a rest. The subject was instructed to press a response button as quickly as possible after seeing the target. Reaction times were recorded using a system flash meter with 1 ms resolution. Viewing was binocular.

Figure 6. In the large uniform hemisphere LEDs (visual targets) were presented in foveal and in peripheral vision at 10° eccentricity (HUT).


Computer controlled CRT display

Reaction time measurements were performed at CU using a CRT display-based system running a custom-designed, computer-controlled program. The subject was positioned at a viewing distance of 70 cm with the aid of a chin rest and forehead support. At this distance the background field subtended 23° × 36° of visual angle. The background was uniform and of fixed chromaticity (x = 0.305, y = 0.323).

Neutral density filters were used to reduce the luminance of the target and background, with fine adjustments made by modifying the voltage across the red, green and blue electron guns. The filters were mounted between the display and the subject, and a hood prevented any light from the display reaching the subject without first passing through the filter.

A target based on a Landolt ring was presented at one of six different positions on a circle with a radius of 10° visual angle centred on the fixation marker. The target ring had an outer diameter of 2° visual angle and an inner diameter of 1.2° (thickness 0.4°), with a 45° sector removed to provide a gap. An example of the stimulus is shown in Figure 7. The target ring was presented for 500 ms. The subject was required to press a button as soon as he/she detected the target. Reaction times were recorded with a temporal resolution of 1 ms. Reaction times were measured for achromatic targets, i.e. targets with the same relative spectral power distribution (SPD) as the background, and for coloured targets, i.e. targets with relative SPDs different from the background. The stimulus was viewed binocularly.

Figure 7. An example of the stimuli used in the reaction time study. A target based on a Landolt ring with an outer diameter of 2° is flashed for 500 ms at one of six positions on a circle with a radius of 10° centred on the fixation marker (CU).

Driving simulator

The TNO driving simulator had a real car housing and instruments (Figure 8). Subjects viewed a road scene projected onto a cylindrical screen by three red, green and blue projectors. At a viewing distance of 3.75 m the scene subtended 30° in height and 120° in width (made up of 40° sections from each projector). The scene consisted of a single-lane road corresponding to a width of 3.5 m, a textured roadside to provide optic flow, and a uniform sky. The simulated road included straight sections plus bends to the right and the left over a route of 14 km. The ratio of the mean luminances of road : roadside : sky was 0.4 : 1 : 1.3. The target was a circle, 2° in diameter. It was presented 1.1° below the horizon, on the roadside, at eccentricities of ±10° from the centre of the road.

The luminance of the adapting background was taken as the luminance of the roadside. Subjects had to steer the car along the simulated road and press a button as soon as they detected a target. The response button was taped to the subject's right index finger and was activated by tapping it against the steering wheel. The course was driven at a fixed speed of 70 km/h and again at a fixed speed of 100 km/h. Reaction times were recorded, with a resolution of 1 ms, by the computer that controlled the driving simulator. Targets were presented until a response was recorded, up to a maximum of 3 seconds. The period between target presentations varied randomly between 7 and 11 s.


Subjects wore goggles fitted with neutral density filters to reduce the luminance of the display and fine adjustments were made by modifying the red, green, and blue output of the projectors. Viewing was binocular.

Figure 8. Driving simulator (TNO).

Large homogeneous screen

Reaction times were measured at TUD with a set-up identical to that used for the modified absolute threshold measurements (Section 3.1), but a different procedure was used for the measurement of reaction times. In the reaction time experiments the irradiance of the target was first increased, by adjusting the output of the Pockels cell, until the target was just visible. The irradiance of the target was then set at a number of levels above this just-visible level, and 2-4 reaction times were recorded for each level. Targets were circular with a diameter of 2° and were presented for 3 seconds at 10° eccentricity. The period between presentations varied randomly between 2 and 10 s. Reaction time was recorded by the computer that controlled the optical system. Viewing was monocular.

Table 2 summarises the experimental conditions and parameters of the reaction time measurements in the four laboratories.

Table 2. Conditions and parameters used in reaction time experiments.

Subtask 2. How quickly?

Partner | Method | Target | LB (cd/m²) | Background spectrum | Target spectrum | Ecc. | Subjects
HUT | RT, large hemisphere | 0.3° | 0.01, 0.1, 1, 10 | daylight 5400 K | 5 LEDs, hbw = 16-37 nm | 0°, 10° | 23
CU | RT, CRT display | 2° | 0.01, 0.1, 1, 10 | broadband white | different broadband | 10° | 11
TNO | RT, driving simulator | 2° | 0.01, 0.1, 1, 10 (10 not with red, blue) | broadband white, yellow, red, blue | same as background | ±10° | 23
TUD | RT, large screen | 2° | 0.3, 1 | similar to Illuminant A | 380-700 nm in 10 nm steps, hbw = 10 nm | 10° | 7


3.3. Recognition threshold

Screen with computer-controlled projector

Achromatic recognition threshold was measured using a screen with a computer-controlled projector. This set-up was similar to the one in Figure 5, except that in the achromatic recognition threshold measurements the visual target was a Landolt-C ring. The signal sent to the data projector was generated by a computer program that provided the presentation of a 2° diameter Landolt-C with manual setting of its luminance and, for the experiment itself, the presentation of the 2° Landolt-C with different directions of the opening. The target was presented either foveally or peripherally at 10° eccentricity, and the presentation time was 3 seconds (quasi-static observation). The observer had to respond to every presentation, indicating whether he/she could recognise the opening direction of the Landolt-C ring. The viewing was binocular.

Table 3. Conditions and parameters used in recognition threshold experiments.

Subtask 3. What is it / Can it be resolved?

Partner | Method | Target | LB (cd/m²) | Background spectrum | Target spectrum | Ecc. | Subjects
UV | Recognition threshold | 2° | 0.01, 0.1, 1 | similar to Illuminant A | 450-700 nm, hbw = 10 nm | 10° | 10


4 Visual performance modelling

4.1. Linear i.e. practical model

A model for mesopic photometry was developed on the basis of the results from the vision experiments carried out in the project. An important consideration within the MOVE project was to draw a distinction between a practical system of mesopic photometry and a model of the eye response in the mesopic. A practical system of photometry must be grounded on human vision and allow predictions of task performance to be made which are in reasonable agreement with the actual ability to perform these tasks. It will not necessarily describe the details of the performance of the human visual system, however, nor will it predict the actual spectral sensitivity of the eye under any given conditions. In this way mesopic photometry is no different from the system of photometry that has been developed for photopic vision and which is based on the use of the internationally agreed photopic luminous efficiency function, V(λ).

The majority of the experiments in MOVE used relatively broadband targets presented against a white or coloured (broadband) background of specified photopic luminance; these results could be fitted to various potential forms of model to determine the relevant model parameters and to test the validity of the candidate models. For some of the experiments, however, narrow-band (quasi-monochromatic) targets were used, which enabled the appropriate spectral sensitivity functions to be determined more directly.

The key aim of the MOVE project was to develop a practical system of photometry that is based on the ability of the eye to perform the task of night-time driving (or sub-tasks of this key task), that obeys Abney's law of additivity and, critically, that will be suited to practical implementation by road lighting engineers, specifiers, instrument manufacturers etc. It was recognised that this need for practical implementation would place some constraints on the system developed within the project, namely that the spectral sensitivity curve should tend to the universally adopted photopic luminous efficiency function, V(λ), at the upper end of the mesopic range and to the scotopic luminous efficiency function, V′(λ), at the lower end of the mesopic range. These constraints were therefore applied during much of the modelling process. However, it was also appreciated (based on the results of work by other researchers and on the results of the experiments to measure spectral sensitivity during this project) that the spectral sensitivity curves predicted by such a system would be unlikely to agree with those determined by direct measurement. The directly measured curves typically show a distinctive ‘three peak’ behaviour, which is believed to be associated with the influence of colour channels in the eye response mechanism and which cannot be modelled using combinations of V(λ) and V′(λ). More complex models utilising such response curves were therefore also examined to see how well they fitted the results of the experiments investigating task performance, but it was agreed that such a system would only be recommended if it provided a significant improvement in the fit of the model to the data. In this context, it is important to appreciate that a significant advantage of using combinations of V(λ) and V′(λ) is that the effective mesopic luminance can be determined directly from the photopic and scotopic luminances (measured using broadband photometers), without requiring knowledge of the relative spectral power distribution of the source being measured.

Each type of experiment (reaction time, contrast threshold etc.) was modelled separately with each background light level taken in turn. The model parameters were adjusted to provide the best fit between the modelled behaviour and that actually observed.

For all three tasks considered during this project it was found that a satisfactory practical model (i.e. a model which tends to V(λ) at the upper end of the mesopic region and to V′(λ) at the lower end) takes the form:


M(x)Vmes(λ) = x V(λ) + (1-x) V′(λ), (1)

where Vmes(λ) represents the relative spectral luminous efficiency function under the given conditions and M(x) is a normalising function such that Vmes(λ) attains a maximum value of 1. The parameter x is a function of the photopic and scotopic values of the background quantity (luminance, illuminance etc.).

The modelling approach used meant that x could also be a function of the task, but it was found that an acceptably good fit to all the data sets was obtained without requiring this extra degree of complexity. In other words, a single model of mesopic photometry was found to apply for all three tasks considered in this project. For simplicity this model will be referred to as the ‘practical model’ in the rest of this report.

Using equation (1), the corresponding mesopic background quantity, Qmes, is then given by:

Qmes = k(x) ∫ E(λ) Vmes(λ) dλ    (2)

where k(x) is a constant to convert to the appropriate absolute units for the chosen quantity (this is derived from M(x) above) and E(λ) is the spectral distribution of the background.

For a target eccentricity of 0° (i.e. foveal vision) the results showed that the photopic luminous efficiency curve, V(λ), gives an acceptably good fit to the data at all levels, except at the lowest luminance level of 0.01 cd/m².

The practical model is available in the form of a MATLAB module. This has been both developed and tested using data generated during the MOVE project. Inputs to this module are the relevant photopic and scotopic quantity (luminance, illuminance, etc.); the output is the corresponding mesopic quantity (luminance, illuminance etc.). The software uses an iterative approach to determine the correct value of x for a given background for which the photopic and scotopic values are known. Once x has been determined, the absolute quantity can be calculated as in equation (2).
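As an illustration, a minimal Python sketch of the calculation defined by equations (1) and (2) is given below. The function names and the numerical integration scheme are illustrative choices rather than part of the MOVE deliverables, and the value of x is taken as an input, since the report specifies only that x is determined iteratively from the photopic and scotopic values of the background (see also Table 4 in Section 5.1).

    import numpy as np

    def v_mes(x, V, V_prime):
        # Equation (1): mesopic spectral luminous efficiency for a given x.
        # V and V_prime are the photopic and scotopic functions sampled on a
        # common wavelength grid; dividing by the maximum applies the
        # normalisation M(x) so that Vmes peaks at 1.
        unnormalised = x * V + (1.0 - x) * V_prime
        return unnormalised / unnormalised.max()

    def q_mes(x, wavelengths, E, V, V_prime, k):
        # Equation (2): mesopic background quantity for a spectral
        # distribution E(lambda).  k stands for k(x), the constant that
        # converts to absolute units; its explicit form is not given in this
        # report, so it is supplied by the caller.
        return k * np.trapz(E * v_mes(x, V, V_prime), wavelengths)

With V(λ), V′(λ) and E(λ) tabulated on the same wavelength grid, any measured background spectrum can be converted to a mesopic quantity in this way once x and k(x) are known.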

The MATLAB module encapsulating the practical model for mesopic photometry is supported by a number of other MATLAB modules developed in the course of the project to assist with the modelling. These include algorithms to:

• optimise model parameters allowing for uncertainty and statistical noise in the input quantities using non-linear least square techniques;

• determine the ‘quality of fit’ (or uncertainty) of the model for inputs which have different degrees of reliability and different variables;

• estimate spectral data from broadband data using a maximum information entropy approach.

These supporting modules may be of use to other researchers working in mesopic photometry, as well as to those involved in modelling inhomogeneous data sets in a wider range of applications. Together with the module for the practical model, they are available on the CIE TC1-58 website (www.lightinglab.fi/CIETC1-58/MOVE) and are thus freely available to all researchers with an interest in mesopic photometry or in modelling in general. It is hoped that researchers working in mesopic photometry will also use these tools with their own data.


4.2. Chromatic model

Part of the experiments in the MOVE project were carried out using quasi-monochromatic targets, which allowed the spectral luminous efficiency curves to be measured directly. These experiments comprised four different experimental methods:

• Achromatic absolute contrast threshold (TUD) using circular targets with peak wavelengths spaced at 10 nm intervals from 380 nm to 700 nm and half-bandwidth of 10 nm, against (but not superimposed on) a background similar to CIE Illuminant A

• Achromatic increment threshold measurements (UV) using circular targets with peak wavelengths spaced at 10 nm intervals from 450 nm to 700 nm and half-bandwidth of 10 nm, superimposed on a background similar to CIE Illuminant A

• Reaction time measurements (TUD) using circular targets with peak wavelengths spaced at 10 nm intervals from 380 nm to 700 nm and half-bandwidth of 10 nm, against (but not superimposed on) a background similar to CIE Illuminant A

• Achromatic recognition threshold measurements (UV) using Landolt-C targets with peak wavelengths spaced at 10 nm intervals from 450 nm to 700 nm and half-bandwidth of 10 nm, superimposed on a background similar to CIE Illuminant A

These measurements yielded similar, but not identical, three-peak curves. This distinctive ‘three peak’ behaviour has also been observed by some other researchers and is believed to be associated with the influence of colour channels in the eye response mechanism. The three peaks were least apparent at low background luminances but were strongly evident at intermediate and high levels in the mesopic. This behaviour could not be modelled in such a way that a transformation to V(λ) at the upper end of the mesopic could be achieved. Figure 9 shows the spectral sensitivity function generated by the practical model at the 0.1 cd/m² luminance level together with the spectral sensitivity data measured directly with the achromatic contrast threshold technique at the same luminance level (10° eccentricity). The fit of the practical model to the data is poor. This is not surprising since, as indicated previously, the directly measured sensitivity curves show evidence of the influence of chromatic channels, which is not described by the practical model.

Figure 9. Spectral sensitivity generated using the practical model at 0.1 cd/m² compared with that measured directly using contrast threshold techniques (eccentricity 10°).


A different approach was therefore taken for the analysis of these results: they were modelled using the L(λ), M(λ) and S(λ) functions as well as V(λ) and V′(λ), but only at the specific background luminances for which they had been measured. The fit of these modelled curves to the task performance data was then evaluated. In modelling the directly measured spectral sensitivity curves using the L(λ), M(λ), S(λ), V(λ) and V′(λ) functions, the model takes the form:

Vmes(λ) = a1 V(λ) + a2 V′(λ) + a3 | L(λ) – a4 M(λ)| + a5 S(λ) (3)

This ‘chromatic model’ aims to account for contributions from a cone-based achromatic mechanism (with the spectral sensitivity of V(λ)), the rods (with spectral sensitivity of V'(λ)), the L-M opponent colour channel and a chromatic contribution from the S-cones.
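A corresponding minimal Python sketch of equation (3) is shown below. The coefficients a1-a5 were fitted separately at each background luminance during the project and are not listed in this report, so they are treated simply as inputs; the optional peak normalisation is an assumption made only to ease comparison with relative sensitivity data.

    import numpy as np

    def v_mes_chromatic(V, V_prime, L, M, S, a, normalise=True):
        # Equation (3): chromatic model of mesopic spectral sensitivity.
        # V, V_prime are the photopic and scotopic functions; L, M, S are the
        # cone fundamentals; all are sampled on a common wavelength grid.
        # a = (a1, a2, a3, a4, a5) are coefficients fitted at one background
        # luminance (values not given in this report).
        a1, a2, a3, a4, a5 = a
        v = a1 * V + a2 * V_prime + a3 * np.abs(L - a4 * M) + a5 * S
        return v / v.max() if normalise else v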

Figure 10 shows the spectral sensitivity function generated by the chromatic model at the 0.1 cd/m² luminance level together with the spectral sensitivity data measured directly with the achromatic contrast threshold technique at the same luminance level (10° eccentricity). In this case the fit of the model to the measured points is good, indicating that the chromatic model gives a significantly better prediction of performance than the practical model for tasks involving monochromatic stimuli at threshold.

Figure 10. Spectral sensitivity generated using the chromatic model at 0.1 cd/m² compared with that measured directly using contrast threshold techniques (eccentricity 10°).

The directly-measured curves for foveal vision show significant differences from those at 10°, but still exhibit some three-peaked behaviour and are significantly different from V(λ) and V′(λ). This indicates that a new model of performance may be required for on-axis tasks involving monochromatic stimuli, which is different from both the practical model and the chromatic model. Such tasks, however, are unlikely to arise in night-time driving situations.

Compared to the practical model, the chromatic model gives a significantly better match to the directly measured spectral sensitivity curves (at 10°) and is a better predictor of performance for contrast threshold experiments using monochromatic stimuli. The question therefore arises as to whether the chromatic model also gives a significantly better fit to the results from the experiments using broadband stimuli, which are more closely related to the practical tasks involved in night-time driving. When a chromatic model was fitted to the non-monochromatic reaction time results, it was found that its fit was significantly worse than that of the linear model.


5 Model for mesopic photometry

5.1. The recommended model

As described above, the linear, i.e. ‘practical’, model developed within the MOVE project gives a good match to the results of experiments investigating reaction time and the ability to detect and recognise a target (for a target eccentricity of 10°). In other words, it is applicable to the three sub-tasks identified for study in the project, which are relevant to the task of night-time driving, namely: can an object be seen; how quickly can it be seen; can the object be identified. For the first time, the modelling has included analysis of the influence of uncertainty in the experimental data and allows the uncertainty in the model to be assessed. This analysis shows that although the fit to the data is not perfect, the deviations are sufficiently small that they can be largely explained by uncertainty and noise in the experimental results. Furthermore, the level of agreement between the results for the different sub-tasks is sufficiently good to allow them to be combined into a single model; no significant benefit can be obtained by seeking to optimise the practical model more precisely for each individual task.

A linear model of the form developed in this project could have been based on the use of either V(λ) and V′(λ) or V10(λ) and V′(λ). Investigations showed, however, that the fit to the results with the former is similar to that using the latter, and there is therefore no advantage associated with the use of V10(λ). There are definite advantages to using V(λ), on the other hand, since this ensures greater compatibility with the established system of photometry used in the photopic region.

As also demonstrated above, the practical model does not give a good match to the spectral sensitivity curves obtained using targets with narrow spectral bandwidths. However, the more complex chromatic model developed from these directly measured spectral sensitivity curves does not generally appear to give a better match to the results of the investigations using broadband targets, even where these broadband targets are strongly coloured (e.g. LEDs). Thus there is no advantage to using the chromatic model in most ‘real’ situations.

In addition, the results confirm the findings of other researchers that for a target eccentricity of 0° (i.e. foveal vision) the photopic luminous efficiency curve, V(λ), generally gives an acceptably good prediction of visual performance at all levels, except at 0.01 cd/m². It does not fit well with the directly measured spectral responsivity curves, however; a more complex model may be needed for foveal as well as for off-axis measurements for tasks involving monochromatic stimuli.

It is evident that the chromatic model offers no clear advantage over the practical model in most situations; certainly insufficient advantage to outweigh the disadvantage of its being much more difficult to implement. Furthermore, the chromatic model cannot provide a transition between V(λ) at the upper end of the mesopic and V′(λ) at the lower end, which means that it will meet strong resistance from the general lighting community.

The practical model has been compared with the X model developed by Rea et al. (2004); see Tables 4 and 5, which show the values of x and the mesopic luminances calculated with the MOVE practical model for various S/P ratios and can be compared with a similar table in the paper by Rea et al. Although the general trend in x is similar in both cases, the X model indicates that the transition from mesopic to photopic behaviour (i.e. an x value of 1) occurs at a lower luminance than in the MOVE model. This is probably due to the different experimental conditions used in MOVE and by Rea et al.; each model has been optimised for the individual data sets considered and therefore provides a better fit to those particular data. Because the MOVE model has been optimised for a much wider range of tasks (three in MOVE, one in the X model) and observers (over 120 in MOVE, four in the X model), the MOVE model may be more widely applicable. Further research will be needed before definitive conclusions can be drawn to propose a new mesopic model for general use.


Table 4. x-values for the MOVE model as a function of photopic luminance (columns, cd/m²) and S/P ratio (rows).

S/P    0.01    0.03    0.1     0.3     1       3       10
0.25   –       0.0000  0.3035  0.4874  0.6645  0.8141  0.9705
0.35   –       0.0625  0.3161  0.4930  0.6672  0.8152  0.9707
0.45   –       0.1034  0.3268  0.4983  0.6697  0.8164  0.9708
0.55   –       0.1283  0.3362  0.5032  0.6722  0.8175  0.9710
0.65   –       0.1467  0.3445  0.5078  0.6745  0.8186  0.9711
0.75   –       0.1616  0.3521  0.5122  0.6768  0.8197  0.9713
0.85   0.0000  0.1742  0.3589  0.5162  0.6790  0.8207  0.9714
0.95   0.0087  0.1851  0.3652  0.5201  0.6811  0.8217  0.9716
1.05   0.0243  0.1947  0.3711  0.5238  0.6832  0.8227  0.9717
1.15   0.0376  0.2034  0.3765  0.5272  0.6851  0.8237  0.9718
1.25   0.0492  0.2113  0.3817  0.5306  0.6871  0.8247  0.9720
1.35   0.0595  0.2186  0.3865  0.5338  0.6889  0.8256  0.9721
1.45   0.0688  0.2253  0.3910  0.5369  0.6907  0.8266  0.9723
1.55   0.0772  0.2315  0.3954  0.5398  0.6925  0.8275  0.9724
1.65   0.0849  0.2374  0.3995  0.5427  0.6942  0.8284  0.9725
1.75   0.0921  0.2429  0.4034  0.5454  0.6959  0.8292  0.9727
1.85   0.0988  0.2481  0.4071  0.5480  0.6975  0.8301  0.9728
1.95   0.1050  0.2529  0.4107  0.5506  0.6991  0.8310  0.9729
2.05   0.1109  0.2576  0.4142  0.5531  0.7006  0.8318  0.9730
2.15   0.1165  0.2620  0.4175  0.5554  0.7022  0.8326  0.9732
2.25   0.1217  0.2662  0.4206  0.5578  0.7036  0.8334  0.9733
2.35   0.1267  0.2702  0.4237  0.5600  0.7051  0.8342  0.9734
2.45   0.1315  0.2741  0.4267  0.5622  0.7065  0.8350  0.9735
2.55   0.1360  0.2778  0.4296  0.5643  0.7079  0.8357  0.9737
2.65   0.1403  0.2814  0.4323  0.5664  0.7092  0.8365  0.9738
2.75   0.1445  0.2848  0.4350  0.5684  0.7106  0.8372  0.9739

Table 5. Mesopic luminance (cd/m²) of the MOVE model as a function of photopic luminance (columns, cd/m²) and S/P ratio (rows).

S/P    0.01    0.03    0.1     0.3     1       3       10
0.25   0.0025  0.0075  0.0640  0.2331  0.8735  2.8108  9.9095
0.35   0.0035  0.0133  0.0698  0.2430  0.8914  2.8372  9.9220
0.45   0.0045  0.0172  0.0751  0.2525  0.9090  2.8632  9.9344
0.55   0.0055  0.0201  0.0801  0.2616  0.9262  2.8888  9.9466
0.65   0.0065  0.0226  0.0848  0.2706  0.9431  2.9141  9.9587
0.75   0.0075  0.0249  0.0894  0.2792  0.9597  2.9391  9.9706
0.85   0.0085  0.0270  0.0937  0.2877  0.9760  2.9637  9.9825
0.95   0.0095  0.0290  0.0979  0.2959  0.9921  2.9880  9.9942
1.05   0.0105  0.0309  0.1020  0.3040  1.0079  3.0120  10.0058
1.15   0.0114  0.0328  0.1060  0.3119  1.0234  3.0356  10.0173
1.25   0.0122  0.0345  0.1099  0.3197  1.0387  3.0590  10.0286
1.35   0.0130  0.0362  0.1136  0.3273  1.0538  3.0822  10.0399
1.45   0.0138  0.0378  0.1173  0.3348  1.0686  3.1050  10.0510
1.55   0.0146  0.0394  0.1209  0.3421  1.0833  3.1276  10.0621
1.65   0.0153  0.0410  0.1245  0.3493  1.0978  3.1499  10.0730
1.75   0.0160  0.0425  0.1280  0.3565  1.1121  3.1720  10.0838
1.85   0.0167  0.0440  0.1314  0.3635  1.1262  3.1939  10.0945
1.95   0.0174  0.0455  0.1348  0.3704  1.1401  3.2155  10.1051
2.05   0.0180  0.0469  0.1381  0.3772  1.1539  3.2368  10.1156
2.15   0.0187  0.0483  0.1413  0.3840  1.1675  3.2580  10.1260
2.25   0.0193  0.0497  0.1445  0.3906  1.1810  3.2789  10.1363
2.35   0.0199  0.0511  0.1477  0.3972  1.1943  3.2997  10.1466
2.45   0.0205  0.0524  0.1509  0.4037  1.2075  3.3202  10.1567
2.55   0.0211  0.0538  0.1539  0.4101  1.2205  3.3405  10.1667
2.65   0.0217  0.0551  0.1570  0.4165  1.2334  3.3606  10.1766
2.75   0.0223  0.0564  0.1600  0.4228  1.2462  3.3806  10.1865
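As an indication of how the tabulated values might be used in practice, the short Python sketch below interpolates a small excerpt of Table 4 to obtain x for a given S/P ratio and photopic luminance. The bilinear interpolation in S/P and log10(luminance) is an assumed convenience for illustration only; the MOVE MATLAB module, which determines x iteratively, should be used for actual calculations.

    import numpy as np

    # Excerpt of Table 4: x as a function of photopic luminance (cd/m2) and S/P ratio.
    LP = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
    SP = np.array([0.65, 1.35, 2.05, 2.75])
    X = np.array([[0.3445, 0.5078, 0.6745, 0.8186, 0.9711],   # S/P = 0.65
                  [0.3865, 0.5338, 0.6889, 0.8256, 0.9721],   # S/P = 1.35
                  [0.4142, 0.5531, 0.7006, 0.8318, 0.9730],   # S/P = 2.05
                  [0.4350, 0.5684, 0.7106, 0.8372, 0.9739]])  # S/P = 2.75

    def x_lookup(sp, lp):
        # Interpolate along log10(luminance) within each S/P row, then across S/P.
        per_row = [np.interp(np.log10(lp), np.log10(LP), row) for row in X]
        return float(np.interp(sp, SP, per_row))

    # Example: for S/P = 0.65 (a typical high-pressure sodium value) at 0.3 cd/m2,
    # x_lookup(0.65, 0.3) returns about 0.51, i.e. roughly equal photopic and
    # scotopic weighting at that level.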


The practical model is recommended as a result of the MOVE project for the task of night-time driving, and is proposed to the CIE for consideration as the basis for a task-based system of mesopic photometry. The MOVE model will be integrated into international standardisation work through CIE Technical Committee TC1-58 ‘Visual Performance in the Mesopic Region’.

5.2. Applicability of the model

The practical (i.e. recommended) MOVE model is applicable for all three sub-tasks studied within this project, and for the general task of night-time driving, in situations where the background and target both have fairly broad spectral power distributions. This encompasses most situations that will be encountered in practice, including situations involving LED sources. It should therefore be considered for implementation by highways agencies, road lighting designers, lighting manufacturers, regulatory authorities and all other organisations and users who may be working in the mesopic domain.

For situations where on-axis viewing of relatively small targets (< 2°) is critical, it has been shown in the project that the V(λ) function provides an acceptably good prediction of task performance regardless of the background level, except at 0.01 cd/m² or for quasi-monochromatic stimuli (neither of which is likely to be encountered in a practical night-time driving situation). This difference in visual performance as a function of target eccentricity may have significant implications for road lighting design in the future. For example, different specification and measurement criteria may be necessary in situations where there is a different weighting of on-axis and peripheral visual information to process. These points will require careful consideration within the various specification organisations (highways agencies etc.).

The practical model is not suited to situations where it is critical that the activity of the chromatic mechanisms is taken into account, e.g. when the colourfulness (chromatic saturation) of the target is especially high, or when the target has a very narrow spectral power distribution. In such cases the chromatic model based on the quasi-monochromatic methods may be more appropriate. For such situations, however, it must be recognised that V(λ) is similarly a poor predictor of performance in the photopic region, and a completely new system of photometry may be necessary. This is beyond the scope of this project, but is being investigated within the CIE.

The data gathered in the MOVE project represent a significant resource. The modelling undertaken as part of the project has attempted to characterise their main features, particularly those relevant to providing guidance to lighting engineers for applications in night-time driving. However, it is recommended that further analysis of the data be undertaken by other researchers to extend and/or qualify the results presented here, so that the full benefit of the measurement data may be realised.


References

Commission Internationale de l'Éclairage (1989). Mesopic photometry: history, special problems and practical solutions. CIE 81. CIE Central Bureau.

Commission Internationale de l'Éclairage (2001). Testing of supplementary systems of photometry. CIE 141. CIE Central Bureau.

Eloholma M, Halonen L (2003). New approach for the development of mesopic dimensioning scales. Publication CIE x025, Proceedings of the CIE Expert Symposium 2002 ‘Temporal and Spatial Aspects of Light and Colour Perception and Measurement’.

Eloholma M, Liesiö M, Halonen L, Walkey H, Goodman T, Alferdinck J, Freiding A, Bodrogi P, Várady G (2005). Mesopic models - from brightness matching to visual performance of night-time driving: a review. Lighting Res. Technol., in press.

Goodman T (1997). Workshop: making measurements in the mesopic region. Proceedings of the NPL-CIE-UK Conference ‘Visual Scales: Photometric and Colorimetric Aspects’. CIE x012. Vienna: CIE Central Bureau.

Rea MS, Bullough JD, Freyssinier-Nova JP, Bierman A (2004). A proposed unified system of photometry. Lighting Res. Technol. 36(2): 85-111.


Contact information of MOVE project partners

Helsinki University of Technology (HUT), Lighting Laboratory
P.O. Box 3000, FIN-02015 HUT, Finland. Fax +358 9 4514982
Liisa Halonen, tel. +358 9 4512418, email [email protected]
Marjukka Eloholma, tel. +358 9 4514981, email [email protected]

TNO Human Factors Research Institute (TNO)
P.O. Box 23, 3769 ZG Soesterberg, Netherlands. Fax +31 34 6353977
Johan Alferdinck, tel. +31 346 356 311, email [email protected]

City University (CU), Applied Vision Research Center
Tait Building, Northampton Square, London EC1V 0HB, UK. Fax +44 20 7040 8355
John Barbur, tel. +44 20 7040 4307, email [email protected]
Helen Walkey, tel. +44 20 7040 0262, email [email protected]

Darmstadt University of Technology (TUD), Department of Lighting Technology
Hochschulstrasse 4a, D-64289 Darmstadt, Germany. Fax +49 6151 165468
Achim Freiding, tel. +49 6151 165342, email [email protected]

National Physical Laboratory (NPL), Centre for Optical and Analytical Measurement
Queens Road, Teddington, Middlesex TW11 0LW, UK. Fax +44 208 9436283
Teresa Goodman, tel. +44 20 8943 6863, email [email protected]
Alistair Forbes, tel. +44 20 8943 6348, email [email protected]

University of Veszprém (UV), Department of Image Processing and Neurocomputing
Egyetem u. 10, H-8201 Veszprém, Hungary. Fax +36 88 423466
János Schanda, tel. +36 88 422022, email [email protected]
Péter Bodrogi, tel. +36 88 422022, email [email protected]