master edm freyermuth oliver 2012


Development of a Dead Time Measurement System for the Compass Experiment using FPGA Technology

Master's thesis in physics by

Oliver Freyermuth

prepared at the Physikalisches Institut

submitted to the Mathematisch-Naturwissenschaftliche Fakultät of the Rheinische Friedrich-Wilhelms-Universität Bonn

October

Supervision: Prof. Dr. Jörg Pretz


First referee:  Prof. Dr. Jörg Pretz

Second referee: Prof. Dr. Friedrich Klein

I hereby thank Prof. Jörg Pretz and Prof. Friedrich Klein of the Compass trigger group for their support in finding the topic and during the thesis. Furthermore, I would like to thank Dr. Jürgen Hannappel for his continuous and very helpful advice, Daniel Hammann for support with the DAQ integration in Bonn, and the BGO-OD group for the possibility to work in their lab and test the setup during several beam times in Bonn.

My thanks also go to the whole Compass trigger group, especially Dr. Jens Barth for his introduction to the group at CERN and the experimental setup, advice concerning observed structures and many detailed explanations. Further thanks go to Johannes Bernhard, who performed recabling and exchanged parts of the setup at CERN, in addition to sharing his experience and providing advice.


    Abstract

The Compass experiment uses a multi-purpose two-stage spectrometer located at the CERN SPS accelerator in Geneva. Its purpose is the study of hadron structure and spectroscopy with high-intensity muon and hadron beams, resulting in high trigger rates and a large beam-halo component. Particles from the beam halo cause unwanted triggers, which have to be suppressed using a veto system. Due to random coincidences, this veto on the trigger also suppresses wanted events and thus reduces both the incoming flux and the absolute cross section by introducing a dead time into the system.

As part of this thesis, the dead time and its time structure are simulated and measured on different time scales. To do so, a simulation software and a firmware for the FPGA-based Compass veto coincidence board are developed. The new firmware allows measuring variations of the duty cycle of arbitrary signals over longer time scales, such as the time in spill, while at the same time recording periodicities of the signal trains, or the correlation between them, on timescales on the order of nanoseconds.

Care is taken to allow for maximum portability with regard to future hardware evolution, ease of use and simplicity. A full set of data is included in Compass datasets to allow for detailed analysis of time structures and their evolution over longer timescales.

Expected and unexpected structures found in the time evolution of the rates are measured and discussed. A special focus is given to the small timescales, to check for substructures which may result not only from physics but also from bad timing or reflections. Dead time effects are seen and discussed, and a non-invasive, web-based live visualization of averaged parameters, recorded independently of the DAQ of the experiment, is implemented to allow for immediate quality checks while data taking is in progress.


    Contents

Introduction
    Deep Inelastic Scattering
    Compass Beam
    Detector Setup at Compass
    Trigger System at Compass
    Deeply Virtual Compton Scattering
    Structure of Dead Time at Compass
    Measuring Time Structure
        Duty Cycle
        Autocorrelation

Simulation
    Schematic Process
    Implemented Functionality
    Results for Crosschecking
    GUI Development
    Simulations with the GUI

Firmware Development
    Requirements
    Hardware
    Methods of Measurement
        Duty Cycle
        Autocorrelation
    Existing Firmware
    New Implementation
        Duty Cycle Implementation
        Autocorrelation Implementation
        Sweeping Logic
        Modular Implementation
        The Slimfast Multioption Counter
        Cabling
    Stages of Development
        Testing Phases
    Characterization


Bibliography

Declaration of Authorship


Introduction

The idea that matter is composed of elementary particles dates back to ancient Greek philosophy, even before suitable methods of measurement were thought of. Up to the end of the 19th century, the theory of a fundamental particle “atomos” (from Greek: indivisible) was widely accepted and could explain stoichiometric relations in chemistry. Using this approach, John Dalton created the atomic theory (described in [Dal]), which was based on the idea that all chemical reactions can be described as recombinations, separations or rearrangements of identical, indivisible particles.

The principles of his atomic theory still hold today for chemical reactions, but it was discovered by J. J. Thomson, while working with cathode rays, that much lighter particles, the negatively charged electrons, exist, and he concluded that these are part of every atom. His discovery triggered experiments with newly developed measurement devices and was the foundation for new models. Soon it was possible to perform mass spectroscopy of ions, which led to the discovery of further particles.

During the following years, further experiments led to the discovery of a complete “particle zoo”, and it was inevitable that an underlying systematic structure was to be found, describing these as combinations of a smaller number of subatomic, fundamental particles to explain the observed reactions.

The results of deep inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) led to the assumption that nucleons are built of smaller constituents, the quarks: the Standard Model was born. It describes the electromagnetic, weak and strong nuclear interactions governing the universe of the fundamental particles: quarks and leptons as the constituents of matter, gauge bosons as interaction particles, and the Higgs.

A simple “summation” of the properties of the constituents fails at describing some properties of the corresponding atomic particle. The puzzle is still missing many experimental pieces.

One of those crucial pieces is the spin, which is a type of angular momentum and a property of each elementary particle. However, although the spin of the atomic particles can be measured and a simple model for its distribution among the different quarks works fine in theory, measurements of the EMC (European Muon Collaboration) showed that the major fraction of the spin of a nucleon is in fact not contributed by the quark spin components. Other candidates to carry the spin are available in the model: gluons are needed to describe the interaction between the quarks, confining them to form a subatomic particle. The assumption is that the spin of an atomic particle such as the proton, which has a spin of 1/2, can be decomposed into different components:

1/2 = (1/2)∆Σ + Lq + ∆G + Lg

The measurements of the EMC [J.] gave a result of ∆Σ = 0.12 ± 0.17 for the fraction of the spin carried by the quark spin component, which necessarily means that, assuming the model is correct, the spin has to be contained in the orbital angular momenta of quarks (Lq) or gluons (Lg), the spin contributed by the gluon spins (∆G), or a combination of these.

A necessity for such measurements is polarization of both beam and target in a deep inelastic scattering experiment, in order to determine asymmetries resulting from the carried spin fractions. In addition, the reaction channels which provide the necessary information exclusively show very low cross sections.

Figure: Logo of the Compass experiment at CERN

To allow for such measurements, the COmmon Muon and Proton Apparatus for Structure and Spectroscopy, short Compass, was born [The]. It uses a two-stage multipurpose spectrometer located at CERN. The beam is taken from the SPS, and it is possible to switch between different beam particles using a target in the beamline before the experimental target itself. The original mission of Compass was to measure ∆G, the fraction of spin carried by the gluons.

To accomplish this measurement, an extensive Deep Inelastic Scattering setup is needed. The basics of this kind of measurement are described in the section on Deep Inelastic Scattering. It is feasible to use beam and target polarization for asymmetry measurements. To identify the reaction channels, PID (Particle IDentification) is needed. The spectrometer setup offers these possibilities, as will be discussed in the section on the detector setup.

This thesis will focus on the trigger system of the experiment. To allow for the selection of wanted events and to suppress unwanted interactions, the trigger system has to perform online cuts on coincidences or charge deposit without introducing an unwanted bias. At the same time, the suppression of unwanted events, which is done with a veto condition, should not unnecessarily suppress wanted events; this cannot be fully prevented due to the introduction of a dead time into the system. This specific part of the setup is the subject of the analysis performed in this thesis and will be discussed in much more detail in later chapters.

Conseil Européen pour la Recherche Nucléaire, the European Organization for Nuclear Research, operates the world's largest particle physics laboratory in northwest Geneva, Switzerland. The term “CERN” is also used when referring to the laboratory complex itself.

Super Proton Synchrotron, a synchrotron-type accelerator at CERN.


Deep Inelastic Scattering

Figure: Deep inelastic scattering. The specific particles are only chosen for this example, but the process is a general one. The incoming lepton l scatters on a quark (u in this example) of the nucleon N, involving the respective four-momenta of the incoming lepton k, the outgoing lepton k′ and the nucleon P.

Deep Inelastic Scattering (DIS) is a reaction which gives insight into subatomic structures.

Its resolution is given by the de Broglie wavelength λ = h/p and is as such determined by the four-momentum transfer via the virtual photon γ* and limited by the momentum of the beam particles.

In general, deep inelastic scattering describes the interaction of a lepton with a subatomic particle in a nucleon (cf. the figure above). The cross section of this reaction, in comparison to other processes, grows with the energy of the lepton beam hitting a nucleon target.
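As a rough numerical illustration of this resolution scale (the 160 GeV/c beam momentum is the Compass muon beam value quoted later in this chapter; the helper function is purely illustrative and not part of the thesis software):

```python
# Illustrative estimate of the DIS resolution scale, lambda = h/p.
HC = 1.23984193e-6  # h*c in eV*m

def de_broglie_wavelength_m(p_eV_per_c):
    """De Broglie wavelength in metres for a momentum given in eV/c
    (ultrarelativistic, so p*c equals the momentum in eV)."""
    return HC / p_eV_per_c

lam = de_broglie_wavelength_m(160e9)  # ~7.7e-18 m, i.e. ~0.008 fm
# Far below the ~1 fm nucleon radius: sub-nucleon structure is resolvable.
```

The point of the estimate is only that the wavelength at such beam momenta lies orders of magnitude below the nucleon radius.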

q² = (k − k′)²


Characterization is possible with the Bjorken scaling variable xB, which is a measure of the inelasticity of the process: it is defined by the fraction of the nucleon momentum the quark inside the nucleon was carrying before scattering took place. For that reason, the measurement of xB allows one to characterize the distribution of quark momenta within the nucleon.
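In standard DIS notation (with P the nucleon four-momentum and q = k − k′ the four-momentum of the virtual photon), this verbal definition corresponds to

```latex
x_B = \frac{Q^2}{2\,P \cdot q}\,, \qquad Q^2 = -q^2 .
```

In the parton model, xB is interpreted as the fraction of the nucleon momentum carried by the struck quark.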

The cross section is proportional to the squared charge and the momentum of the quark q which has been hit, but if a polarization asymmetry can be determined, this can be used to measure the momentum distribution of quarks with different spin separately. For that reason, deep inelastic scattering experiments allow characterizing the momentum distribution of the subatomic particles composing a nucleon and are necessary in the search for the distribution of the nucleon spin.

Figure: “Golden Channel” of Photon Gluon Fusion in Deep Inelastic Scattering (illustration based on [Ded])

The Compass experiment's main mission was the measurement of the fraction of the spin carried by the gluons, so a process which involves a lepton-gluon interaction was needed. As gluons are neither electrically nor weakly charged, photon-gluon fusion comes to mind: assuming that the polarized beam particle emits a virtual photon, this photon can interact with a gluon from the polarized nucleon, generating a quark-antiquark pair.

The experiment allows for very good particle ID of K and π, so an exclusively reconstructible process should involve strangeness in the final state to simplify detection. One such process, the so-called “golden channel” [Ded], which received this name due to its high signal-to-background ratio, can be seen in the figure.

Compass Beam

Figure: Maximum parity violation in π decay

The beam used in the Compass experiment is generated by a primary target upstream of the experiment's beamline from the SPS (Super Proton Synchrotron); the target can be chosen so as to switch between different particle beams. The original setup used a high-energy (≈ 160 GeV) muon beam with a polarization of ≈ 0.8. The polarization is achieved, although the initial proton beam from the SPS is not polarized, by utilizing maximum parity violation, as can be seen in the figure: the proton beam hits a primary target and mainly π+ are produced. These are boosted in the direction of the initial proton beam, so their major, mass-carrying decay product µ+ will also be boosted in the direction of the beam. To conserve angular momentum, the decay products µ+ and νµ must have opposite spin, as the π has spin 0. The decay is a weak interaction and for that reason subject to parity violation: only left-handed particles or right-handed antiparticles take part in weak interactions. Due to the strong boost in the forward direction, this fixes the helicity, and thus the spin, for νµ and µ+, so the resulting beam is effectively polarized by maximum parity violation.

The polarization is not perfect, especially due to the broadness of the momentum range, the K+ admixture and the angular spread of the beam, but this method is by far easier than creating a polarized electron beam and still allows for great flexibility.

Detector Setup at Compass

Figure: Schematic setup of the Compass spectrometer for muon beam

The Compass experiment performs measurements with a two-stage spectrometer, consisting of one part resolving small angles and a secondary part with a similar setup to resolve tracks at large angles. Both parts of the spectrometer are equipped with detectors for tracking and PID (Particle IDentification). The primary target, the secondary (experimental) target, and the beamline parameters can be changed or tuned without long downtimes, which allows changing and extending this setup for specific measurements.

Starting from the beamline, the first detector hit by the incoming particles is the BMS (Beam Momentum Station), which allows determining the momentum of the beam particles. The particles are then guided along a long beamline [The] before the Compass polarized target is hit (or missed). A long two-stage spectrometer setup allows for tracking and particle identification for both small and large angles [The]. A block schematic illustrating this basic setup can be seen in the figure above.

The detectors themselves have been selected from a wide variety of technologies according to their specific task. Scintillating fibres are used mainly for timing and tracking “close to the beam”, where high rates are to be expected. For increased angular resolution, the scintillating fibres (short: “SciFi”) are accompanied by GEM (Gas Electron Multiplier) detectors. Furthermore, silicon strip detectors and Micromegas (Micromesh Gaseous Structure) detectors allow for a high tracking resolution even close to the target area.

Larger angles require a larger detector area, which allows for the usage of less expensive technologies with lower angular resolution. For that reason, MWPCs (MultiWire Proportional Chambers) and large drift chambers are used further downstream. The major tracking setup is completed by straw tubes in several layers, with a resolution comparable to that of the drift chambers.

To reconstruct the processes described above, good PID is needed. This is done with a dedicated RICH detector (Ring Imaging CHerenkov): Cherenkov light is produced when a charged particle moves faster than the speed of light in the medium it is passing through, polarizing neighbouring atoms. The angle at which the light is emitted is directly related to the speed of the charged particle, so measuring the angle amounts to measuring the velocity. Combining this with momentum information, PID is possible. A RICH measures the angle of the Cherenkov light, which is emitted as a cone around the particle track, by using mirrors to reflect the light onto highly sensitive detectors. In software, the ring, and thus the angle, can be reconstructed, which finally allows identifying the initial particle.
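The angle-to-velocity relation described here can be sketched as follows (the refractive index below is a made-up illustrative value, not that of the Compass RICH radiator):

```python
import math

def beta_from_cherenkov(theta_c_rad, n):
    """Particle velocity in units of c from the Cherenkov cone angle,
    using cos(theta_c) = 1 / (n * beta)."""
    return 1.0 / (n * math.cos(theta_c_rad))

# At threshold the cone collapses (theta_c = 0) and beta = 1/n;
# larger cone angles correspond to faster particles.
n_radiator = 1.0015  # hypothetical gas radiator refractive index
beta_min = beta_from_cherenkov(0.0, n_radiator)
```

Combined with the momentum measured by the trackers and magnets, the velocity then pins down the particle mass.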

For PID, calorimetry is also essential. Calorimetry allows determining the energy a particle carries (or a fraction of it), for example by guiding it through a known material and measuring the light produced. In general, an electromagnetic calorimeter is used to determine the energy of e± and γ, while a hadronic calorimeter behind it can in many cases stop highly energetic hadrons and determine their energy.

Figure: Schematic setup of an experiment allowing for particle identification.

Full tracks of particles through a schematic setup could look as shown in the figure. One important piece which has not been mentioned up to now is the use of known and adjustable magnetic fields to bend the tracks of charged particles. This is needed to determine their charge and momentum for charged-particle identification.

With the use of magnets, the setup shows a different behaviour for different particles: in general, the ionizing particles (charged leptons and hadrons) can be seen by the tracking detectors. The combination of tracking detectors and magnets allows determining the charge and momentum of the charged particles by measuring the angular correlation; the identification of uncharged particles is also possible. The RICH allows determining the velocity of charged particles and covers a momentum domain of some GeV/c for π± and K± separation, while the calorimeters allow for determination of the carried energy. It is important to realize that misidentification is always possible and the given example is idealized.

At Compass, the identification of muons is also done by checking the tracks remaining after a lead block with another tracking detector. This so-called µ-wall should only see muons, as all other particles would have been stopped by the wall. Neutrinos and possibly other near-massless non-ionizing particles might pass through, but these are not seen by the detector either.

The setup shown in the figure above is present twice at Compass, as it is a two-stage spectrometer (the expensive and complex RICH is missing in the second half). The second magnet has a five times higher magnetic field, to bend the tracks of particles with higher momenta, and the distances between the detector arrays are larger in the small-angle spectrometer.

This implies that the mass which the particles scattered into small angles have to pass through must not be too large, as otherwise they might be stopped too early in the spectrometer. This limits the mass of the initial tracking detectors close to the target and requires holes in the centres of all detectors with high mass, as well as sufficient beam alignment.

Figure: A snapshot of the complete detector setup of the Compass experiment. Details of the setup are changed according to the current programme, including repositioning and changing detectors. The BMS is not included in this image, as it is too far upstream of the beamline.

A complete setup is shown in the figure above. As can be seen, the experiment is tens of metres long, which implies high requirements for beam and detector alignment.

Trigger System at Compass

In all modern particle physics experiments, it is impossible to record a full continuous stream of data from all detectors. For that reason, a “live” pre-selection of the events to record is needed. This pre-selection is performed by the trigger, which is characterized by its efficiency and purity.

The efficiency of a trigger system describes its ability to generate a pulse whenever a good event is seen, i.e. not to miss such good events. The definition of a good event is given by the physics one wants to measure and should only be influenced by a known or minimal bias: no unwanted weighting of a specific subtype of events or favouring of an energy range should occur.

The purity reflects the number of “bad events” (e.g. an event containing only beam-halo particles but no target interaction which also causes a trigger) relative to the number of good events. One can thus define the efficiency as the percentage of good events which caused a trigger, and the purity as the percentage of triggers caused by good events.
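These two percentage definitions can be written down directly; the event counts below are invented for illustration only:

```python
def trigger_efficiency(good_triggered, good_total):
    """Fraction of good events that caused a trigger."""
    return good_triggered / good_total

def trigger_purity(good_triggered, triggers_total):
    """Fraction of all triggers that were caused by good events."""
    return good_triggered / triggers_total

# Hypothetical counts: 10000 good events, of which 9200 triggered;
# 11500 triggers in total (the excess caused by halo particles).
eff = trigger_efficiency(9200, 10000)  # 0.92
pur = trigger_purity(9200, 11500)      # 0.80
```

A veto system raises the purity by removing halo-induced triggers, at the price of the efficiency loss discussed next.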

One speciality at Compass is the high number of halo particles which also hit the hodoscopes used for triggering, thus causing false triggers. For that reason, a veto system suppressing such triggers is necessary to increase the purity of the trigger. This also affects the efficiency of the complete system due to a dead time (dead meaning that no further triggers are accepted) caused by the readout of all detectors. Due to hardware limitations, the veto itself also generates a dead time; this dead time is the subject of this thesis and will be discussed in detail in the forthcoming chapters.

Figure: A snapshot of the complete detector setup of the Compass experiment, with trigger hodoscopes added, in a setup used with muon beam.

Figure: Schematic illustrating the necessity and function of a veto on trigger pulses.

As the trigger logic needs to accept high rates and at the same time provide fast and accurate timing information, several dedicated detectors have been added for this purpose, as can be seen in the figure above. The existing detectors are also used in the logic to allow for specific online selections, such as requiring a minimum energy deposit in the calorimeters or “time of flight” coincidences within a given time window. Such selections are made in hardware, using cables and NIM logic or FPGAs, and are provided by the trigger group. During data taking, the different triggers can be selected and pre-scaled to create a trigger mix specific to the requirements of the measurement.

Deeply Virtual Compton Scattering

Planned future measurements at Compass include the further investigation of nucleon structure. The framework of Generalized Parton Distributions (GPDs) offers a three-dimensional picture of the quarks building up the nucleon. The experimental handle needed to measure GPDs is given by processes within Deep Inelastic Scattering, namely DVCS (Deeply Virtual Compton Scattering) and DVMP (Deeply Virtual Meson Production). The Compass experiment is well-suited for DVCS measurements after some specific extensions and can cover a large kinematic range from xB ≈ 0.005 up to xB ≈ 0.1, which is not explorable by any other comparable existing or planned facility in the near future [FtCc].

Figure: The major components of the amplitude T of the p → p γ processes: the Bethe Heitler diagrams and the Deeply Virtual Compton Scattering diagram (involving the GPDs).

DVCS is a process related to Compton scattering: in the latter, a photon hits an electron, transfers energy, and a photon with a different wavelength results. In DVCS, a virtual photon emitted by a lepton interacts with a parton from a nucleon, e.g. a proton p. Again, a real photon is produced, which gives an experimental handle on the energy transfer that took place. For a DVCS process, the proton is also found in the final state; it gains momentum and can be observed as a recoil proton.

This specifies the needed extensions to the Compass setup for the DVCS programme: an additional electromagnetic calorimeter (ECAL) to increase the angular acceptance for photons, and an RPD (Recoil Proton Detector) to detect recoil protons emitted from the experimental target.

The major problem in a DVCS measurement is the large admixture of other processes which show similar final states. Apart from wrongly reconstructed π0 decays (if the second photon is missed, they may be identified as a DVCS final state), the most important of these contributions is described by the Bethe Heitler formula, which covers Bremsstrahlung processes showing the same final state, with a photon being emitted by the lepton before or after the interaction with a nucleon through the exchange of a virtual photon. This leads to the complete amplitude T visualized in the figure.

Cross sections with opposite beam charge and spin can be taken with the Compass setup. As can be shown [FtCc], the contribution of the Bethe Heitler processes cancels out when using the difference of these cross sections. The sum of these cross sections contains the Bethe Heitler cross sections, but allows singling out the imaginary part of the interference term between Bethe Heitler and DVCS. The Bethe Heitler cross section is theoretically well described and can for that reason be subtracted if the relative contribution is not too large.


The observable for this measurement with an unpolarized proton target is no longer covered by an asymmetry measurement, as was the case for the spin measurements; instead, the absolute cross sections are needed. This means that the luminosity needs to be known. The determination of absolute cross sections at Compass is not straightforward and is influenced by systematic errors of not (yet) well-measurable size. Apart from small variations of the reconstruction efficiency (≈ 1.8 %), systematic errors from the beam flux determination (≈ 5 %), a negligible contribution from the DAQ dead time, and the amount of target material (which gives an absolute normalization), an unclear systematic error is introduced by the veto dead time losses [The]. It depends on the beam intensity, which differs between µ− and µ+ by a sizeable factor (due to the initial proton beam from the SPS being used), and varies both over the time in spill and from spill to spill.

The measurement of this relative dead time and its evolution will be the focus of this thesis, as no direct measurement is presently possible with the devices available at Compass. An additional topic will be to check for structures in veto and trigger, to possibly allow for a reduction of these relative dead time losses. The methods of measurement used, their implementation and the results will be discussed in the following chapters.

Structure of Dead Time at Compass

The time structure of the dead time is dominated by the time structure of the beam itself, i.e. by the time-dependent rate. This structure is also seen in the rate of single discriminated pulses of a detector which is exposed to parts of the main beam without any selective shielding or gating logic applied, which is true for the veto and for the raw trigger without veto condition.

As such, characterizing the structure of the veto and trigger rates over time also relates to the time structure of the beam itself, but includes additional effects of the detectors, cabling and electronics.

In general, two classes of dead time are possible: paralyzable and non-paralyzable behaviour. For a paralyzable detector, the dead time can be prolonged by incoming particles while the detector is still dead, i.e. during its recovery time. An example would be a gas chamber, as the generated ions need time to drift to the electrodes and the entry of another ionizing particle into the chamber is in general not prohibited. In the extreme case of high rates, the count rate of a paralyzable detector even decreases with increasing incoming flux.
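The two standard rate models, not spelled out in the text but implied by the described behaviour, can be sketched for a true rate n and a dead time τ per event:

```python
import math

def measured_rate_nonparalyzable(n, tau):
    """Observed rate for true rate n and fixed dead time tau: each
    recorded count blocks the system for tau and cannot be prolonged."""
    return n / (1.0 + n * tau)

def measured_rate_paralyzable(n, tau):
    """Observed rate when every incoming particle, recorded or not,
    restarts the dead period."""
    return n * math.exp(-n * tau)

# The non-paralyzable rate saturates at 1/tau for large n, while the
# paralyzable rate peaks at n = 1/tau and then decreases, matching the
# extreme high-rate behaviour described above.
```

These are the textbook formulas for the two classes; which one applies depends on whether the dead period can be extended while the system is already dead.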

    Figure  ..:  Generation of the veto gate  in the Veto Coincidence Board

    Page     Oliver Freyermuth   Dead Time Measurement at Compass


The veto system at Compass uses a gate of fixed width generated with digital logic and is built as a non-paralyzable system. The channels of the veto detectors are discriminated separately and joined via a logical OR(). On the Veto Coincidence Board, the resulting logic value is inverted and connected to the data input of a flip-flop. The logic trigger pulse is generated separately with the trigger-matrix and connected to the clock input of the same flip-flop. As such, the state of the inverted veto is stored in the flip-flop on the leading edge of a trigger pulse. To prevent re-triggering, a “high” (logic one) on the flip-flop’s output deactivates the clock input via a clock-enable pin, thus forcing the system to be non-paralyzable. The flip-flop is reset via an adjustable delay, which defines the length of the logical pulses on the trigger output.

The choice to generate the veto gate after the logical OR() of all channels in dedicated logic was crucial to reduce the dead time of the system to an acceptable value [Tri]. Taking this into account leads to some expectations for the autocorrelation measurements: the trigger channel is expected to show a dead time after each pulse, while the veto itself should not. Furthermore, the length of pulses on the trigger should always be the same, while the length of veto pulses may vary.
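The described gating scheme can be sketched behaviourally (illustrative Python, not the logic of the board; function and variable names are invented):

```python
def gated_triggers(trigger_edges, veto_intervals, gate_length):
    """Model of the described flip-flop: on a trigger leading edge, the inverted
    veto state is stored; a stored 'one' yields an output pulse of fixed
    gate_length, during which further trigger edges are ignored (clock enable),
    making the system non-paralyzable."""
    def veto_active(t):
        return any(start <= t < end for start, end in veto_intervals)

    accepted = []
    busy_until = float("-inf")
    for t in sorted(trigger_edges):
        if t < busy_until:          # clock input disabled while output is high
            continue
        if not veto_active(t):      # inverted veto sampled on the leading edge
            accepted.append(t)
            busy_until = t + gate_length
    return accepted

# triggers at 0, 50, 120 and 400 ns; veto active from 110 to 130 ns; 100 ns gate
print(gated_triggers([0, 50, 120, 400], [(110, 130)], 100))  # [0, 400]
```

The trigger at 50 ns is lost to the gate of the first pulse, and the trigger at 120 ns is vetoed; a vetoed trigger does not start a gate, so it produces no dead time of its own.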

    .. Measuring Time Structure

To measure the time structure of a digital signal sequence, one in general has to identify the needed time scales and resolution in order to select the quantities that should be measured and the appropriate method. As outlined in section ., the structures caused by the RF of the synchrotron can be seen at a resolution in the order of several nanoseconds. As such, a high resolution for periodicities is favourable.

On the other hand, the spill structure as a whole is also expected to vary in rate, so long-term sampling is also of interest. The expected rate is very high, which means that sampling each single pulse with high resolution would impose very harsh constraints on the hardware. This is not possible with readily available resources (as detailed in section .) but can be overcome by measuring different quantities for the needed requirements at the same time.

    ... Duty Cycle

The duty cycle in general is defined as the time a signal is in the “high” or “on” state over the total time. This has wide application not only in electronics, but also in daily life, e.g. dimming LED-based backlights by using a high-frequency periodic signal with adjustable duty cycle (e.g. by PWM, Pulse Width Modulation).

This breaks down to (c.f. fig. .):

\[
D(\delta, T) = \frac{\delta}{T}
\]

with δ being the time the signal is high and T the total time.

(The trigger-matrix is a configurable logic to generate a trigger pulse from several coincident detector signals.)

Figure ..: Illustration of the Duty Cycle of a logic signal sequence

When describing the beam duty cycle, a more generalized definition is advantageous to focus on variations of the duty cycle, making it suitable for use in accelerator physics [Tha]:

\[
F(x, T) = 1 - \frac{\sigma_x^2}{\left\langle x(t)^2 \right\rangle_T}
        = 1 - \frac{\left\langle x(t)^2 \right\rangle_T - \left\langle x(t) \right\rangle_T^2}{\left\langle x(t)^2 \right\rangle_T}
        = \frac{\left\langle x(t) \right\rangle_T^2}{\left\langle x(t)^2 \right\rangle_T}
\]

where F denotes this generalized duty factor and ⟨·⟩_T a time average over T. The x(t) is a rate-correlated, time-dependent quantity, i.e. the measured rate or the beam intensity. The normalization results in a value of F = 100 % for constant x(t) = x₀, while D depends on the actual value of x₀. This is the case because an ideally measured δ over a time T is directly related to ⟨x(t)⟩_T. For that reason, both definitions qualitatively yield the same result, differing only by their normalization and thus their rate dependency. The dead time of a hardware trigger system is indeed rate-dependent and, in the non-paralyzable case, exactly described by the classical definition of the duty cycle D(δ, T).
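Both definitions can be checked numerically; the following Python sketch (illustrative only, not part of the thesis software) shows that the generalized quantity is 100 % for any constant rate, while D depends on the rate itself:

```python
def duty_cycle(high_time, total_time):
    # classical definition D = delta / T
    return high_time / total_time

def duty_factor(x):
    # generalized definition <x>^2 / <x^2>; equals 1 for any constant x(t) = x0
    mean = sum(x) / len(x)
    mean_sq = sum(v * v for v in x) / len(x)
    return mean * mean / mean_sq

print(duty_factor([5.0] * 100))      # 1.0, independent of the value 5.0
print(duty_factor([1.0, 0.0] * 50))  # 0.5 for a rate toggling between 1 and 0
print(duty_cycle(0.2, 1.0))          # 0.2
```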

    ... Autocorrelation

The autocorrelation describes the self-similarity of a signal train in time. Mathematically, it can be defined as:

\[
\Psi_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\, x(t+\tau)\, \mathrm{d}t
\;\overset{\tau = 0}{=}\; \left\langle x(t)^2 \right\rangle
\]

τ describes the time lag between the two compared portions of the signal sequence x(t), while T is the total measurement time, mathematically in the infinite limit. For τ = 0, the signal train is compared to itself without delay, and the average of x(t)² over the time T results.

         1   0
    1    1   0
    0    0   1

Table ..: Truth table for !XOR()

For a digital signal, the autocorrelation breaks down to the inverted exclusive OR (!XOR()) of the value and the delayed value. This can be seen from table .: the result of the !XOR() is only 1 if both inputs are of the same value. As such, the x(t) can be defined as !XOR() when a digital input is used.

Measuring the autocorrelation of a digital signal triggered by the beam corresponds to a measurement of the autocorrelation of the rate. The autocorrelation function Ψxx shows the same periodicity as the measured signal itself and as such can be averaged assuming a continuous time structure.
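For a sampled digital signal, the definition above reduces to counting agreements between the signal and its delayed copy. A short Python sketch (illustrative, assuming unit-spaced samples):

```python
def digital_autocorrelation(samples, tau):
    """Fraction of sample pairs (t, t + tau) where the digital signal agrees
    with its delayed copy, i.e. the !XOR() of value and delayed value is 1."""
    pairs = list(zip(samples, samples[tau:]))
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

# pulse train with a period of 4 samples: full agreement at a delay of one period
signal = [1, 0, 0, 0] * 50
print(digital_autocorrelation(signal, 4))  # 1.0
print(digital_autocorrelation(signal, 2))  # 0.5
```

A delay of one full period reproduces the signal exactly, so the periodicity of the signal reappears as maxima of the autocorrelation.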


    . Simulation

Simulating the effects of dead time is crucial to understand how suitable a method of measurement will be before implementing it. Furthermore, the effects are not straightforward in many cases, especially if several channels with possibly individual dead times are involved, as is the case for the Compass veto.

As such, as a first approach to the topic, a simulation software is written and the planned methods of measurement are implemented. Care is taken that all relevant variables can be varied and that the simulation speed allows a live interpretation of the results in a newly developed GUI based on ROOT and Qt.

    .. Schematic Process

generate random trigger and veto events → apply correlation, dead time effects and pulse lengths → generate TDC spectra → simulate measurements

Figure ..: Schematic of the Simulation workflow

The simulation software has to generate pseudo-pulses for the veto channels and a trigger channel. The implementation done in this thesis limits itself to the simulation of TDC data of a TDC measuring leading and trailing edges in an arbitrary time window, as this is effectively also what will be used as input for the measurement: the hardware module for measurement accepts digital pulses only and applies the measurements described in section ..

The simulated data consists of hits at (pseudo-)random times in spill with adjustable pulse lengths. The distribution of the pseudo-random times can be shaped by using a distribution function, effectively adjusting the simulated rate with time in spill.

In the algorithm, only the leading edges are generated first, not yet taking care of the pulse lengths. The next step is the application of correlation effects, which can be achieved by copying a subset of the leading edges from the veto channels to the trigger or vice versa. The same is done between the different veto channels. The amount of copied information can be changed at runtime. An additional jitter is not applied during correlation, as it is expected to be too small to show an effect in the measurements done as part of this thesis.

The next step applies the given pulse length to the leading edges and a possibly larger dead time for each pulse, e.g. caused by the discriminator. These two lengths are used as “suppression length” for following leading edges. This means that the still randomly distributed leading edges have to be sorted in time first so that pulses within the suppression length can be dropped, which means that a non-paralyzable system is assumed, as explained in section ..  At this point, an adjustable dead time is also applied to drop leading edges on each channel separately. After that, the separate veto channels are joined together similarly to the hardware OR() used at Compass: the leading edges are copied into a single large array and are again sorted in time, applying the pulse length to each leading edge, but not applying any further dead time. This means that the final veto does not have a straightforward dead time structure by itself, but is governed by the dead times of the separate channels, so a simulation with the corresponding variables is the easiest way to check for possible unexpected effects.
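The suppression and merging steps described above can be sketched as follows (illustrative Python; the thesis software is C++/ROOT and the names here are invented):

```python
def apply_suppression(edges, suppression_length):
    """Non-paralyzable behaviour: after an accepted leading edge, all edges
    within suppression_length (pulse length plus dead time) are dropped."""
    kept, blocked_until = [], float("-inf")
    for t in sorted(edges):
        if t >= blocked_until:
            kept.append(t)
            blocked_until = t + suppression_length
    return kept

def merge_veto_channels(channels, pulse_length):
    """Join the channels like the hardware OR(): merge all accepted edges and
    drop only edges falling inside another pulse, without further dead time."""
    merged = (t for channel in channels for t in channel)
    return apply_suppression(merged, pulse_length)

print(apply_suppression([0, 5, 12, 30], 10))       # [0, 12, 30]
print(merge_veto_channels([[0, 40], [3, 41]], 5))  # [0, 40]
```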

To allow for crosschecks of the simulated data with real data and also with the measurements later on, a trigger channel with vetoes applied is also generated. Finally, the generated data is filled into TDC spectra and the methods described in section . are applied, using a simulation of the implementation explained in section ..

.. Implemented Functionality

The basic functionality includes histograms of the simulated raw trigger hits and raw vetoes, such that any obvious problems with the random number generator can be identified. To generate the random time hits, the ROOT interface to UNU.RAN (Universal Non-Uniform RANdom number generators) was used, which provides a very flexible and fast way to generate random distributions with an applied structure, but needs correct tuning to work for almost arbitrary distributions.

Furthermore, histograms after application of suppression lengths (dead time plus pulse length) and correlations are generated. These should conform to TDC spectra of the raw trigger and veto channels of ideal TDCs which can record all hits of the complete simulated spill. The histograms show a large-scale view of that data, i.e. the complete spill, such that large-scale effects of dead times and correlation might be seen. Another histogram of that type is generated for the “effective triggers”, i.e. the triggers which have not been suppressed by a veto. Finally, a TDC spectrum showing the time difference between trigger and veto is calculated: it allows for a look at the fine-grained structure and shows a gap generated by the dead times of trigger and veto. This is also the first histogram in which fine-grained structures in the order of nanoseconds can be seen.

The simulation software also implements several different modes tailored to the measurement methods used in this thesis. It is possible to switch between those as needed. As the measurement process itself is simulated, these calculations take longest: the recorded TDC data is sampled in the time steps the hardware can achieve, and the method described in section .. implies that this is done with the signal train delayed by different time steps. The software implementation minimizes the necessary calculations as far as possible, but the usage of the  MHz scaling clock of the simulated hardware, which effectively checks all the inputs in intervals of  ns in parallel, implies that a large number of comparisons is needed to simulate a complete spill with a length in the order of seconds.

The number of comparisons is limited according to the mode in use: as it was not very clear from the beginning whether structures can be better made out using a !XOR() or an AND() to measure autocorrelation, the GUI offers two methods which can be enabled and disabled at will: “Autocorrelation -” and “Autocorrelation -”. The first method only counts the coincidences of the signal and the delayed signal if both are logically “one”, i.e. the discriminator would output a logical “high”. This method generally works very fast if the rates and dead times are not too far away from reality, i.e. the number of total “high” pulses and thus the number of comparisons needed is limited to a realistic value. Selecting this method alone corresponds to the logical AND() in hardware.

    (a) AND()          (b) !OR()          (c) !XOR()
         1   0              1   0              1   0
    1    1   0         1    0   0         1    1   0
    0    0   0         0    0   1         0    0   1

Table ..: Truth Tables

The “Autocorrelation -” method only counts the coincidences of the signal with the delayed signal if both are logically “low”. As visualized in table ., this alone would correspond to a logical !OR() (inverted OR()), but in combination with the first method it corresponds to a logical !XOR() = OR(AND(), !OR()), which is, as explained in section .., close to the real definition of the autocorrelation, limited by the sampling rate. Simulating this method means that all periods when the signal is “low” also need to be checked against delayed “low” periods. In situations close to reality and especially in combination with the first method, this is very slow, but still feasible. Although these simulations are complex and time-consuming in both software implementation and runtime, they lead to a more detailed understanding of the method used for measurement and could expose unexpected pitfalls.

The histogram implemented last shows the veto autocorrelation directly calculated from the simulated TDC data. To do so, the time differences of all leading edges of simulated veto events within a given time window are calculated and filled into the histogram.
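The decomposition of the !XOR() into the two selectable counting methods can be illustrated in a few lines of Python (again only a sketch of the counting logic, not the simulation code):

```python
def autocorrelation_counts(samples, tau):
    """Count one-one coincidences (the AND() method) and zero-zero
    coincidences (the !OR() method) between the signal and its delayed copy.
    Their sum is exactly the number of !XOR() agreements."""
    pairs = list(zip(samples, samples[tau:]))
    both_high = sum(1 for a, b in pairs if a and b)          # AND()
    both_low = sum(1 for a, b in pairs if not a and not b)   # !OR()
    return both_high, both_low

signal = [1, 0, 1, 1, 0, 0, 1, 0]
high, low = autocorrelation_counts(signal, 1)
agreements = sum(1 for a, b in zip(signal, signal[1:]) if a == b)
print(high + low == agreements)  # True
```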

Depending on the speed of the machine in use and the options enabled, the “Continuous Analysis” switch in the software allows for a near-immediate response of all histograms to parameter changes. The calculations and the GUI run in separate threads, so it is theoretically possible, but not always feasible, to interact with the software at all times.

    .. Results for Crosschecking

For the test run, the first version, which only produced histograms of the simulated measurements, was used. It was not yet equipped with a GUI and the TDC-autocorrelation and raw-data histograms. This version was sufficient to show that the methods of measurement made sense, as the first results of the simulation conformed with the expectations.

See section .. for the implementation and the choice of these logical functions.
If both are active, their results are added together, which corresponds to an OR().


Figure ..: Simulation of the evolution of the veto duty cycle ≡ veto dead time over a spill of  s using the method described in section ... (Plot “Relative Dead Time of Veto Channel”: relative dead time / % vs. time in spill / s.)

The histogram in fig. . shows the duty cycle of the veto, which, as explained in section .., corresponds to the relative dead time. The example data was generated using the parameters given in table . (at this stage of development still hardcoded into the simulation library), so a mostly constant duty cycle with only random structure added was expected. The sine function used for modulating the rate is not seen here, as the scale is by far too rough: it is in the order of seconds, while the structure should be seen on a scale in the order of nanoseconds. The scale is given by readout limitations of the hardware, so the simulation is consistent with expectations.

Figure ..: Simulation of the autocorrelation measurement. The left picture shows the evolution of the autocorrelation over time in spill, while the right picture shows a single time-slice taken from one readout cycle somewhere in the simulated spill. Details are explained in section ..

As shown in fig. ., the applied sine structure (see table .) can be observed very well in the results of the simulated autocorrelation measurement. The left histogram shows the time in spill on the x-axis, the delay τ used for the autocorrelation measurement on the y-axis and the autocorrelation in percent as the colour index. The y-projection shows a single readout cycle selected from the time in spill, i.e. a “y-slice” of the left histogram. Slight variations in the percentage of measured autocorrelation can be seen and are caused both by the randomness of the generated pulses in time and by the discrete sampling with its accompanying smear-out effect, which is explained with a very simple example in section ...


    .. GUI–Development

    Figure  ..:  A screenshot of the GUI for veto simulation.

The observations during a look at the first results in section . made clear that the simulated time structure is indeed non-trivial. This warranted the development of a small GUI-based system to allow for changes of parameters and a near-live regeneration of all histograms. The implementation is done using UNU.RAN (Universal Non-Uniform RANdom number generators) for the random hits, ROOT libraries for the histograms and Qt libraries for the GUI itself. These libraries were chosen as they integrate well within the ROOT framework and are at the same time widely used both in other community projects and in particle physics-related projects. As such, extensive documentation, many exemplary code snippets and community support can be expected to be available.

Due to the combination of these libraries, it is for example possible to export the simulated data to standard ROOT trees for further analysis. In addition to that, the full interpreting features of ROOT can be used, mainly the runtime evaluation of a string defining a TF-function. This means that the function modulating the simulated rate can be changed at runtime and all possibilities already implemented in the TF-class can be used. The GUI itself is dominated by the parameter fields and the integrated ROOT histograms, as can be seen in fig. ..

The variables that can be adjusted at runtime, including example values, are given in table .. It is clear that for specific changes of values, different parts of the simulated data have to be regenerated. This is guaranteed by a dependency logic which has been added to the simulation library: data is preserved in a separate memory array before each step outlined in section . as a trade-off to save CPU time and thus waiting time of the user. As soon as a value is changed, the simulation library is informed and invalidates the corresponding data, i.e. it is marked for regeneration. As soon as the next request to generate histograms is triggered, all simulation steps are called in the correct order again and check whether their data has been invalidated, only recreating it if necessary.

In many cases, this can save a large amount of processing time, mainly because the sorting of the simulated hits in time can be skipped if neither the rate, its modulation function nor the correlation between the channels is changed.
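The dependency logic can be sketched as follows (a minimal illustrative Python model; the actual library is C++ and its interfaces differ):

```python
class Stage:
    """One step of the simulation pipeline: caches its data and is re-run only
    when it (or a stage it depends on) has been invalidated."""
    def __init__(self, compute, deps=()):
        self.compute, self.deps = compute, deps
        self.data, self.valid = None, False
        self.dependents = []
        for dep in deps:
            dep.dependents.append(self)

    def invalidate(self):
        # mark this stage and everything built on top of it for regeneration
        self.valid = False
        for dep in self.dependents:
            dep.invalidate()

    def get(self):
        if not self.valid:
            self.data = self.compute(*[dep.get() for dep in self.deps])
            self.valid = True
        return self.data
```

Changing e.g. only a dead time parameter would invalidate the suppression stage and everything after it, while the expensive generation and sorting of hits would be taken from the cache.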


rate (raw triggers)                      1 MHz
rate (raw vetoes)                        5 MHz
veto channels                            16
RF sine structure of veto rate           using TF-function
slightly varying trigger rate            using TF-function
trigger gate                              ns
trigger dead time                         ns
veto gate                                 ns
veto dead time                            ns
spill duration                            s
correlation veto – trigger                %
correlation veto – neighbouring veto      %
scaling clock                             MHz
scaler readout                            kHz
delayed coincidences                     8, at  ns,  ns,  ns,  ns,  ns,  ns,  ns,  ns

Table ..: Variables for Veto simulation

    .. Simulations with the GUI

Figure ..: Raw data for simulated trigger and veto leading edges for a spill of  s. (Left: “Raw Random Triggers”, trigger rate vs. time / s; right: “Raw Random Vetos”, veto rate vs. time / s.)

The raw data histograms shown in fig. . look essentially flat; an RF structure applied to the channels is not visible at the large time scale chosen for these plots.

Figure ..: Data for simulated trigger and veto leading edges for a spill of  s after application of dead times and pulse lengths. The histograms thus show the simulated rate. (Left: “TDC Random Triggers”; right: “TDC Random Vetos”.)

The rate histograms after application of dead times and pulse lengths also look essentially flat, but due to the application of suppression lengths (dead times plus pulse lengths), the rate has been reduced especially in high-rate regions (the approximate time distance between leading edges is lower there, so it is more likely that neighbouring leading edges are suppressed) and reduced by a smaller factor in low-rate regions. This means that the heights of the bins are slightly equalized. This effect can be seen best when looking at the y-axes: not only the maximum, but also the covered range is greatly reduced. The random structure survives this compression, as expected. These plots show the first simulated data which might also be measured with a hypothetical piece of hardware: a TDC which records all leading edges of a complete spill.

Figure ..: The simulated rate for “seen triggers”, i.e. non-vetoed triggers. (“Effective Triggers”, trigger rate vs. time / s.)

Figure ..: A high-rate artifact in the TDC spectrum “trigger time − veto time”. (Counts vs. time difference / ns.)

Data which might be seen in the real experiment, neglecting further dead times introduced e.g. by the DAQ, is shown in fig. ..  The rate is greatly reduced by the impact of the veto, but at the large timescale, an impact on randomness or a bias is not noticeable, and the available TDCs do not allow looking at a larger time window on the nanosecond scale. However, a bias introduced on the small timescale can as well create a bias on the physics data taken, and for that reason, a measurement resolving such details is needed. Such biases showing up on a smaller timescale in the order of nanoseconds appear in the autocorrelation data, which is also simulated.

Using the simulation GUI, it is easy to select different parameters and have a look at the new results. This enables the user to find special dead time effects which are not straightforward. One such example is given in fig. .: the plot shows the TDC spectrum of “trigger time − veto time” for very high simulated rate. In fact, this is a saturation effect caused by the simulated raw rate being high enough to cause immediate re-triggering after dead time and veto length have passed in almost all cases. As such, this effect is mirrored in both time directions, as the system is in saturation and the dead time and pulse length of the preceding pulse can be seen, too. Due to the still-present randomness, the next pulse in time does not show the same clear shape, as the time variations between pulses that are not next neighbours increase with their distance. In the extreme case of an infinite rate, this plot would show completely rectangular behaviour with all randomness gone.


    . Firmware Development

The first approach to measure duty cycle and autocorrelation would normally be to record all pulses in time and perform the (auto)correlation in software, as is done in the simulation. However, in the current setup at Compass, the available TDCs can only store hits within several nanoseconds around the prompt peak due to the high rate. As such, another kind of independent hardware has to be used, one which can provide continuous recording without suffering from buffer overflows.

In the following sections, the requirements for the new module are discussed and simple methods of measurement in an FPGA design are developed. Finally, a new firmware for the already available Compass Veto Coincidence Board will be developed and presented.

    .. Requirements

The hardware module needs to resolve time structures in the order of nanoseconds or below and at the same time provide a method to constantly sample the logic inputs. Furthermore, reconfigurability at runtime and an additional, dedicated readout are desired to allow checking a subset of the recorded data without decoding a complete Compass dataset.

    .. Hardware

All the requirements can be easily met by an FPGA board. The Compass veto coincidence VME-board was available for this project, featuring a Xilinx® Spartan® “XCS” FPGA and  delay chips connected to NIM-connectors to allow for adjustable delays in steps of  ps. It is equipped with a  MHz quartz as clock source,  NIM-inputs and  NIM-IOs connected to the mentioned delay chips, an additional  NIM-inputs and  NIM-outputs meant for control signals, and  IO-connectors for the same control signals allowing to connect it to another CATCH-module via flat cable.

In addition to the IO-connectors,  status-LEDs are available on the front panel which inform about electronic faults, the configuration status and the availability of voltages. The board is also equipped with a hotplug controller which is triggered by the two front handles of the board: if these are snapped into position, the voltage of  V is switched on and the hotplug event is triggered. The hotplug controller then passes the other voltages to the electronics, checking for overcurrent and signalling with the F-LED on the front panel if an overcurrent is encountered.

A problem with the hardware, which was not well known from the beginning but had already caused problems in the original use case, is that the voltage level at which the NIM-inputs sample the input as “high”/“low” does not conform to the NIM specification. As will be explained in detail in section .., the impact of this problem is enhanced for the measurements done here.


Figure ..: The coincidence board used at Compass to generate the veto pulses without introducing further unwanted dead times.

A photograph of one coincidence board, also used during firmware development in this thesis, is shown in fig. ..

    .. Methods of Measurement

The methods described in section . have to be slightly adapted for implementation, owing to limitations of the hardware used. The plans for implementation are laid out in the following subsections; the final implementation itself is described in section ..

    ... Duty Cycle

The first idea is to measure the duty cycle using an independent clock to sample the input at constant intervals. Counting the number of cycles in which the input is found in the “high” state as well as the total number of clock cycles, this breaks down to:

\[
D = \frac{\text{clock} \times \text{veto}}{\text{total clock cycles}}
\]

Due to the sampling with the independent clock, a pulse of a given length may sometimes be counted once and at another time be counted several times. Short pulses, shorter than the sampling clock’s period, may even be missed using this method. For the high statistics available, these effects vanish and the average duty cycle over the interval is determined.

In an FPGA design, the sampling approach is by far the easier to implement in terms of correct timing and space consumption, due to the possibility of using a completely clocked design after initial sampling.
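The sampling estimate can be sketched in Python (illustrative only; in the firmware this corresponds to a counter incremented on each clock edge where the input is high):

```python
def sampled_duty_cycle(pulses, total_time, clock_period):
    """Estimate D by sampling a pulse train (list of (start, length) tuples)
    at fixed clock ticks: high samples over total samples."""
    def is_high(t):
        return any(start <= t < start + length for start, length in pulses)

    ticks = int(total_time / clock_period)
    high = sum(1 for k in range(ticks) if is_high(k * clock_period))
    return high / ticks

# one 50 ns pulse in a 100 ns window, sampled with a 10 ns clock
print(sampled_duty_cycle([(0, 50)], 100, 10))  # 0.5, matching delta / T
```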

    It should be noted that it is in general  not sufficient  to assume a constant pulse length andmultiply it with a rate measured with a digital counter. This is especially true for the  Compassveto. The differing results will be shown and discussed in section  ...

    ... Autocorrelation

For the autocorrelation measurement, different time delays τ need to be applied to a signal train. In an FPGA without integrated IO delays, this can be done either using carry-chain delays or simple shift registers.

Due to the available delay chips, which can already shift the signal sequence in steps of  ps, the default approach of shifting via carry-chain delays is not needed, as the granularity would be higher for the used FPGA. Using carry chains, the delay would also depend on the routing, which would have to be fixed, on environmental conditions, which one would have to monitor, and on the hardware itself, as the carry-chain delay is not of a constant and well-defined value throughout the FPGA.

Applying a time lag using shift registers is desirable to extend the time lag achievable with the delay chips to a virtually infinite length only limited by the size of the FPGA. As the maximum delay that can be achieved with one delay chip corresponds to slightly more than one clock cycle in the FPGA using a  MHz clock, this enables the choice of virtually any time lag with a theoretical granularity of  ps. However, this also implies that sampling is used, as a clocked shift register can only store the values sampled at the leading edges of the used clock signal. This results in a discretization of the autocorrelation, which is only a small disadvantage taking the needed resolution into account, but at the same time simplifies the design a lot.
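A clocked delay line of this kind could be sketched as follows; the entity and signal names are invented and the depth is an arbitrary example:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Sketch of a shift-register time lag: the sampled input is shifted
-- by one position per clock cycle, so tap i carries the signal
-- delayed by i clock periods.  The maximum lag is limited only by
-- the chosen depth, i.e. by the available FPGA resources.
entity shift_delay is
  generic (depth : positive := 64);
  port (
    clk     : in  std_logic;
    sig_in  : in  std_logic;                   -- asynchronous input
    tap_sel : in  natural range 0 to depth-1;  -- wanted lag in clock cycles
    sig_out : out std_logic);
end entity shift_delay;

architecture rtl of shift_delay is
  signal taps : std_logic_vector(depth-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- sample the input (hence the discretization) and shift by one
      taps <= taps(depth-2 downto 0) & sig_in;
    end if;
  end process;
  sig_out <= taps(tap_sel);
end architecture rtl;
```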

    .. Existing Firmware

To start developing the new firmware, the existing firmware of the  FPGA should be used as a basis to start from a working bus interface and stay mostly compatible to the CATCH-boards at  Compass. The old firmware project had been created as a schematic in Xilinx® Foundation Series  ..i which predates the current Xilinx® ISE® Webpack™ and the new Xilinx® Vivado™ Design Suite. Not only is the workflow different from that in the modern environment, but in addition to that, this software-release is not freely available and only runs on a few old installations or in a virtual machine. It is also not redistributed as a “Xilinx® Classic” or legacy release, as it was one of the first products by Xilinx® and still contains parts of Aldec™


software which cannot be distributed freely by Xilinx®. As such and in the light of further extensibility, it is feasible to  translate  the old project to a more recent HDL-base. In this thesis, mostly VHDL is used as description language. The project can be exported using the “ACTIVE-CAD--VHDL”-process from the old software-version to a systematically generated, completely structural VHDL-code. Using this  lines long automatically generated code and converting it manually to Behavioural VHDL, a working version of the old design has been generated and the old design has been understood in detail. It basically implemented the veto-trigger-gating as shown in fig.  .  for several inputs, including coincidences and scalers for both, and was equipped with a  VME-interface for readout and runtime-configuration of the delay chips.

To ensure correct timing of the VME-interface on the FPGA, it has been left mostly as-is. The remaining layout included  unclocked scalers instantiated using a Xilinx®-implementation, connected to count on the  NIM-inputs and the three provided coincidences,  to be read out within spill and  to be read out at end of spill. Furthermore, a selection of one of the  delay chips via a subset of the board’s VME-address space is possible: The corresponding address-bits from the VME–bus are directly routed through the FPGA to an external -to--bit decoder to activate the chosen chip-enable–line of the delay chip. The  bit data-lines to configure the delay chips are also routed through the FPGA. A special feature of the board is that the lower  NIM-IO-ports can be used as outputs (delayed via the  delay chips) and at the same time as inputs, while the input-signal is taken after the output has passed through the delay chip. As such, adjustable external delays can be generated, which is used to reset the coincidence flip-flops in the original design (c.f. [Tri]).

The bulk part of the converted lines were the BUS-IO, namely the data registers from integrated scalers, which were bitwise combined via  OR()-gates, and the scalers themselves. The scalers were not changed in the process of converting the design, as they were also available as “black box” instantiation from Xilinx®-libraries. The BUS-IO was shrunk to a few lines using Behavioural VHDL and  generate  statements.

Based on this converted design, it was then possible to modularize the  VHDL-code in a way that the board-specific VME-interface and the routing of the address-bits for programming the delay chips are now available as separate HDL-modules.

    .. New Implementation

To realize the ideas laid out in section  ..  and section  ..  in hardware, one has to think about how to achieve correct timings.

Based on the converted design, it was possible to increase the used clock ( MHz, generated by an on-board quartz) via a  DLL on the FPGA to  MHz, which is about the maximum clock realistically achievable with a design on the used Spartan®  FPGA, and use this clock for the complete design, including the VME-interface. The next step was to reduce the design to a minimal one, only containing the module to address the delay chips (keeping a similar address-space as in the original design) and the VME-interface. Slight modifications to the minimal design allowed compatibility with the VHDL- and Verilog-modules which were developed at

Neither a DCM nor a PLL is available on the Spartan® .


the BGO-OD experiment in Bonn. A short introduction to the principles used to allow this modular approach is given in section  ...

    ... Duty Cycle Implementation

The duty cycle of two given signals is measured as described in section  ... For that reason, an implementation of a clocked scaler is needed, counting the unclipped input signals. The counting clock is the fastest clock available which is uncorrelated to the inputs to maximize the sampling rate. As such, the  MHz clock is used. The initial sampling at the FPGA input is also done using a flip-flop clocked with the same  MHz source to ensure a clocked path, so that the timing is automatically checked by the placing and routing software to prevent violation of setup and hold times.

    ... Autocorrelation Implementation

Figure  ..:  Implementation of the autocorrelation measurement in hardware: The independent scaling clock is used to sample coincidences between signal and delayed signal. Both these coincidences and the clock cycles are counted with internal, or, as in this picture, external scalers.

The realization of the autocorrelation measurement as described in section  ..  also relies on initial sampling of the incoming signal trains using the  MHz clock. After that, a shift register is attached to each input. At every flip-flop of each shift register (from now on these will be addressed as ‘taps’ of the register), logic is attached to compare the data output pin with the sampled signal from the undelayed reference inputs. For flexibility, both a simple AND() and the !XOR() have been included in the design as functions for this comparison. Although the usage of the AND() will not give a “percentage” of autocorrelation, it is sufficient to characterize repetitive structures in the logical input signals and, for reasons laid out during analysis in chapter , shows advantages during measurement.

The used method (AND()/!XOR()) can be selected at runtime to check for such problems without rerouting. As the space on the FPGA is finite, only a limited number of scalers can be included, and additional logic is necessary to choose which of these are connected


to which of the shift register taps. The implementation of this logic is described in the next section.
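The comparison logic at one tap could be sketched like this; this is a simplified illustration with invented names, while in the real design the result drives a clocked scaler and the tap is chosen via the sweeping logic:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- One correlation cell (sketch): compares the delayed tap with the
-- sampled, undelayed reference and produces a count-enable for a
-- scaler.  The comparison function is selectable at runtime:
-- AND() counts coincident "high" states, !XOR() (= XNOR) counts
-- agreement of the two signal states.
entity corr_cell is
  port (
    clk       : in  std_logic;
    reference : in  std_logic;   -- sampled, undelayed reference input
    tap       : in  std_logic;   -- selected shift-register tap
    use_xnor  : in  std_logic;   -- '0': AND(), '1': !XOR()
    count_en  : out std_logic);  -- connect to a clocked scaler's CE
end entity corr_cell;

architecture rtl of corr_cell is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if use_xnor = '1' then
        count_en <= reference xnor tap;
      else
        count_en <= reference and tap;
      end if;
    end if;
  end process;
end architecture rtl;
```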

    ... Sweeping Logic

      entity vetocnt is
        generic (
          tapDepth   : positive := ;
            -- tapDepth has to be decodable from  bit due to tap-selection-encoding.
            -- It also has to be divisible by the number of scalers per channel
            -- ( 1  in Compass setup).
          tapDecBits : positive := ;
            -- the really needed amount of tap-selection-bits per scaler
            -- max. 32/4 = 8 bits.
          scalerBits : positive := ;  -- size of the used scalers in bits
          scalerCnt  : positive :=
            -- has to be a multiple of  due to tap-selection-encoding
            -- and has to be a multiple of  to be equally distributed to channels
            --  is for Compass setup so outputs are mapped correctly,
            -- internal scalers in special mapping for End-of-burst readout
        );
        port (
          ...
        );
      end vetocnt;

    Listing : Definition of the VHDL-generics for the new firmware implementation

To achieve a full coverage of the shift register taps with scalers, they are effectively “reconnected” in a sweeping fashion to the different taps. The reconnection is achieved by  LUTs which perform different internal mappings for a given address value.

The implemented VHDL-code automatically generates scalers, latches and registers according to given generic variables, which can be varied easily to make best use of the full FPGA. A defined number of scalers per input channel has to be chosen, the algorithm then divides the taps per channel equally among the available scalers. The selection of the used tap from the scaler’s subset is performed using an -bit wide block of a -bit wide data register for each single scaler (c.f. fig. .). The algorithm takes care to automatically generate enough data registers using this  tap-selection-encoding .

This decoding process is implemented using a VHDL-function from the  IEEE.numeric_std-library which is available in most  VHDL-implementations. It allows to interpret a vector array of bits, in this case the -bit wide block giving the tap-address, as unsigned and convert it to an integer, which can be used to access registers from an array type. This effectively constructs an address decoder automatically scaled in size during synthesis.
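Such a decoder could be sketched as follows; the entity and generic names are invented and the widths are illustrative examples only:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch of the tap-selection decoding: a fixed-width slice of a
-- configuration register is interpreted as an unsigned tap address
-- and used as an index into the tap vector.  Synthesis turns this
-- array access into an automatically sized multiplexer / decoder.
entity tap_mux is
  generic (
    tap_dec_bits : positive := 8;     -- address bits per scaler
    tap_depth    : positive := 256);  -- taps selectable per scaler
  port (
    tap_addr : in  std_logic_vector(tap_dec_bits-1 downto 0);
    taps     : in  std_logic_vector(tap_depth-1 downto 0);
    tap_out  : out std_logic);
end entity tap_mux;

architecture rtl of tap_mux is
begin
  -- to_integer(unsigned(...)) from IEEE.numeric_std performs the
  -- vector-to-integer conversion used for the array access
  tap_out <= taps(to_integer(unsigned(tap_addr)));
end architecture rtl;
```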


Figure  ..:  Schematic showing a simplified version of the complete daCoMo–implementation (dead time (auto-)Correlation Module). It includes channel mapping, initial sampling flip-flops, shift registers with adjustable depths, two logical functions for (auto)correlation, internal scalers and outputs for duty cycles as well as correlation / autocorrelation measurement for connecting to an external scaler module. The clock signal is also scaled internally and provided via an output.

In the present implementation, no timing constraints are lifted for the logic  reconnecting  the scalers, which limits the maximum taps per scaler to about . This is not a problem, for it allows for delays up to  ns even when only a single scaler per channel is used. In addition to the sweeping of scalers, the first selected tap per input channel is always connected to a corresponding NIM-output.

The generics which can be configured at synthesis-time are shown in listing . These values are used to scale the implementation itself by controlling the  generate  statements in the VHDL-code.

The schematic implementation can be seen in fig. .. It shows two input signals arriving on  inputs each. Each input is sampled with the  MHz clock close to the input and pipelining for two clock cycles is used. This allows the placing software to position the incoming signals at different places inside the FPGA before they are distributed using fan-outs, which simplifies the routing greatly. Otherwise, the logic would need a placement close to the specific inputs and especially the reference signals would generate timing problems due to their high fan-out-level. For each signal, one input is used as this reference line, as shown in the corresponding colour in the picture. The other four inputs are delayed with the delay chips, sampled and then fed into the shift registers. The simplified schematic shows shift registers with adjustable depths, which were in fact realized by the  tap-selection-encoding  that was just described. The schematic shows that all paths after the initial sampling are fully clocked; the  tap-selection-encoding  not included in the diagram is also limited to a maximum logic path of one clock cycle because no timing constraints were lifted.


address of each module, it is possible to implement complete register-modules accessible via VME without caring about implementation details.

This modular approach allows for mixed-language synthesis (Verilog and  VHDL are used) and in most cases allows for a platform-independent and collaborative development. As long as clocked designs are used and / or the used constraints are generally applicable and complete, the module is completely portable, assuming synthesis, placing and routing are working correctly. This is especially helpful because the modules can be reused in many designs and errors can be detected and fixed at a single place without duplicating code.

After adapting the VME-bus interface and especially completing the I/O-pinning to include all connected pins of the veto coincidence module used in this thesis, nearly all modules were instantly usable for this platform. Only a few modules rely on architectural features and would have to be adapted for a different base. Nevertheless, some care has to be taken for each platform, as e.g. some bits of the address bus should be avoided for the veto coincidence board due to compatibility of the CPLD with other CATCH–boards. If such board-specifics are taken care of, the modules work as expected.

As part of this thesis, a small module fitting into this scheme to program the board-specific delay chips has been developed. On the other hand, working implementations of clocked scalers (c.f. section ..), datalatches and registers were taken from the repository and – as no copy was created, but links to the files were included – now automatically receive any updates made to the repository.

    ... The Slimfast Multioption Counter

The slimfast counter has been developed in Bonn by John Bieling and is part of the repository described in section  ... It is suitable for any FPGA-based designs involving clocked scalers and very portable. It does not impose tight routing constraints, thus lifting some spatial limitations of the used FPGA which could otherwise not support as many scalers as will be used in this thesis.

In this section, only a short overview of the implementation is presented. A detailed description of the design is available in the internal notes of the BGO-OD experiment.

    Figure  ..:   Schematic of the slimfast–implementation (by courtesy of John Bieling)

A simplified schematic is shown in fig. ... The only part which relies on tight timing constraints is the  fastcnts –counter: It is incremented in the worst-case scenario for each clock


cycle. The overflow-check is done in expectation of the next counting cycle taking the  fastcnts  into account, such that as soon as the next counting leading edge arrives, an overflow-pulse can be used as CE (clock enable) for the  slowcnts –counter. This overflow-check is also a simple single-cycle operation which can be implemented using a LUT.

This overflow can only occur, in the worst case, every  clock cycles, as the  fastcnts –counter is  bit wide. For that reason, the maximum effective counting clock for the  slowcnts –counter is 1/8 of the input counting clock. This is the main feature of this slimfast–implementation: Clocked synchronous counters with large widths, such as  bits with one additional overflow bit, are often needed, but impossible to implement at full clock speed by using the naïve “+ ” approach. Even if smaller widths are used, tighter routing constraints would have to be enforced. In the slimfast implementation, the  slowcnts –counter is incremented using a multi-cycle path that may take up to  clock cycles, which allows the placer and router to distribute the elements of the counter more widely, allowing for more effective space consumption.
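The principle can be sketched as follows. This is a simplified illustration of the fast/slow split with invented names, not John Bieling's actual implementation, which additionally declares the slow increment as a multi-cycle path and latches the value for readout:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch of the slimfast principle: a narrow counter runs at full
-- clock speed; its wrap-around (here at most once per 8 counts, for
-- a 3 bit wide fast counter) generates a clock-enable pulse for a
-- wide slow counter, whose increment may therefore be distributed
-- more widely by the placer and router.
entity slimfast_sketch is
  generic (slow_width : positive := 32);
  port (
    clk      : in  std_logic;
    count_in : in  std_logic;  -- already sampled / clipped count input
    value    : out unsigned(slow_width+2 downto 0));
end entity slimfast_sketch;

architecture rtl of slimfast_sketch is
  signal fastcnts : unsigned(2 downto 0) := (others => '0');
  signal slowcnts : unsigned(slow_width-1 downto 0) := (others => '0');
  signal slow_ce  : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if slow_ce = '1' then
        slowcnts <= slowcnts + 1;  -- effective clock at most f_clk/8
      end if;
      if count_in = '1' then
        fastcnts <= fastcnts + 1;
        -- overflow check done in expectation of the wrap-around, so
        -- the CE pulse is ready for the slow counter one cycle later
        if fastcnts = "111" then
          slow_ce <= '1';
        else
          slow_ce <= '0';
        end if;
      else
        slow_ce <= '0';
      end if;
    end if;
  end process;
  -- the combined value is momentarily inconsistent for one cycle
  -- around a wrap; a latched readout hides this in practice
  value <= slowcnts & fastcnts;
end architecture rtl;
```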

Additional features of the slimfast module include the optional possibilities to clip the “count” and “reset” inputs and a freely adjustable width. It is in general combined with an arbitrary-width datalatch module, which allows for clipping of the “latch”–input and adjustable pipelining-steps at in- and output, which effectively delays the data in steps of clock cycles, also lifting routing constraints.

In comparison with the Xilinx® counter implementation which was used before in the coincidence-modules, the major advantage is that the counter does not have to be placed in a carry chain: The Xilinx® counters were used in an unclocked way and placed completely inside the carry-chain of the FPGA, which means that one “column” of adjoined logic blocks is needed. As some newer FPGAs like the Spartan®  are equipped with a reduced number of carry chains, the slimfast implementation allows for much greater flexibility.

    ... Cabling

The incoming signals have to be routed through a fan-out to allow for comparison of the signals to themselves and also to each other. For that reason, at least two -to- fan-outs are needed. The used hardware is known to cause problems even when the input-levels are safe within the NIM-specification. This can in general cause missed pulses, but the used methods are also very sensitive to the length of the pulses. For that reason, many fan-outs might cause problems with the measurement if the outputs are not well decoupled from each other.

This was one of the major problems during development, as the effects were not equal on all inputs and first identified as internal timing problems on the FPGA before finally tracking down the problems to the input sampling of the board in hardware.

Apart from this “bug” which has to be taken care of, the measurement relies on the delay between fan-out and board inputs to be the same for all cables which belong to one original signal. The delay between the two signals themselves can be seen as an offset in the correlation measurement.

In a HDL, a counter can be implemented as simply as writing “variable=variable+” without caring about the actual implementation. This normally results in a synchronous counter without lifting timing constraints.


    .. Stages of Development

During the firmware development, several stages of implementation were done and tested separately. A tested and known-good scaler-implementation was used, the slightly rewritten bus interface was checked using test registers, and testing with known periodic signal sequences from frequency generators was performed (c.f. section ..). Finally, it was possible to take data during beam times of the BGO-OD experiment at Bonn and crosscheck the results with TDC-data.

    ... Testing Phases

To test the implemented firmware, a lightweight DAQ, mostly code-compatible to the implementation made for the BGO-OD–DAQ (c.f. section ..), was developed, constantly reading out all scalers and regularly changing the used taps of the shift registers. Using this method and a known input signal sequence, functional verification is quickly possible. A ROOT-tree was used as intermediate data storage to allow for a quick analysis and stay close to the implementation for the ROOT-based analysis-framework used in the BGO-OD experiment (c.f. section ..).

The first tests of the implementation were still done with unvisualized raw data, shifting the used taps manually, which already allowed for the detection of periodicities and a confirmation that the methods of measurement also work in the hardware implementation. Before the testing was complete and the visualization was done, the chance was taken to take data in the February beam time in Bonn using the setup described in section ..

The resulting datasets were visualized, but did not show the expected behaviour. Only a few well-known expected structures could be identified. The complete results were presented as part of the DPG meeting  in Mainz [Fre] and are described in detail in section ...

In preparation of the presentation, it was already possible to continue the testing process including finding and fixing some remaining problems in the firmware. At this time, it was possible to correctly resolve a known periodic signal train using the minimal DAQ as can be seen in fig. .. The plot already shows the major features of the measurement: The incoming pulses are pure digital logic-pulses, but due to the nature of the measurement, the sampling with a frequency independent from the frequency of the meas