IVAN RAVASENGA
DEVELOPMENT OF MONOLITHIC PIXEL SENSORS
FOR ALICE EXPERIMENT
Master's thesis in Physics
Supervisor: Prof. Stefania Beolè
Co-supervisor: Dr. Yasser Corrales Morrales
External examiner: Prof. Marco Costa
Università di Torino
Dipartimento di Fisica
Turin, 14 October 2015
To my family
and to all the people who supported me
Contents
Introduction ....................................................................................................................................................... 7
Chapter 1: Heavy ion collisions physics ............................................................................................................. 9
1.1 The Quantum Chromodynamics ............................................................................................................... 9
1.2 Running coupling constant in QED and QCD ............................................................................................ 9
1.3 Big Bang model ....................................................................................................................................... 11
1.4 Transition phase diagram ....................................................................................................................... 13
1.5 QGP predictions ...................................................................................................................................... 14
1.6 Heavy ion collisions ................................................................................................................................ 14
1.6.1 Collision geometry ........................................................................................................................... 16
1.6.2 Collision details and space-time evolution ...................................................................................... 17
1.6.3 Particle multiplicity .......................................................................................................................... 20
1.6.4 Particle spectra and radial flow ....................................................................................................... 22
1.6.5 Anisotropic transverse flow ............................................................................................................. 24
1.6.6 Elliptic flow at LHC ........................................................................................................................... 27
1.6.7 Jet quenching and ℛ𝐴𝐴 .................................................................................................................... 27
1.6.8 Hints on charmonium suppression .................................................................................................. 32
Chapter 2: ALICE ITS Upgrade .......................................................................................................................... 34
2.1 Current ITS .............................................................................................................................................. 35
2.1.1 Silicon Pixel Detector ....................................................................................................................... 36
2.1.2 Silicon Drift Detector ....................................................................................................................... 37
2.1.3 Silicon Strip Detector ....................................................................................................................... 38
2.2 Physics motivation for the ITS upgrade .................................................................................................. 38
2.3 Current ITS limitations ............................................................................................................................ 39
2.4 ITS upgrade overview ............................................................................................................................. 41
2.4.1 Detector layout overview ................................................................................................................ 42
2.4.2 Experimental conditions and running environment ....................................................................... 44
2.5 Pixel Chip ................................................................................................................................................ 44
2.5.1 Choice of Pixel Chip technology ...................................................................................................... 45
2.5.2 Pixel Chip development ................................................................................................................... 45
2.5.3 Particle detection ............................................................................................................................ 46
2.5.4 General requirements on the Pixel Chip ......................................................................................... 47
2.6 Pixel architectures .................................................................................................................................. 48
2.6.1 MISTRAL: a modular design ............................................................................................................. 49
2.6.2 MISTRAL: readout mode ................................................................................................................. 49
2.6.3 ALPIDE: general approach ............................................................................................................... 50
2.6.4 Hints on Priority Encoder ................................................................................................................ 51
2.6.5 ALPIDE: readout mode summary and parameters .......................................................................... 53
2.6.6 ALPIDE: beam test measurements .................................................................................................. 55
2.6.7 MISTRAL & ALPIDE: summary and comparison ............................................................................... 56
2.7 Modules and Staves in the OB and IB ..................................................................................................... 56
2.8 Flex Printed Circuit .................................................................................................................................. 57
2.8.1 FPC for Inner Barrel Stave ............................................................................................................... 58
2.8.2 FPC for Outer Barrel Stave............................................................................................................... 59
2.8.3 Pixel Chip to FPC connection ........................................................................................................... 60
2.9 Power Bus ............................................................................................................................................... 62
2.10 Stave configuration summary ............................................................................................................... 62
Chapter 3: Test of the FPC for the Outer Barrel Stave .................................................................................... 65
3.1 Test objectives ........................................................................................................................................ 65
3.2 Tested FPC .............................................................................................................................................. 65
3.3 Experiment steps .................................................................................................................................... 66
3.4 Eye diagram and BER measurements ..................................................................................................... 67
3.4.1 Bit Error Rate ................................................................................................................................... 67
3.4.2 Eye diagram ..................................................................................................................................... 67
3.4.3 Experimental setup ......................................................................................................................... 70
3.4.4 Measurement procedure ................................................................................................................ 71
3.4.5 Eye diagram and BER main results .................................................................................................. 72
3.5 Jitter measurements ............................................................................................................................... 77
3.5.1 A jitter definition and parametrization ........................................................................................... 77
3.5.2 Experimental setup and procedure ................................................................................................. 78
3.5.3 Experimental results ........................................................................................................................ 78
3.6 Conclusions ............................................................................................................................................. 82
Chapter 4: Study of the chip analog and digital parameters ......................................................................... 84
4.1 pALPIDEfs-v1 summary and pixel organization ...................................................................................... 84
4.2 In-pixel structure .................................................................................................................................... 86
4.2.1 Analog front-end section ................................................................................................................. 87
4.2.2 Digital front-end section .................................................................................................................. 87
4.3 Pixel indexing .......................................................................................................................................... 89
4.4 Pixel sector details .................................................................................................................................. 90
4.5 Test boards ............................................................................................................................................. 90
4.6 Chip software .......................................................................................................................................... 92
4.7 Test objectives ........................................................................................................................................ 93
4.8 First test: Threshold and Noise Occupancy Scan at different voltages .................................................. 94
4.8.1 Test procedure ................................................................................................................................ 94
4.8.2 Experimental results on VDDD variation ......................................................................................... 98
4.8.3 Experimental results on VDDA variation ....................................................................................... 101
4.8.4 Experimental results on VDDD&VDDA variation ........................................................................... 104
4.8.5 General conclusions for all cases ................................................................................................... 107
4.9 Second test: Threshold and Noise Occupancy Scan without the decoupling capacitors on DVDD ..... 107
4.9.1 Test procedure .............................................................................................................................. 107
4.9.2 Experimental results ...................................................................................................................... 108
4.9.3 Second test conclusions ................................................................................................................ 110
4.10 Third test: Threshold and Noise Occupancy Scan without the decoupling capacitors on AVDD ....... 110
4.10.1 Test procedure ............................................................................................................................ 110
4.10.2 Experimental results .................................................................................................................... 111
4.10.3 Third test conclusions .................................................................................................................. 113
4.11 Fourth test: Threshold and Noise Occupancy Scan without the filter on VREF ................................ 113
4.11.1 Test procedure ............................................................................................................................ 113
4.11.2 Experimental results .................................................................................................................... 114
4.11.3 Fourth test conclusions ............................................................................................................... 116
Chapter 5: Study of the chip response as a function of noise injection....................................................... 117
5.1 Noise injection into pALPIDEfs-v1 ........................................................................................................ 117
5.1.1 Test objective ................................................................................................................................ 117
5.1.2 Experimental setup and test procedure ........................................................................................ 117
5.1.3 Experimental results: injection in the power planes (AVDD and DVDD) ...................................... 118
5.1.4 Experimental results: injection in the PWELL (back-bias) ............................................................. 121
5.2 Noise injection into pALPIDEfs-v2 ........................................................................................................ 126
5.2.1 Comparison between ALPIDE-v1 and -v2 ...................................................................................... 126
5.2.2 Experimental results: injection in the power planes (AVDD and DVDD) ...................................... 127
5.2.3 Experimental results: injection in the PWELL (back-bias) ............................................................. 130
Conclusions .................................................................................................................................................... 134
Future plans .................................................................................................................................................. 135
Acknowledgements ........................................................................................................................................... 136
Bibliography ................................................................................................................................................... 137
Introduction
ALICE (A Large Ion Collider Experiment) is designed to address the physics of strongly interacting
matter, and in particular the properties of the Quark-Gluon Plasma (QGP), using proton-proton,
proton-nucleus and nucleus-nucleus collisions at the CERN LHC. The aim is thus to study
nuclear matter at high densities and temperatures.
One of the major goals of the ALICE physics program is the study of rare probes at low transverse
momentum. The reconstruction of these rare probes requires a precise determination of the primary
and secondary vertices, which is performed by the ALICE ITS (Inner Tracking System). The present
ITS is made of six layers of silicon detectors of different types and allows, for example, the
reconstruction of D mesons with transverse momentum down to ~ 1 𝐺𝑒𝑉/𝑐.
The nature of the QGP as an almost-perfect liquid emerged from the experimental investigations at
CERN SPS and at BNL RHIC. ALICE has confirmed this basic picture, observing the formation of
hot hadronic matter at unprecedented values of temperatures, densities and volumes. These physics
results have been achieved by ALICE after only two years of Pb-Pb running and one p-Pb run,
demonstrating its excellent capabilities to measure high-energy nuclear collisions at the LHC.
Despite these successes, there are several measurements for which the current experimental setup is not
yet fully optimized. ALICE is therefore preparing a major upgrade of its apparatus, planned for
installation during the Long Shutdown 2 (LS2) of the LHC in the years 2018-2019.
The upgraded detector will have greatly improved features in terms of impact parameter resolution,
standalone tracking efficiency at low 𝑝𝑇, momentum resolution and readout capabilities. This
upgrade will thus enhance the ALICE physics capabilities, in particular for high-precision
measurements of rare probes at low transverse momenta.
In this thesis, I will briefly describe the overall upgrade program and I will focus my attention on
the ITS upgrade.
In the first part I will present some important physics concepts regarding heavy ion collisions;
then the new structures (circuitry, support structures, …) and detectors that will make up the
new ITS will be presented. Finally, I will present the measurements on the new electrical
circuits of the ITS and on the chip prototypes that will replace the silicon detectors of the current ITS.
Chapter 1: Heavy ion collisions physics
1.1 The Quantum Chromodynamics
Quantum Chromodynamics (QCD) is the gauge field theory that describes, within the Standard
Model, the interaction between the quarks and gluons found in hadrons.
This gauge theory is based on the symmetry group SU(3), which is a non-abelian group.
In the Lagrangian of this theory we find a quark-gluon interaction term and gluon-gluon
interaction terms (with 3 or 4 gluons); fig. 1 depicts the corresponding vertices:
Figure 1: QCD interaction vertices.
The last two vertices are typical of a non-abelian theory. In the figure above, ℊ𝑠 is the coupling
constant of QCD, but it is more common to find 𝛼𝑠 = ℊ𝑠2/4𝜋.
1.2 Running coupling constant in QED and QCD
In Quantum Electrodynamics (QED) the coupling constant is called 𝛼 = ℊ2/4𝜋. In this case, the
vacuum acts as a sort of dielectric: virtual 𝑒+𝑒− pairs (vacuum polarization) screen the charge of an
electron, so the effective charge depends on the distance. In fig. 2 a test particle sees a different
charge of the central electron depending on the distance R (screening).
Figure 2: Vacuum polarization in QED
So, considering that 𝛼 depends on the charge of the electron (~𝑒2), the previous consideration
leads to what we call the "running coupling constant". In fact, fig. 3 shows that 𝛼 decreases with
the distance.
Figure 3: From vacuum polarization to running coupling constant
According to the Heisenberg uncertainty principle, high energies correspond to short distances and
vice versa. Hence, the coupling constant 𝛼 increases with the energy, or with 𝑄2, the squared
transferred momentum.
QCD also exhibits vacuum polarization, but there is an important difference: gluons are
colored and can interact with each other (photons cannot). If we consider, for instance, a
red charge, it will be surrounded by other red charges, and a probe entering this region sees less red
charge as it approaches the red charge in the middle. Fig. 4 depicts this phenomenon. This effect is
called anti-screening.
Figure 4: Vacuum polarization in QCD
Considering that 𝛼𝑠 depends on the color charge, we have an opposite trend with respect to 𝛼 in
QED (fig. 5).
Figure 5: Running coupling constant of 𝛼𝑠
So, in QCD, the anti-screening effect causes the strong coupling to become small at short distances
(= large transferred momentum). This causes the quarks inside hadrons to behave more or less as
free particles when probed at large enough energies. This property of the strong interaction is
called asymptotic freedom, and it allows us to use perturbation theory and thereby arrive at
quantitative predictions for hard-scattering cross sections in hadronic interactions. On the other
hand, at increasing distance (= small 𝑞2) the coupling becomes so strong that it is impossible to
isolate a quark from a hadron. This mechanism is called confinement; it is verified in Lattice
QCD calculations but, since it is non-perturbative, it has not been mathematically proven from first principles.
With QCD calculations we can find:
𝛼𝑠(|𝑞2|) = 𝛼0 / [1 + 𝛼0 ∙ (33 − 2𝑛𝑓)/(12𝜋) ∙ ln(|𝑞2|/𝜇2)]   (2)
where 𝛼0 is the strong coupling constant at the reference momentum transfer 𝜇 and 𝑛𝑓 is the number of quark
flavors. According to expression (2) we can conclude that:
|𝑞2| → ∞, 𝛼𝑠(|𝑞2|) → 0: asymptotic freedom;
|𝑞2| → 0, 𝛼𝑠(|𝑞2|) → ∞: confinement.
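Expression (2) can be evaluated numerically. The sketch below uses illustrative values for 𝛼0, 𝜇2 and 𝑛𝑓 (none of these are fixed in the text) and shows the decrease of 𝛼𝑠 with increasing |𝑞2|, i.e. asymptotic freedom:

```python
import math

def alpha_s(q2, alpha0=0.3, mu2=10.0, nf=5):
    """One-loop running coupling from expression (2).
    alpha0 is the coupling at the reference scale mu^2; all values are illustrative."""
    return alpha0 / (1 + alpha0 * (33 - 2 * nf) / (12 * math.pi) * math.log(q2 / mu2))

# Asymptotic freedom: the coupling decreases as |q^2| grows
print(alpha_s(5.0), alpha_s(10.0), alpha_s(100.0))
```

At |𝑞2| = 𝜇2 the logarithm vanishes and the function returns 𝛼0 exactly, which is a quick consistency check on the implementation.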
1.3 Big Bang model
The Universe we can see nowadays was, 12-16 billion years ago, concentrated in a very small region
with an infinite energy density and temperature. After the "explosion" (the creation of space and
time), the Universe became larger and colder over time.
Figure 6: Big Bang theory
- In the first microseconds of the universe's life, the energy density was so high that hadrons
(for example nucleons) could not form. Hence, quarks and gluons were de-confined:
this is the state of Quark-Gluon Plasma (QGP).
- When the energy density and the temperature went below their critical values (𝜖𝑐𝑟 ≈
1 𝐺𝑒𝑉/𝑓𝑚3, 𝑇𝑐𝑟 ≈ 170 𝑀𝑒𝑉 ≈ 2 ∙ 10¹² 𝐾), the degrees of freedom related to the color
charge remained confined in objects without net color (color singlets) with a size of about
1 fm: this is the confinement of quarks and gluons, with the formation of hadrons.
- Three minutes after the Big Bang, the temperature dropped below ≈ 100 𝑘𝑒𝑉 (10⁹ K)
and small nuclei could form and survive: this is primordial nucleosynthesis.
At this point the chemical composition of the primordial universe was fixed; this is called the
chemical freeze-out.
- After primordial nucleosynthesis the universe was still ionized and therefore opaque to
electromagnetic radiation.
- Approximately 300000 years after the Big Bang, when the temperature fell below 3000 K,
atoms (electrons + nuclei) formed. At this point the electromagnetic radiation decoupled, with
a black-body spectrum at T ≈ 3000 𝐾: this is the thermal freeze-out. Because of the expansion
of the universe, this black-body radiation has been redshifted down to a temperature of 2.7 K
(CMB, Cosmic Microwave Background).
- After 600 million to 1 billion years, galaxies formed.
Fig. 7 shows a summary of all the previous points.
Figure 7: Temperature vs. Time after Big Bang.
How can we observe the QGP? The problem is that the universe is opaque to electromagnetic
radiation for times earlier than 300000 years after the Big Bang (before the decoupling of the
radiation). In particular, the QGP is "hidden" behind the "curtain" of the CMB. Moreover, a plasma
of quarks and gluons at low temperature and high baryon density could be present in the core of
neutron stars, but it is not directly accessible either.
Because of these problems, to study the QGP we have to recreate it in the laboratory with
collisions of heavy ions at high energy.
1.4 Transition phase diagram
The confinement property is not expected to hold under extreme conditions. Non-perturbative QCD
calculations predict that at a baryon density larger than ~5 − 10 times the density of ordinary
nuclear matter (𝜌0 = 0.15 𝑛𝑢𝑐𝑙𝑒𝑜𝑛𝑠/𝑓𝑚3), or at a temperature of the order of 140-200 MeV,
nuclear matter should undergo a phase transition into the QGP state.
To represent this transition, we use two important quantities: the temperature and the baryon
chemical potential (𝜇𝐵). In statistical mechanics, the chemical potential is the minimal energy
necessary to add a particle to (or extract one from) a system: 𝜇 = 𝑑𝐸/𝑑𝑁. This potential is an
estimate of the baryon-number density.
Fig. 8 shows the phase diagram. In fig. 8 (right), the red point on the x axis
corresponds to ordinary nuclear matter (𝜇𝐵 ~ 1 𝐺𝑒𝑉); after compression and heating
(blue arrow), this matter can cross the phase boundary separating the hadron phase from the QGP phase.
Then, as in the early universe, the thermalized quark matter starts to expand and cool, moving
back across the boundary (green arrow), and the quarks recombine into a hadron gas which
continues to expand. The critical point (𝜇𝐵𝑐, 𝑇𝑐) in fig. 8 (left) separates the first-order phase
transition from the crossover region.
Figure 8: Transition phase diagram
1.5 QGP predictions
There are theories and models which provide predictions about the QGP:
- Perturbative QCD (pQCD) allows us to use a perturbative expansion in powers of the strong
coupling constant 𝛼𝑠, with the requirement that 𝛼𝑠 ≪ 1. The processes that satisfy this
condition are those with large transferred momentum |𝑞2| (see paragraph 1.2), for
example heavy-flavor production in hadron collisions.
- Lattice QCD is a non-perturbative formulation of QCD based on a discrete
lattice of space-time coordinates, which provides a quantitative understanding of the new
phase of matter. For example, calculations in this framework give quite accurate estimates
of the critical temperature and of the hadron masses. The drawback is that, to reach a small
lattice spacing, very high performance computers are needed.
- Effective models are based on QCD and provide a phenomenological
description of the physical processes. One example is the MIT bag model, where quarks
are considered massless and confined inside a bag of finite dimensions. This confinement
derives from the balance between the pressure due to the kinetic energy of the quarks and
the external pressure. If the internal pressure grows, at some point it
overcomes the external one and the bag breaks. The pressure in the bag can increase for
two main reasons: the temperature becomes higher (larger kinetic energy of the
quarks) or the baryon density grows (compression). Experimentally, we can induce these
conditions in nuclear matter with heavy ion collisions.
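The pressure-balance argument of the bag model can be turned into a rough estimate of the deconfinement temperature. Treating the QGP as an ideal gas with ~37 effective degrees of freedom, its pressure 𝑔𝜋²/90 ∙ 𝑇⁴ exceeds the bag pressure B above 𝑇𝑐 = (90𝐵/(𝑔𝜋²))^(1/4). This is a simplified sketch, and the bag constant 𝐵^(1/4) = 200 MeV is an illustrative value not given in the text:

```python
import math

def bag_model_tc(bag_quarter_mev=200.0, g_eff=37.0):
    """Rough deconfinement temperature (MeV) from the MIT bag model:
    the QGP pressure g_eff*pi^2/90 * T^4 minus the bag constant B turns
    positive at T_c = (90*B / (g_eff*pi^2))**(1/4)."""
    B = bag_quarter_mev ** 4                      # bag constant in MeV^4
    return (90.0 * B / (g_eff * math.pi ** 2)) ** 0.25

print(round(bag_model_tc()))  # of the order of 140 MeV
```

The result is of the same order as the 140-200 MeV critical temperature quoted in paragraph 1.4, which is why this simple model is a useful first orientation.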
1.6 Heavy ion collisions
To explore the possible existence of the QGP, it is important to create a strongly interacting system
which satisfies two main requirements:
- Large spatial extension: we want to use macroscopic variables, so the system dimensions
must be larger than the scale of the strong interaction (~ 1 𝑓𝑚). A large extension also
means a large number of particles (≫ 1).
- Long lifetime: we want to use the language of thermodynamics, so the system must reach
thermal equilibrium (𝜏 ≫ 1 𝑓𝑚/𝑐).
Moreover, we want to reach the correct energy density for the phase transition. If the critical
temperature is 𝑇𝑐 ~ 170 MeV, what is the value of the critical energy density 𝜖𝑐? The expression for
𝜖𝑐 can be derived from statistical mechanics:
𝜖𝑐 = 37 ∙ (𝜋2/30) ∙ 𝑇4 / (ℏ𝑐)3   (3)
where 37 is a factor which counts the degrees of freedom of quarks and gluons. If we use 𝑇 =
𝑇𝑐 = 170 𝑀𝑒𝑉 and ℏ𝑐 = 197 𝑀𝑒𝑉 ∙ 𝑓𝑚, we find:
𝜖𝑐 ≅ 1 𝐺𝑒𝑉/𝑓𝑚3   (4)
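Expression (3) is easy to check numerically; a minimal sketch using the values quoted above:

```python
import math

def critical_energy_density(T_mev=170.0, dof=37.0, hbar_c=197.0):
    """Expression (3): eps_c = dof * pi^2/30 * T^4 / (hbar*c)^3, in MeV/fm^3."""
    return dof * math.pi ** 2 / 30.0 * T_mev ** 4 / hbar_c ** 3

eps_gev = critical_energy_density() / 1000.0      # convert MeV/fm^3 -> GeV/fm^3
print(f"{eps_gev:.2f} GeV/fm^3")  # of order 1 GeV/fm^3, consistent with expression (4)
```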
Making p-p or 𝑒+𝑒− collisions is not enough to obtain this energy density, as we can see in fig. 9. In
fact, the graph on the left shows that only a few tens of particles are produced even at energies of
hundreds of GeV. The problem remains at the LHC with p-p collisions at very high energies,
because the maximum multiplicity is below 100 particles (fig. 9, right). Hence, we need to make
collisions between ions (= systems with larger dimensions).
Figure 9: Particle multiplicity at different energies for p-p (higher energies on the right) and 𝑒+𝑒− collisions
So, high-energy collisions between heavy ions (Pb-Pb in ALICE or Au-Au at RHIC) can create
a system with the requirements described at the beginning of this paragraph. In particular,
with this hot and dense system we want to study the phase transition.
The first experiments were performed in fixed-target configuration at the Alternating
Gradient Synchrotron (AGS) in Brookhaven and at the Super Proton Synchrotron (SPS) at CERN,
with center-of-mass (CM) energies in the range between 2 AGeV and 18 AGeV, where A is the
number of nucleons in the nucleus. Later, experiments with colliding nuclear beams started at RHIC
in Brookhaven, taking advantage of the higher energy of 200 AGeV available in the CM frame; the
highest energy was reached at the Large Hadron Collider (LHC) at CERN in 2010, this time
with a CM energy of 2760 AGeV. In mid-2015, p-p collisions at 13 TeV were performed
for the first time at the LHC, and Pb-Pb collisions at 5.5 TeV are expected within the next three
years.
The system created in a Pb-Pb collision can reach a volume of the order of 1000 fm3, consisting of
~ 1000 hadrons, and, already at SPS energies, can reach an energy density ~ 200 times larger
than that of a nucleus.
1.6.1 Collision geometry
The Glauber model provides a phenomenological description of the nucleus-nucleus collision
starting from the geometrical configuration of the colliding nuclei. This model describes the
nucleus-nucleus interaction in terms of interactions between the constituent nucleons.
First of all, the impact parameter has to be defined:
- In p-A (proton-nucleus) collisions, the impact parameter is the vector in the transverse plane
xy (z is the collision axis) connecting the projectile and the center of the target nucleus. It is the
distance of closest approach between the proton and the target nucleus. See fig. 10 (left).
- In A-A (nucleus-nucleus) collisions, the impact parameter is the vector connecting the
centers of the two colliding nuclei. See fig. 10 (right).
Figure 10: Impact parameter for p-A (left) and A-A (right) collision.
The impact parameter defines the centrality of the collision. We can have two different
situations:
- Central collision: a collision with a small impact parameter. In this case many
nucleons are involved in the interaction, with many collisions between nucleons, a large interaction
volume and many particles produced in the final state.
- Peripheral collision: a collision with a large impact parameter. In this case few
nucleons are involved in the interaction, with few collisions between nucleons, a small interaction
volume and a small number of particles produced in the final state.
In both cases, we call participants the nucleons that have had at least one interaction with the
nucleons of the other nucleus and spectators the nucleons that have had no interaction, as shown
in fig. 11. The spectators proceed with little perturbation along their original direction.
Figure 11: Central (left) and peripheral (right) collisions.
Starting from these concepts, the Glauber model treats the collision between two nuclei as an
incoherent superposition of interactions between the nucleons that form the nuclei. In this way, we can
describe the A-A collision with probability theory: in this framework, the collision
between two nuclei is a sequence of independent events (= collisions between nucleons).
The model also gives quantitative expressions to calculate:
- the interaction probability;
- the number of elementary N-N collisions (N = nucleon) (𝑁𝑐𝑜𝑙𝑙);
- the number of participants (𝑁𝑝𝑎𝑟𝑡), also called "wounded nucleons";
- the number of spectators;
- the dimensions of the overlap region of the two colliding nuclei.
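The quantities above can be illustrated with a toy Monte Carlo Glauber calculation. This is only a sketch, not the implementation used in ALICE: nucleons are drawn from a uniform sphere rather than a Woods-Saxon profile, and the radius and nucleon-nucleon cross-section values are illustrative.

```python
import math, random

def toy_glauber(b, A=208, R=6.6, sigma_nn=70.0, seed=1):
    """Toy Monte Carlo Glauber for an A-A collision at impact parameter b (fm).
    Two nucleons collide if their transverse distance squared is below
    sigma_nn/pi, with sigma_nn in mb converted to fm^2 (1 mb = 0.1 fm^2).
    Returns (N_part, N_coll)."""
    rng = random.Random(seed)

    def nucleus():
        pts = []
        while len(pts) < A:
            x, y, z = (rng.uniform(-R, R) for _ in range(3))
            if x * x + y * y + z * z <= R * R:
                pts.append((x, y))           # only transverse coordinates matter
        return pts

    d2 = sigma_nn * 0.1 / math.pi            # max transverse distance squared (fm^2)
    proj = [(x + b, y) for x, y in nucleus()]  # shift projectile by b along x
    targ = nucleus()
    n_coll = 0
    wounded_p, wounded_t = set(), set()
    for i, (xp, yp) in enumerate(proj):
        for j, (xt, yt) in enumerate(targ):
            if (xp - xt) ** 2 + (yp - yt) ** 2 <= d2:
                n_coll += 1
                wounded_p.add(i)
                wounded_t.add(j)
    return len(wounded_p) + len(wounded_t), n_coll

# A central collision (small b) gives far more participants than a peripheral one
print(toy_glauber(b=2.0), toy_glauber(b=12.0))
```

Even this crude version reproduces the qualitative statement of the text: 𝑁𝑝𝑎𝑟𝑡 and 𝑁𝑐𝑜𝑙𝑙 fall steeply as the impact parameter grows.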
Finally, it is important to distinguish between central and peripheral collisions because the energy
density released is maximal in a central collision. In this case, when the two nuclei collide, a large
volume of hot hadronic matter is created, possibly fulfilling the conditions for QGP formation.
1.6.2 Collision details and space-time evolution
Before the collision, the two interacting nuclei can be represented by two thin disks, given the effect
of the Lorentz contraction (step 1 of fig. 12).
Figure 12: The collision sequence
At a time t=0, the collision begins and all the energy is concentrated in the central region (step 2);
then, for times 𝑡 < 0.1 ÷ 0.3 𝑓𝑚/𝑐, we have the formation of the QGP (step 3) and finally the
expansion and the hadronization take place (step 4). From fig. 12, it is clear that the collision
between two nuclei is an event with a complex space-time evolution.
In this context the concept of energy density becomes important. The most commonly used
definition is the one provided by J.D. Bjorken (1982). In the reference frame in which both
nuclei have high energy, they can be seen as thin disks that cross each other quickly, and the
secondary particles are generated in an initial volume with a limited longitudinal extension:
this is essentially the Bjorken condition. A simple calculation gives the Bjorken energy density:
𝜖𝐵𝑗 = (𝑚𝑇 / (𝜏𝑓 𝐴)) · 𝑑𝑁/𝑑𝑦 |𝑦=𝑦𝐶𝑀 = (1 / (𝜏𝑓 𝐴)) · 𝑑𝐸𝑇(𝜏𝑓)/𝑑𝑦   (5)
where 𝜏𝑓 is the formation time, 𝐴 the transverse overlap area of the colliding nuclei, 𝐸𝑇 the
transverse energy (from the relativistic relation 𝐸² = 𝑝² + 𝑚² = 𝑝𝐿² + 𝑝𝑇² + 𝑚² = 𝑝𝐿² + 𝐸𝑇²,
where L stands for "longitudinal" and T for "transverse") and 𝑦 the rapidity (see paragraph
1.6.3). The expression (5) is valid under the condition
𝜏𝑓 ≫ 2𝑅/𝛾, where R is the nucleus radius and 𝛾 is the Lorentz factor.
Historically, the formation time was assumed to be 1 fm/c, but this choice needs a justification
because at the SPS and AGS 𝜏𝑓 > 1 fm/c. In general the formation time can be defined using
the uncertainty principle, 𝜏𝑓 = ℏ/𝑚𝑇, where the transverse mass 𝑚𝑇 can be obtained from
measurements of the final state. For example, at RHIC the formation time is 0.35 fm/c. The
energy density reaches its peak of 15 GeV/fm³ at the formation time and then decreases to
5.4 GeV/fm³ at the thermalization time, as shown in fig. 13. The evolution after
thermalization is model dependent.
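As a rough numerical illustration of equation (5) (with assumed, order-of-magnitude inputs, not measured values):

```python
import math

def bjorken_energy_density(dET_dy, tau_f=1.0, R_T=7.0):
    """Bjorken estimate, eq. (5): epsilon = (1 / (tau_f * A)) * dE_T/dy,
    with A = pi * R_T^2 the transverse overlap area.
    dET_dy : transverse energy per unit rapidity at mid-rapidity (GeV)
    tau_f  : formation time (fm/c)
    R_T    : effective transverse radius of the overlap region (fm)
    Returns the energy density in GeV/fm^3."""
    A = math.pi * R_T**2
    return dET_dy / (tau_f * A)

# Assumed input: dE_T/dy ~ 2000 GeV for a central heavy-ion collision
# gives an energy density of order 10 GeV/fm^3 for tau_f = 1 fm/c,
# and a larger value for a shorter formation time.
print(bjorken_energy_density(2000.0))
print(bjorken_energy_density(2000.0, tau_f=0.35))
```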
Figure 13: Temporal evolution of the energy density at RHIC.
Figure 14: 𝑑𝐸𝑇/𝑑𝑦 for pairs of participants in function of the energy in the center of mass.
At LHC, 𝑑𝐸𝑇/𝑑𝑦 is larger by a factor 2.5 with respect to RHIC, and the resulting energy
density is 3 times that of RHIC. As fig. 14 shows, 𝑑𝐸𝑇/𝑑𝑦 at LHC cannot be predicted by an
extrapolation of the RHIC data. See also paragraph 1.6.3.
The collision process can be divided into a number of phases, following fig. 15.
Figure 15: Space-time evolution of a collision.
If the system is sufficiently interacting, it reaches thermal equilibrium (thermalization) at a
proper time 𝜏0, after the so-called formation phase or pre-equilibrium. After that the system
expands rapidly and cools down, passing through characteristic stages during its evolution:
QGP formation: equilibrium is reached among the partonic constituents of the
system. This phase can be treated with equilibrium thermodynamics.
Hadronization: partons fragment into colorless hadrons. This phase happens at a
temperature 𝑇 < 𝑇𝑐. The entropy density decreases strongly, so the volume increases
at constant temperature because the total entropy cannot diminish.
Gas of interacting hadrons.
Chemical freeze-out: the interactions occur at low energy and can no longer change the
abundances of the different hadron species. The inelastic interactions among constituents
cease. Here the temperature of the system is 𝑇𝑐ℎ(RHIC−LHC) ~ 170 MeV (see paragraph
1.6.4).
Thermal freeze-out: the hadrons stop interacting with each other. The elastic
interactions among constituents cease. Here the temperature of the system is
𝑇𝑓𝑜(RHIC−LHC) ~ 110−130 MeV (see paragraph 1.6.4).
When even elastic collisions stop, hadrons stream freely away to be detected by the experiment.
The short lifetime of the QGP (only ~10⁻²³ s), together with the impossibility of detecting free
quarks, does not allow a direct measurement of the phase transition. For this reason, observables
that can probe the possible formation of the QGP are mainly indirect signals, which should be
able to test the properties of the medium at different stages of the collision evolution. The
different types of observables are listed below:
Hard (𝑝𝑇 ≳ 4 GeV):
o Processes with high transferred momentum; they are possible at the beginning of the
collision, when the energy has not yet degraded.
o They are rare processes with a small cross section; their production rate is calculable
with pQCD (heavy flavours, jets).
o They scale with the number of collisions (Glauber model).
o They are sensitive to the successive phases of the collision.
Direct photons:
o They are radiated by the plasma (both real and virtual photons, the latter observable
as lepton pairs of opposite sign).
o They are early probes but, since the photon background from the later phases of the
collision is high, their detection is very difficult.
Soft (𝑝𝑇 < 1 GeV):
o They represent the major part of the observables (for example hadrons with light
quarks).
o They are produced in the late stages of a collision, when the energy is highly degraded.
o In this case the coupling constant is large, hence non-perturbative QCD has to be
used.
o About 99.5% of the hadrons produced at RHIC are soft.
o They scale with the number of participants (wounded-nucleon model).
1.6.3 Particle multiplicity
The main observables used to characterize the multiplicity of produced particles are the rapidity
and pseudo-rapidity density distributions of primary charged particles. The pseudo-rapidity 𝜂
refers to the polar angle 𝜃, with respect to the beam axis, at which a particle is emitted from the
interaction vertex. It can be expressed as
𝜂 = −ln tan(𝜃/2) = (1/2) ln[(|𝑝| + 𝑝𝑧)/(|𝑝| − 𝑝𝑧)]   (6)
where p and pz are the total momentum and longitudinal momentum of the emitted particle
respectively. The rapidity y, instead, is given by:
𝑦 = (1/2) ln[(𝐸 + 𝑝𝑧)/(𝐸 − 𝑝𝑧)]   (7)
where E is the total energy of the emitted particle.
Generally it is easier to measure 𝜂 than y, since the pseudo-rapidity does not require particle
identification. At high energy 𝜂 ≈ 𝑦 because 𝐸 ≈ |𝑝|.
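A minimal numerical check of relations (6) and (7), with an illustrative high-energy pion (the mass and momentum values are assumed):

```python
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2) = 0.5 * ln((|p| + pz) / (|p| - pz)), eq. (6)."""
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))

def rapidity(E, pz):
    """y = 0.5 * ln((E + pz) / (E - pz)), eq. (7); requires the energy,
    i.e. the particle mass, hence particle identification."""
    return 0.5 * math.log((E + pz) / (E - pz))

# Forward pion with |p| >> m: E ~ |p|, so eta ~ y (eta slightly larger)
m_pi = 0.1396                       # GeV
px, py, pz = 0.5, 0.0, 10.0         # GeV, assumed momentum
E = math.sqrt(px**2 + py**2 + pz**2 + m_pi**2)
print(pseudorapidity(px, py, pz), rapidity(E, pz))  # nearly equal
```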
In fig. 16 we can see the dependence of (𝑑𝑁𝑐ℎ/𝑑𝜂)/(0.5 ∙ ⟨𝑁𝑝𝑎𝑟𝑡⟩) (the 𝑑𝑁/𝑑𝜂 distribution
per participant pair) on the center-of-mass energy, obtained from the first ~3600 central Pb-Pb
collisions at √𝑠𝑁𝑁 = 2.76 TeV (ALICE, LHC). The graph shows that the trend of
(𝑑𝑁𝑐ℎ/𝑑𝜂)/(0.5 ∙ ⟨𝑁𝑝𝑎𝑟𝑡⟩) is different for p-p and Pb-Pb collisions, with a steeper growth for
Pb-Pb. From the analysis of the data, the ALICE collaboration found
𝑑𝑁𝑐ℎ/𝑑𝜂 = 1584 ± 4 (stat.) ± 76 (syst.).
Figure 16: (𝑑𝑁𝑐ℎ/𝑑𝜂)/(0.5 ∙ ⟨𝑁𝑝𝑎𝑟𝑡⟩) vs. √𝑠𝑁𝑁 (ALICE collaboration).
Fig. 17, instead, shows the dependence of (𝑑𝑁𝑐ℎ/𝑑𝜂)/(0.5 ∙ ⟨𝑁𝑝𝑎𝑟𝑡⟩) on the number of
participants for Pb-Pb collisions at √𝑠𝑁𝑁 = 2.76 TeV (LHC) and Au-Au collisions at
√𝑠𝑁𝑁 = 0.2 TeV (RHIC average). The charged-particle density per participant pair increases
with ⟨𝑁𝑝𝑎𝑟𝑡⟩, from 4.4 ± 0.4 for the most peripheral to 8.4 ± 0.3 for the most central class.
The centrality dependence of the multiplicity is found to be very similar at √𝑠𝑁𝑁 = 2.76 TeV
and √𝑠𝑁𝑁 = 0.2 TeV.
Figure 17: Dependence of (𝑑𝑁𝑐ℎ/𝑑𝜂)/(0.5 ∙ ⟨𝑁𝑝𝑎𝑟𝑡⟩) on the number of participants for Pb-Pb
collisions at √𝑠𝑁𝑁 = 2.76 𝑇𝑒𝑉 and Au-Au collisions at √𝑠𝑁𝑁 = 0.2 𝑇𝑒𝑉 (RHIC average). The
scale for the lower-energy data is shown on the right-hand side and differs from the scale for
the higher energy data on the left-hand side by a factor of 2.1. For the Pb-Pb data, uncorrelated
uncertainties are indicated by the error bars, while correlated uncertainties are shown as the
grey band. Statistical errors are negligible. The open circles show the values obtained for
centrality classes obtained by dividing the 0%-10% most central collisions into four, rather
than two classes. The values for non-single-diffractive (NSD) and inelastic pp collisions are
the results of interpolating between data at 2.36 and 7 TeV.
The rapidity distribution of the produced particles can then be used to estimate the initial energy
density in the central reaction zone through the Bjorken equation (equation 5). For Pb-Pb
collisions in the centrality range 0-5%, ALICE obtained 𝜖𝐵𝑗 ≅ 16 GeV/fm³. As already noted,
this is about a factor 3 larger than the corresponding value at RHIC in the same centrality range.
In both estimates the QGP formation time considered was 𝜏𝑓 = 1 fm/c. The energy density
measured at LHC and at RHIC is well above the critical density 𝜖𝑐 ~ 1 GeV/fm³ expected for
the phase transition according to lattice QCD calculations.
Finally, fig. 18 shows the multiplicities of the different identified particle species. The
multiplicity of each species depends on the collision energy. For example, at high energy
(√𝑠 > 5 GeV) pions represent the majority of the particles produced (they have the lowest mass
among the particles shown in the plot).
Figure 18: Multiplicity of the different species of particles as a function of √𝑠𝑁𝑁.
1.6.4 Particle spectra and radial flow
In heavy-ion collisions, most of the particles produced are hadrons generated in soft (non-
perturbative) processes.
The transverse mass distribution 𝑑𝑁/(𝑚𝑇𝑑𝑚𝑇) for low momentum particles has an exponential
trend:
𝑑𝑁/(𝑚𝑇 𝑑𝑚𝑇) ∝ 𝑒^(−𝑚𝑇/𝑇𝑠𝑙𝑜𝑝𝑒)  →  𝑑𝑁/𝑑𝑚𝑇 ∝ 𝑚𝑇 𝑒^(−𝑚𝑇/𝑇𝑠𝑙𝑜𝑝𝑒)   (8)
where 𝑇𝑠𝑙𝑜𝑝𝑒 is a fit coefficient. The 𝑑𝑁/(𝑚𝑇𝑑𝑚𝑇) spectrum is identical (same slope) for all
the hadrons produced in p-p collisions, as shown in fig. 19. This is called mT-scaling: the
𝑇𝑠𝑙𝑜𝑝𝑒 coefficient assumes the same value of ~167 MeV for all particles. The interpretation of
this result is that these are thermal Boltzmann spectra and 𝑇𝑠𝑙𝑜𝑝𝑒 represents the emission
temperature of the particles; in other words, 𝑇𝑠𝑙𝑜𝑝𝑒 is the system temperature at the thermal
freeze-out phase.
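The extraction of 𝑇𝑠𝑙𝑜𝑝𝑒 from an exponential spectrum (8) can be sketched as a simple log-linear fit on synthetic, noiseless data:

```python
import numpy as np

def fit_T_slope(mT, spectrum):
    """Extract T_slope from dN/(mT dmT) ∝ exp(-mT / T_slope), eq. (8),
    via a linear fit of ln(spectrum) vs mT: the slope is -1/T_slope."""
    slope, _ = np.polyfit(mT, np.log(spectrum), 1)
    return -1.0 / slope

# Synthetic spectrum generated with T_slope = 0.167 GeV (the mT-scaling value)
mT = np.linspace(0.2, 1.5, 40)             # transverse mass, GeV
spectrum = 1e3 * np.exp(-mT / 0.167)       # arbitrary normalization
print(fit_T_slope(mT, spectrum))           # recovers ~0.167 GeV
```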
The mT-scaling is broken in A-A collisions: the slope of the spectra increases with the particle
mass, and the heaviest particles are shifted to higher values of 𝑝𝑇. In fig. 20 we see that ⟨𝑝𝑇⟩
increases with the particle mass, which leads to a 𝑇𝑠𝑙𝑜𝑝𝑒 increasing with the mass of the
particles.
Figure 20: ⟨𝑝𝑇⟩ vs the number of participants (PHENIX RHIC).
One can see that 𝑇𝑠𝑙𝑜𝑝𝑒 depends linearly on the mass. This leads to the interpretation that in
nucleus-nucleus collisions there is a collective motion of all particles, superimposed on the
thermal one, in the transverse plane (xy) with a velocity 𝑣⊥ such that:
𝑇𝑠𝑙𝑜𝑝𝑒 = 𝑇𝑓𝑜 + (1/2) 𝑚 𝑣⊥²   (9)
where fo stands for “freeze-out”.
This collective expansion in the transverse plane is called radial flow; fig. 21 gives a graphical
representation of the phenomenon. The collective motion is due to the high pressures generated
by the compression and heating of the nuclear matter. The flow velocity of each volume element
of the system is the sum of the velocities of all the particles contained in it. We can also say that
the collective flow is a correlation between the velocity of a volume element and its space-time
position.
Figure 19: mT-scaling for pions, kaons and antiprotons at STAR (RHIC). On the x axis is the
transverse kinetic energy.
Figure 21: Radial flow
The values found for 𝑇𝑓𝑜 and 𝛽⊥ = 𝑣⊥/𝑐 are summarized in table 1.
                 SPS (√𝑠 = 17 GeV)     Central Au-Au at RHIC (√𝑠 = 200 GeV)
𝑇𝑓𝑜              ≈ 120 MeV             ≈ 110 ± 23 MeV
𝛽⊥               0.5                   0.7 ± 0.2
Table 1: 𝑇𝑓𝑜 and 𝛽⊥ results at SPS and RHIC
In general 𝑇𝑓𝑜 and 𝛽⊥ can be extracted from the transverse momentum spectra via blast-wave
fits. We have already mentioned that the chemical freeze-out temperature is about 170 MeV at
RHIC and LHC.
At LHC the radial flow is stronger: its velocity 𝛽⊥ is more than 10% greater for central
collisions with respect to RHIC.
These values, obtained purely by fitting the data, have a theoretical foundation in hydrodynamics.
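The idea behind this extraction — inverting the linear mass dependence of equation (9) — can be sketched on synthetic data (the input values below are assumed for illustration, not fits to real spectra):

```python
import numpy as np

# Hypothetical T_slope values for pi/K/p, generated from eq. (9) with
# assumed parameters T_fo = 0.110 GeV and beta_perp = 0.5.
masses = np.array([0.1396, 0.4937, 0.9383])        # GeV
T_fo_true, beta_true = 0.110, 0.5
T_slopes = T_fo_true + 0.5 * masses * beta_true**2

# A linear fit of T_slope vs mass gives slope = v_perp^2 / 2 (i.e.
# beta_perp^2 / 2 in units of c) and intercept = T_fo.
slope, T_fo_fit = np.polyfit(masses, T_slopes, 1)
beta_fit = np.sqrt(2.0 * slope)
print(T_fo_fit, beta_fit)    # recovers 0.110 GeV and 0.5
```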
1.6.5 Anisotropic transverse flow
Figure 22: Anisotropic transverse flow representation.
In non-central heavy-ion collisions, an initially asymmetric overlap region is created (fig. 22).
The impact parameter generates a preferential direction in the transverse plane xy. The reaction
plane (green line) is defined by the impact parameter and the beam direction; Ψ𝑅𝑃 is the
azimuthal angle of the impact parameter vector in the transverse plane.
The anisotropic transverse flow is a correlation between the azimuthal angle 𝜑 = atan(𝑝𝑦/𝑝𝑥)
of the particles and the impact parameter (i.e. the reaction plane). It is generated when the
particle momenta in the final state depend on the local and global physical conditions of the
event. This kind of flow is therefore an unambiguous mark of collective behavior.
From a macroscopic point of view, the pressure gradients (the forces that push the particles) in
the transverse plane are anisotropic (they depend on 𝜑) and are greater in the x-z plane (fig. 22,
left) than along the y direction. Hence, the azimuthal distribution of the detected particles will
be anisotropic.
From a microscopic point of view, the interactions between the produced particles can convert
this initial geometrical anisotropy into an anisotropy of the particle momenta, which can be
measured experimentally.
We expand the azimuthal particle distribution relative to the reaction plane in a Fourier series,
where the azimuthal angle of each particle is measured with respect to the reaction plane
(angle 𝜑 − Ψ𝑅𝑃):
𝑑𝑁/𝑑(𝜑 − Ψ𝑅𝑃) = (𝑁0/2𝜋) (1 + 2𝑣1 cos(𝜑 − Ψ𝑅𝑃) + 2𝑣2 cos(2(𝜑 − Ψ𝑅𝑃)) + ⋯)   (10)
where N0 is a normalization constant. The sin(𝜑 − Ψ𝑅𝑃) terms are absent because the particle
distributions are symmetric relative to Ψ𝑅𝑃. The 𝑣𝑛 coefficients describe the deviations from an
isotropic distribution. From the properties of the Fourier series we find:
𝑣𝑛 = ⟨cos [𝑛(𝜑 − Ψ𝑅𝑃)]⟩ (11)
According to the values of these coefficients we can say:
Directed flow: if 𝑣1 ≠ 0 (𝑣2 = 0) there is a difference between the number of particles emitted
parallel (0°) and anti-parallel (180°) to the impact parameter, i.e. a preferential direction in
the particle emission, as shown in fig. 23.
Figure 23: Directed flow; the red arrow indicates the preferential direction.
Elliptic flow: if 𝑣2 ≠ 0 (𝑣1 = 0) there is a difference between the number of particles emitted
parallel (0° and 180°) and perpendicular (90° and 270°) to the impact parameter. This is
the expected effect of the difference between the pressure gradients parallel and orthogonal
to the impact parameter, and it represents the elliptic deformation of the particle distribution
in the transverse plane. Fig. 24 illustrates the phenomenon; the two cases "out of plane" and
"in plane" refer to the sign of 𝑣2, 𝑣2 < 0 and 𝑣2 > 0 respectively.
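The estimator (11) can be demonstrated on a synthetic event sample drawn from the distribution (10) with an assumed 𝑣2 (all numbers below are illustrative):

```python
import numpy as np

def v_n(phi, psi_rp, n):
    """v_n = <cos n(phi - Psi_RP)>, eq. (11)."""
    return np.cos(n * (phi - psi_rp)).mean()

# Sample azimuthal angles from eq. (10) with v2 = 0.10 and all other
# harmonics zero, using accept-reject, then recover v2 from the average.
rng = np.random.default_rng(1)
v2_true, psi_rp = 0.10, 0.3
phi = rng.uniform(0.0, 2.0 * np.pi, 400_000)
pdf = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi_rp))
phi = phi[rng.uniform(0.0, 1.0, phi.size) < pdf / (1.0 + 2.0 * v2_true)]

print(v_n(phi, psi_rp, 2))   # ~0.10: the injected elliptic flow
print(v_n(phi, psi_rp, 1))   # ~0: no directed flow in the sample
```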
Figure 24: Elliptic flow, the black arrows indicate the preferential directions.
The elliptic flow is very important because:
the initial geometrical anisotropy is attenuated as the system evolves;
the pressure gradients causing the flow are stronger in the first moments after the collision;
so this type of flow is particularly sensitive to the equation of state of the system in the first
moments of the collision. In fact, fig. 25 shows that the geometrical anisotropy 𝜖𝑥 diminishes
with time while the momentum anisotropy 𝜖𝑝 increases rapidly in the first instants
(𝜏 < 2−3 fm/c) after the collision, when the system is in the QGP state; it then remains
approximately constant during the phase transition (2 < 𝜏 < 5 fm/c) and finally increases
slightly in the hadron-gas phase (𝜏 > 5 fm/c).
Figure 25: Evolution of the geometrical (𝜖𝑥) and momentum (𝜖𝑝) anisotropies after the collision
1.6.6 Elliptic flow at LHC
Figure 26: v2 vs pT at LHC.
In fig. 26 (left) we can see that the 𝑣2 parameter as a function of pT does not change, within
the uncertainties, going from √𝑠𝑁𝑁 = 200 GeV to 2.76 TeV. In fig. 26 (right), instead, one can
see that 𝑣2 vs 𝑝𝑇 for pions, kaons and protons shows larger differences between LHC and
RHIC. This results from the larger radial flow at LHC, which "pushes" the protons to higher
values of transverse momentum.
Moreover, in fig. 26, the hydrodynamic predictions describe the pions and kaons very well for
semi-peripheral (40-50%) and semi-central (10-20%) collisions, but not the anti-protons in the
10-20% centrality bin, because the radial flow in the data is greater than the one predicted by
the model.
Finally, the elliptic flow was one of the key observables used in 2005 to claim that in Au-Au
collisions at RHIC a "strongly interacting QGP" (sQGP) is formed. The fireball rapidly reaches
thermal equilibrium (0.6-1 fm/c) and behaves like a perfect liquid (mean free path << system
dimensions, viscosity close to zero).
1.6.7 Jet quenching and ℛ𝐴𝐴
In hadronic collisions, hard parton scatterings occurring in the initial interaction produce
cascades of consecutive parton emissions, called jets. The jets fragment into hadrons during
the hadronization phase. The final state is characterized by clusters of particles close in phase
space: their transverse momenta relative to the jet axis are small compared to the component
along the jet axis, and this collimation increases with increasing jet energy.
When traversing the QGP, partons are expected to lose an amount of energy proportional to
the square of the in-medium path length, causing the so-called jet quenching effect when the
QGP is produced.
Some of the consequences of the “jet quenching” effect are:
reduction of the high-pT particle yield;
dependence on the impact parameter of the collision: jet quenching is expected to be larger
for central collisions;
two back-to-back jets with high momentum are unlikely to both be reconstructed, because
the jet with the longer path in the nuclear medium becomes softer and thus is not found
by the jet reconstruction algorithm. See fig. 27.
Figure 27: Two back-to-back jets in the fireball.
The production of hard particles in nucleus-nucleus collisions is expected to scale with the
number of elementary nucleon-nucleon collisions. Hence, the pT-spectra measured in
nucleus-nucleus collisions can be estimated from those of p-p collisions with the scaling law
(binary scaling):
(𝑑𝑁/𝑑𝑝𝑇)𝐴𝐴 = 𝑁𝑐𝑜𝑙𝑙 ∙ (𝑑𝑁/𝑑𝑝𝑇)𝑝𝑝
The phenomenon of jet quenching can be quantitatively estimated by measuring the nuclear
modification factor ℛ𝐴𝐴, which is defined as:
ℛ𝐴𝐴 = (1/𝑁𝑐𝑜𝑙𝑙) ∙ (𝑑𝑁/𝑑𝑝𝑇)𝐴𝐴 / (𝑑𝑁/𝑑𝑝𝑇)𝑁𝑁   (12)
where (dN/dpT)AA is the pT-spectrum of the particles produced in the nucleus-nucleus collision,
(dN/dpT)NN is the pT-spectrum from the p-p collision and Ncoll is the mean number of binary
collisions between nucleons.
If the binary scaling is valid, we expect:
ℛ𝐴𝐴 < 1 for soft physics (low 𝑝𝑇);
ℛ𝐴𝐴 = 1 at high 𝑝𝑇 (hard processes).
Figure 28: ℛ𝐴𝐴 vs. 𝑝𝑇 without nuclear effects.
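A toy numerical illustration of definition (12) on hypothetical power-law spectra (the spectral shapes and 𝑁𝑐𝑜𝑙𝑙 below are assumed, not measured):

```python
import numpy as np

def R_AA(spec_AA, spec_pp, n_coll):
    """Nuclear modification factor, eq. (12)."""
    return spec_AA / (n_coll * spec_pp)

pT = np.linspace(1.0, 20.0, 50)               # GeV/c
spec_pp = pT**-6.0                            # toy p-p spectrum
spec_scaled = 1600.0 * spec_pp                # pure binary scaling, N_coll = 1600
spec_quenched = 1600.0 * (pT + 1.0)**-6.0     # crude stand-in for in-medium energy loss

print(R_AA(spec_scaled, spec_pp, 1600.0)[0])     # 1.0: binary scaling holds
print(R_AA(spec_quenched, spec_pp, 1600.0)[-1])  # < 1: quenching-like suppression
```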
Fig. 28 shows the trend of ℛ𝐴𝐴 under the hypothesis of no nuclear effects. In nucleus-nucleus
collisions, however, the binary scaling is broken by effects on the initial and final states:
INITIAL STATE EFFECTS (p-A and A-A collisions)
o Cronin effect: discovered in the 1970s in p-A collisions at Fermilab. Before the hard
scattering, partons can undergo several elastic scatterings with partons of the target
nucleus, acquiring a pT distribution proportional to the square root of the number of
elastic collisions (random walk), see fig. 29. When the hard process happens, the
parton with initial pT has received an extra "kick" leading to a greater transverse
momentum kT. As pT increases, the effect of this extra "kick" becomes smaller, so
the Cronin effect disappears for 𝑝𝑇 → ∞.
Figure 29: Cronin effect explanation (left) and result (right)
o Modification of the PDFs: the PDFs (Parton Distribution Functions) inside nuclei
are different from those calculated for free nucleons.
FINAL STATE EFFECTS (AA collisions)
o Energy loss – jet quenching: in a hot and colored medium, partons lose energy
interacting with the color field of the system, mainly through radiative losses (at high
energy). This effect reduces the production of hard hadrons and is a signal of the
possible formation of a new state of matter. See fig. 30.
Figure 30: Energy loss-jet quenching phenomenon. Orange rectangle is QGP
Fragmentation and coalescence are two mechanisms of hadronization. In the first, a parton
with high 𝑝𝑇 fragments into hadrons with lower 𝑝𝐻 = 𝑧 ∙ 𝑝𝑞 (z < 1). In the second, two or
three partons with low 𝑝𝑇 recombine to create a hadron with a higher 𝑝𝑇. See fig. 31.
Figure 31: Examples of fragmentation (top) and coalescence (bottom) processes.
Turning to the experimental results, fig. 32 shows the ℛ𝐴𝐴 for charged hadrons and 𝜋0. The
suppression is a factor ~5 with respect to the trend seen in fig. 29 (right). For 𝑝𝑇 > 4 GeV/c
this suppression can be due to a final-state effect such as energy loss. Later experiments
measured ℛ𝐴𝐴 in d-Au collisions and for particles (photons) not subject to the strong
interaction.
Figure 32: ℛ𝐴𝐴 vs. 𝑝𝑇 for charged hadrons and 𝜋0 at RHIC.
In the ℛ𝐴𝐴 measurements for d-Au collisions (ℛ𝑑𝐴𝑢) there is no QGP formation, and therefore
no final-state effects; initial-state effects are present instead. The results (fig. 33, left) show the
expected Cronin enhancement, so we can conclude that the suppression in fig. 32 is not due to
the initial state.
Direct photons (excluding those from 𝜋0 and 𝜂 decays) are a "medium-blind probe" and scale
with the number of collisions (hard process). For these photons, RHIC experiments (Au-Au
collisions, with QGP formation) measured (fig. 33, right) the expected trend for ℛ𝐴𝐴, so it is
possible to conclude that the quenching observed in fig. 32 is the consequence of a final-state
effect.
Figure 33: RHIC ℛ𝐴𝐴 measurements for d-Au collisions (left) and for direct photons (right).
In fig. 34 (left) one can see the ℛ𝐴𝐴 measurements for charged hadrons at LHC at
√𝑠𝑁𝑁 = 2.76 TeV compared to the RHIC measurements at √𝑠𝑁𝑁 = 200 GeV. In the common
𝑝𝑇 region the suppression observed at RHIC and LHC has a similar shape, with a peak at
𝑝𝑇 ~ 2 GeV, but the suppression measured at LHC is greater. A possible explanation is that
the medium produced at LHC is denser (note also the slopes of the spectra: a less steep, i.e.
"harder", spectrum gives a larger ℛ𝐴𝐴 at constant energy loss) and that more particles come
from gluon fragmentation.
The right side of fig. 34 shows the ℛ𝐴𝐴 measured at LHC for p-Pb collisions. Compared with
the Pb-Pb result, this measurement confirms that the suppression observed in Pb-Pb is due to
final-state effects: in p-Pb no QGP is formed and the expected trend is observed.
Figure 34: ℛ𝐴𝐴 at LHC for Pb-Pb (left) and p-Pb (right) collisions.
1.6.8 Hints on charmonium suppression
Charmonium is a bound state of a charm quark c and antiquark 𝑐̄. It can be described by the
Cornell potential:
𝑉(𝑟) = −𝛼𝑠/𝑟 + 𝑘𝑟   (13)
where 𝛼𝑠 is the QCD coupling constant (a good description is obtained with 𝛼𝑠 = 0.52) and k
is a constant (𝑘 ~ 1 GeV/fm). The first term, −𝛼𝑠/𝑟, is Coulomb-like and the second is a
confining term.
If we "soak" a 𝑐𝑐̄ state in a QGP, there are two main effects (see fig. 35):
the confining term kr vanishes;
the high color density screens the Coulomb-like part of the potential, turning the
potential of equation 13 into a Yukawa-like potential 𝑉(𝑟) = −(𝛼𝑠/𝑟) 𝑒^(−𝑟/𝜆𝐷),
where 𝜆𝐷 is the Debye screening length.
Figure 35: 𝑐𝑐 in a QGP state.
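The effect of screening on the potential (13) can be illustrated numerically; the Debye length used below is an assumed value, and ħc converts 1/r from fm⁻¹ to GeV:

```python
import math

HBARC = 0.1973           # GeV*fm, converts 1/r (fm^-1) to GeV
ALPHA_S, K = 0.52, 1.0   # coupling and string tension (GeV/fm), from eq. (13)

def cornell(r):
    """Vacuum c-cbar potential, eq. (13): Coulomb-like plus confining term."""
    return -ALPHA_S * HBARC / r + K * r

def screened(r, lambda_D):
    """In-medium potential: the confining term vanishes and the Coulomb-like
    part is Debye-screened over the length lambda_D (fm, assumed value)."""
    return -ALPHA_S * HBARC / r * math.exp(-r / lambda_D)

# In vacuum the potential keeps rising with r (confinement); in the medium
# it flattens towards zero, so the more weakly bound states melt first.
print(cornell(2.0))          # ~ +1.95 GeV
print(screened(2.0, 0.3))    # ~ 0 GeV
```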
The suppression of the different 𝑐𝑐̄ (and also 𝑏𝑏̄) bound states (𝐽/𝜓, 𝜓(2𝑆), 𝜒𝑐) is expected
to happen at different temperatures, because these states have different binding energies (the
𝐽/𝜓 being the most strongly bound). The phenomenon is sketched in fig. 36.
Experimentally, to observe this suppression it is sufficient to measure the most strongly bound
state because, for example, the 𝜓(2𝑆) and 𝜒𝑐 resonances have a non-zero branching ratio into
the 𝐽/𝜓.
Figure 36: Charmonium and bottomonium expected suppressions vs. temperature
The first measurements, at the NA38 experiment (1986) with O-U collisions, indicated a 𝐽/𝜓
suppression by a factor 2 from peripheral to central collisions. These measurements, however,
had some problems: other processes, not considered in 1986, can also lead to 𝐽/𝜓 suppression.
Moreover, the choice of a correct reference process is important: it must have a production
mechanism similar to that of the 𝐽/𝜓 and must not be sensitive to the QGP phase. The
Drell-Yan process (𝑞𝑞̄ → 𝜇+𝜇−) was chosen as reference, since the 𝜇 lepton is not sensitive to
the strong interaction.
The latest results from LHC are shown in fig. 37. The suppression observed at LHC is smaller
than that seen at RHIC. This is a signal of the so-called statistical recombination of 𝑐𝑐̄ pairs:
𝑐𝑐̄ pairs that did not bind during the QGP phase can still do so at hadronization through
recombination, which is possible because of the large multiplicity of heavy quarks after the
QGP phase.
Figure 37: 𝐽/𝜓 suppression at LHC compared to the RHIC measurements.
Chapter 2: ALICE ITS Upgrade
In the introduction we have already briefly described the goals of the ALICE experiment.
The ALICE apparatus is quite complex; from the center to the outer part of the detector we
find: the ITS (Inner Tracking System) + MFT (Muon Forward Tracker), the TPC (Time
Projection Chamber) with a radius of 5 meters, the TRD (Transition Radiation Detector), the
TOF (Time Of Flight), the HMPID (High Momentum Particle Identification), the EMCAL
(ElectroMagnetic CALorimeter) and the magnet which encloses them all.
The entire structure allows for a comprehensive study of hadrons, electrons, muons, photons, and
jets produced in heavy-ion collisions. The Pb-Pb programme is accompanied by precision
measurements in p-p and p-Pb collisions to provide a quantitative base for comparisons with results
from Pb-Pb collisions.
In this chapter we concentrate our attention on the inner part of the experiment, the ITS. This part of
the detector is the nearest to the beam pipe, in the region with the highest particle density.
During the Long Shutdown 2 (LS2) of the LHC in 2018/2019, the ALICE experiment plans the
installation of an upgraded experimental apparatus to read out all Pb-Pb interactions,
accumulating events corresponding to an integrated luminosity of 10 nb⁻¹. In summary, the
detector upgrade consists of the following sub-system upgrades:
Reduction of the beam pipe radius from 29.8 mm to 19.8 mm, allowing the inner layer of the
central barrel silicon tracker to be moved closer to the interaction point.
New high-resolution, high-granularity, low material budget silicon trackers:
o ITS, covering mid-rapidity.
o Muon Forward Tracker (MFT), covering forward rapidity.
The wire chambers of the TPC will be replaced by GEM (Gas Electron Multiplier) detectors
and new electronics will be installed in order to allow for a continuous readout.
Upgrade of the forward trigger detectors and the Zero Degree Calorimeter (ZDC).
Upgrade of the readout electronics of the TRD, TOF detector, PHOS and Muon
Spectrometer for high rate operation.
Upgrade of online and offline systems (O2 project) in order to cope with the expected data
volume.
These plans have been presented in the ALICE Upgrade Letter of Intent, which was endorsed by the
LHCC in September 2012.
The detector performance and physics studies are based on Monte Carlo simulations that include the
transport of particles in a detailed model of the new detector, implemented using the GEANT
simulation package within the AliROOT framework.
This chapter gives an overview of the current ITS performance and limitations, and the design
objectives and layout of the new ITS.
2.1 Current ITS
The current ITS consists of six layers of silicon detectors placed coaxially around the beam pipe
(see fig. 38) with their radii ranging from 3.9 cm to 43 cm. The inner radius is the minimum
allowed by the radius of the beam pipe, while the outer radius is determined by the necessity to
match ITS tracks with those from the TPC. They cover a pseudo-rapidity range of |𝜂| < 0.9 for
vertices located within 𝑧 = ±60 𝑚𝑚 with respect to the nominal interaction point. The first layer
has a more extended pseudo-rapidity coverage (|𝜂| < 1.98) which, together with the Forward
Multiplicity Detectors (FMD), provides continuous coverage for the measurement of the charged
particle multiplicity.
To sustain a high particle hit density (the current system is designed for up to 100 particles per cm2
for Pb-Pb collisions at √𝑠𝑁𝑁 = 5.5 𝑇𝑒𝑉) and to perform an efficient vertex reconstruction, the first
two layers were made of Silicon Pixel Detectors (SPD) with state-of-the-art hybrid pixel detectors.
The two middle layers are made of Silicon Drift Detectors (SDD) followed by two layers of double
sided Silicon Strip Detectors (SSD).
The last four layers have analog readout with PID (Particle IDentification) capabilities through
𝑑𝐸/𝑑𝑥 measurement in the non-relativistic (1/𝛽²) region. All detector elements were carefully
optimized to minimize their radiation length, achieving 1.1% 𝑋0 per layer (the radiation length
of silicon is 𝑋0 = 9.36 cm), the lowest value among all the current LHC experiments. In fact,
the average thickness of each silicon detector is < 350 μm.
In table 2 there are some specifications of the present ITS.
Figure 38: The current ITS layers configuration
Parameter (unit)                  Silicon Pixel   Silicon Drift   Silicon Strip
Spatial precision rϕ (μm)         12              38              20
Spatial precision z (μm)          70              28              830
Two-track resolution rϕ-z (μm)    100-600         200-600         300-2400
Cell size (μm²)                   50x300          150x300         95x40000
Active area per module (mm²)      13.8x82         72.5x75.3       73x40
Total number of modules           240             260             1770
Readout channels per module       65536           2x256           2x768
Radius (cm)                       3.9 and 7.6     15.0 and 23.9   38.0 and 43.0
Table 2: Main specifications of the present ALICE ITS
Here follows a brief description of the three different types of detector located in the ALICE ITS.
2.1.1 Silicon Pixel Detector
The SPD is based on a two dimensional matrix of reverse biased silicon diodes bump-bonded to
readout chips.
The sensor matrix includes 256x160 cells measuring 50 μm (𝑟𝜙) by 425 μm (z) with a thickness of
200 μm; each readout chip is connected to 256x32 detector cells and has a thickness of 150 μm.
Each pixel cell contains its own amplifier with leakage current compensation followed by a
discriminator.
A cooling system based on the evaporation of 𝐶4𝐹10 (decafluorobutane) is mounted in contact with
the detector in order to allow it to operate at room temperature.
Each of the 1200 front-end chips generates a Fast-OR signal when at least one of its pixels is
hit by a particle; this Fast-OR signal contributes to the Minimum Bias (MB) trigger.
Figure 39: Pixel-chip (left) and the bump-bond technique (right).
2.1.2 Silicon Drift Detector
The Silicon Drift Detectors (SDD) equip the two intermediate layers of the ITS, where the
charged-particle density is expected to reach up to 7 cm⁻².
They have a sensitive area of 70.17 (𝑟𝜙) × 75.26 (z) mm² and a total area of
72.50 × 87.59 mm². The sensitive area is split into two drift regions by the central cathode
strip, to which a maximum voltage (HV) bias of −2.4 kV can be applied; currently a bias of
−1.8 kV is used.
In each drift region and on both detector surfaces, 291 p+ cathode strips with 120 μm pitch fully
deplete the detector volume and generate a drift field parallel to the wafer surface.
The operating principle is based on the measurement of the time necessary for the electrons
produced by an ionizing crossing particle to drift from the generation point to the collecting anodes.
Generally 2-3 anodes (a cluster) are hit because, due to diffusion and mutual Coulomb repulsion, the
electron cloud expands along the drift path.
The impact position of the crossing particle is determined in two dimensions:
o one coordinate is estimated using the electron drift time;
o the other is provided by the centroid position of the charge distribution collected by
the anodes.
Moreover, the total charge collected from the anodes is proportional to the energy deposited in the
detector by the crossing particle and this can be exploited for particle identification via 𝑑𝐸/𝑑𝑥 in
the non-relativistic region.
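The two-coordinate reconstruction described above can be sketched in code. The drift velocity and anode pitch below are illustrative values, not ALICE calibration constants:

```python
# Sketch of SDD hit reconstruction: one coordinate from the drift time,
# the other from the charge centroid on the collecting anodes.
# All numbers are illustrative, not ALICE calibration values.

DRIFT_VELOCITY_UM_PER_NS = 6.5   # assumed drift velocity (~6.5 um/ns)
ANODE_PITCH_UM = 294.0           # assumed anode pitch

def sdd_hit_position(drift_time_ns, anode_charges):
    """Return (drift coordinate, anode coordinate, total charge).

    anode_charges: {anode_index: collected charge} for the 2-3 anodes
    of the cluster produced by the diffusing electron cloud.
    """
    # Coordinate 1: distance travelled by the electrons during the drift.
    x_drift = drift_time_ns * DRIFT_VELOCITY_UM_PER_NS
    # Coordinate 2: centre of gravity of the charge collected by the anodes.
    total = sum(anode_charges.values())
    centroid_index = sum(i * q for i, q in anode_charges.items()) / total
    z_anode = centroid_index * ANODE_PITCH_UM
    # The total charge is proportional to the deposited energy (dE/dx).
    return x_drift, z_anode, total

x, z, charge = sdd_hit_position(2000.0, {120: 300.0, 121: 800.0, 122: 200.0})
```

The total charge returned alongside the position is what the dE/dx particle identification mentioned above is based on.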
Figure 40: SDD (left) and the two coordinates (right) for the hit position determination.
2.1.3 Silicon Strip Detector
The two outer layers are fundamental for the matching of tracks from the TPC to the ITS; they
consist of double-sided Silicon Strip Detectors (SSD) mounted on carbon-fiber support structures.
The basic unit is the module, namely the sensor assembled with its readout front-end electronics,
which consists of two hybrids. Each module consists of a 1536-strip double-sided silicon sensor
connected through aluminum-kapton microcables to the front-end electronics.
A strip detector is an arrangement of strip-shaped implants acting as charge-collecting
electrodes. Placed on a low-doped, fully depleted silicon wafer, these implants form a one-
dimensional array of diodes. By connecting each of the metalized strips to a charge-sensitive
amplifier, a position-sensitive detector is built. The strip pitch is 95 μm.
Two-dimensional position measurements can be achieved by adding strips oriented at a certain
(stereo) angle on the wafer backside, resulting in a double-sided technology.
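A minimal sketch of how the two strip coordinates combine into a 2D point, assuming front-side strips at constant x and a back side tilted by a small stereo angle; the 35 mrad value is an assumption for illustration, not the actual SSD design angle:

```python
import math

# Sketch: 2D point from a double-sided strip detector.
# The front side has strips perpendicular to x; the back-side strips are
# tilted by a small stereo angle. The angle here is an assumed value.
PITCH_UM = 95.0           # strip pitch quoted in the text
STEREO_ANGLE_RAD = 0.035  # assumed small stereo angle (35 mrad)

def strip_intersection(front_index, back_index):
    """Intersect a front-side strip (x = const) with a tilted back-side strip.

    Front strip: x = front_index * pitch
    Back strip (normal form): x*cos(a) + y*sin(a) = back_index * pitch
    """
    x = front_index * PITCH_UM
    a = STEREO_ANGLE_RAD
    y = (back_index * PITCH_UM - x * math.cos(a)) / math.sin(a)
    return x, y

x, y = strip_intersection(100, 101)
```

The small stereo angle gives good precision in x at the price of a coarser y coordinate, which is acceptable for the outer layers.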
Figure 41: SSD module during construction, after assembly of the sensor with the electronics
2.2 Physics motivation for the ITS upgrade
The long-term physics goal of ALICE is to study and provide a characterization of the Quark Gluon
Plasma (QGP) state of matter. To achieve this goal, high statistics measurements are required, as
these will give access to the very rare physics channels needed to understand the dynamics of this
condensed phase of QCD.
The ALICE experimental strategy and the upgrade plans are discussed in the ALICE Upgrade
Letter of Intent and in the ITS Upgrade Technical Design Report. The two main open questions
concerning heavy-flavor interactions with the QGP medium are:
Thermalization and hadronization of heavy quarks in the medium, which can be studied by
measuring the heavy-flavor baryon/meson ratio, the strange/non-strange ratio for charm, the
azimuthal anisotropy 𝑣2 for charm and beauty mesons and the possible in-medium thermal
production of charm quarks.
In this respect, the new ITS will have a significant impact on the following measurements,
allowing for the first time to access, with sufficiently high statistics, some specific physics
channels, such as:
o D mesons, including 𝐷𝑠;
o Charm and beauty baryons, Λ𝑐 and Λ𝑏. The former will be measured, for the first
time, through the decay Λ𝑐 → 𝑝𝐾−𝜋+, the latter will be measured through the decay
Λ𝑏 → Λ𝑐 + 𝑋;
o Baryon/meson ratios for charm (Λ𝑐/𝐷) and beauty (Λ𝑏/𝐵);
o Study of the elliptic flow of charmed and beauty mesons and baryons down to low
𝑝𝑇.
Heavy-quark in-medium energy loss and its mass dependence, which can be addressed by
measuring the nuclear modification factor ℛ𝐴𝐴 of the 𝑝𝑇 distributions of D and B mesons
separately in a wide momentum range, as well as heavy flavor production associated with
jets.
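For reference, the nuclear modification factor used above is conventionally defined as

```latex
\mathcal{R}_{AA}(p_T) \,=\, \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}}\rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T}
```

where ⟨Ncoll⟩ is the average number of binary nucleon-nucleon collisions in the Pb-Pb event sample; RAA < 1 at high pT signals in-medium energy loss.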
The new ITS will significantly improve or make accessible for the first time the following
measurements in Pb-Pb collisions:
Measurement of beauty production via the decay channels 𝐵 → 𝐷 + 𝑋, 𝐷0 → 𝐾𝜋;
Measurement of beauty production via displaced 𝐽/𝜓 → 𝑒𝑒;
Improved measurement of single displaced electrons;
Improved beauty decay vertex reconstruction, using any of the previous three channels plus
an additional track.
Moreover, the new ITS will also allow the measurement of low-mass di-electrons. This
measurement gives access to
Thermal radiation from the QGP, via real and virtual photons detected as di-electrons.
In-medium modifications of hadronic spectral functions related to chiral symmetry
restoration, in particular for the 𝜌 meson in its 𝑒+𝑒− decay mode.
The production measurement of hyper-nuclear states, like the hypertriton decay ³ΛH → ³He + π⁻, will
also largely benefit from the improved tracking resolution and the high envisaged integrated luminosity.
In summary, the design goals that are instrumental for the physics program are:
1. Highly efficient tracking, both in association with the TPC and in standalone mode, over
an extended momentum range, with special emphasis on very low momenta.
2. Very precise reconstruction of secondary vertices from decaying charm and beauty hadrons.
2.3 Current ITS limitations
The precision of the present ITS in the determination of the track distance of closest approach is
adequate to study the production of charm mesons in exclusive decay channels (e.g. 𝐷0 → 𝐾𝜋 and
𝐷+ → 𝐾𝜋𝜋) at values of transverse momentum above 1 GeV/c. At lower transverse momenta,
however, the statistical significance of the measurement is insufficient for currently achievable
datasets. In fact, fig. 42 shows the limits of the spatial resolution in the case of the 𝐷0 →
𝐾−𝜋+ decay. At a transverse momentum of 500 MeV/c the resolution is about 110-120 μm; thus,
considering that the 𝐷0 meson has cτ = 123 μm (see table 3), the statistical significance¹ is
insufficient.
Figure 42: Example of the spatial resolution for the decay 𝐷0 → 𝐾−𝜋+
Particle | Decay channel (branching ratio) | cτ (μm)
D0 | D0 → K−π+ (3.8%) | 123
D+ | D+ → K−π+π+ (9.5%) | 312
Ds+ | Ds+ → K+K−π+ (5.2%) | 150
Λc+ | Λc+ → pK−π+ (5.0%) | 60
Table 3: Some decays with their cτ
The challenge is even greater for charm baryons. The most abundantly produced charm baryon
is the Λ𝑐, which has a proper decay length (cτ) of only 60 μm. This is lower than the impact
parameter resolution of the present ITS in the transverse momentum range of the majority of Λ𝑐
daughter particles (2-6 GeV/c). Therefore, charm baryons are presently not measurable by
ALICE in central Pb-Pb collisions. For the same reasons, the study of beauty mesons, beauty
baryons, and hadrons with more than one heavy quark is also beyond the capability of the
current detector.
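The significance criterion, defined in the footnote as s/√(s+b), can be illustrated numerically (toy numbers, purely illustrative, not ALICE data):

```python
import math

def significance(s, b):
    """Statistical significance s / sqrt(s + b)."""
    return s / math.sqrt(s + b)

# Toy numbers, purely illustrative: better pointing resolution lets a tighter
# decay-vertex cut keep most of the signal while rejecting more background.
before = significance(100.0, 10000.0)   # loose selection: ~1.0 sigma
after = significance(80.0, 400.0)       # tighter selection: ~3.7 sigma
```

Even losing some signal, the large background reduction enabled by a better impact-parameter resolution raises the significance substantially.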
¹ The statistical significance is defined as s/√(s+b), where s is the signal and b the background.
With improved spatial resolution, s increases (more true 𝐷0 → 𝐾−𝜋+ decays are detected) while
the background b is reduced.

Another crucial limitation of the current ITS is its limited readout capabilities. The ITS
can run at a maximum rate of 1 kHz (with dead time close to 100%), irrespective of the detector
occupancy. This limit is set by the SDD, but the TPC has a similar limit on the readout rate.
For all physics channels that cannot be selected by a trigger, this rate limitation restricts ALICE to
use only a small fraction of the full Pb-Pb collision rate of 8 kHz that the LHC presently can deliver
and prevents the collection of required reference data in p-p collisions. Clearly, the present ITS is
inadequate to fulfill the required rate capabilities envisaged for the ALICE long-term plans
discussed in the previous section.
Finally, the impossibility to access the present ITS detector for maintenance and repair interventions
during the yearly LHC shutdowns represents a major limitation in sustaining high data quality.
Rapid accessibility to the detector is a key priority in the design of the upgraded ITS.
2.4 ITS upgrade overview
Regarding the LHC accelerator, the collision energy will increase, reaching 5.5 TeV and 14 TeV for
Pb-Pb and p-p interactions respectively. The luminosity will also grow, up to 6 ∙ 10²⁷ cm⁻²s⁻¹ for
Pb-Pb collisions. The new ITS (and the entire ALICE detector) must therefore be prepared for the
greater amount of data available after the LHC upgrade: readout rates will reach up to 400 kHz
for p-p and 50 kHz for Pb-Pb collisions. An ITS upgrade program is foreseen in order to overcome
the limitations discussed in the previous paragraph. Here follow the
key features of the ITS upgrade:
The impact parameter resolution has to be improved by a factor 3-5 in z and 𝑟𝜑 directions.
This can be reached with the following changes:
o Reduction of the material budget from 1.1 % 𝑋0(𝑆𝑖) to 0.3% 𝑋0(𝑆𝑖) and 1% 𝑋0(𝑆𝑖)
for the inner and outer layers respectively.
o Reduction of the pixel size from 50 × 425 𝜇𝑚2 to 28 × 28 𝜇𝑚2.
o Possibility to get closer to the vertex, because the beam pipe radius will be reduced
from 29.8 mm to 19.8 mm. Hence, the inner layer will have a radius of 22 mm
(currently 39 mm).
Improvement of the tracking efficiency and 𝑝𝑇 resolution at low 𝑝𝑇. In order to do this,
these changes have been programmed:
o Introduction of a seventh layer of silicon detectors.
o Equip all the seven layers with silicon pixel detectors.
Fast yearly maintenance
The new detector will not measure the energy loss in the silicon layers because the chosen detectors
will have binary readout without information on the charge signal amplitude. It is assumed that all
measurements that are being performed with the current detector using the ITS PID (identified
charged hadron spectra, flow and correlations) will have been completed by the end of the LHC
second run, before the ALICE upgrade scheduled for LS2.
The solution for the layout of the ITS upgrade is to equip all the new seven layers with Monolithic
Active Pixel Sensors (MAPS) implemented using the 0.18 μm technology of TowerJazz. The basic
MAPS element is the Pixel Chip. It consists of a single silicon die of about 15 𝑚𝑚 × 30 𝑚𝑚,
which incorporates a high-resistivity silicon epitaxial layer (sensor active volume), a matrix of
charge collection diodes (pixels) with a pitch of the order of 30 μm, and front-end electronics to
perform signal amplification, digitization and zero-suppression. Only the information on whether or
not a particle crossed a pixel is read out.
Figure 43: Simulated pointing resolution (left) and tracking efficiency (right) of the upgraded ITS
In fig. 43 we can see that the pointing resolution will improve by a factor 3 and 5 for 𝑝𝑇 ≅
500 𝑀𝑒𝑉/𝑐 in 𝑟𝜑 and z direction respectively. Furthermore, the tracking efficiency at low
momenta will significantly improve with the new ITS.
2.4.1 Detector layout overview
The new ITS, as we have said in the previous paragraph, will have seven layers of MAPS. These
layers will be organized in (see fig. 44)
Inner layers: the inner three layers (from 0 to 2).
Middle layers: the middle two layers (3, 4).
Outer layers: the outer two layers (5,6).
The inner layers form the so-called Inner Barrel (IB). The outer and middle layers, instead, form the
Outer Barrel (OB).
The new ITS coverage in pseudo-rapidity will be |𝜂| < 1.22; the radial coverage will be 22 mm
(inner layer) to 400 mm (outer layer). The total active area will be approximately 10 m².
The ITS layers are azimuthally segmented in units named Staves, which are mechanically
independent. Staves are fixed to a support structure, half-wheel shaped, to form the Half-Layers.
[Figure 43 plots: pointing resolution (μm) and tracking efficiency (%) versus pT (GeV/c), comparing the current and upgraded ITS in the rφ and z directions (Pb-Pb data, 2011); the upgraded ITS curves assume X/X0 = 0.3% (IB) and 0.8% (OB).]
Figure 44: Layers organization of the new ITS
The term “Stave” will be used to refer to the complete detector element. It consists of the following
main components (see fig. 45):
Space Frame: truss-like lightweight mechanical support structure for the single stave based
on composite material (carbon fiber).
Cold Plate: carbon ply that embeds the cooling pipes.
Hybrid Integrated Circuit: assembly consisting of the polyimide flexible printed circuit
(FPC) on which the Pixel Chip and some passive components are bonded. The staves of the
Outer Barrel are further segmented longitudinally into Modules. Each module consists of a
Hybrid Integrated Circuit.
Half-Stave: the stave of the Outer Barrel is further segmented in azimuth in two halves,
named Half-Staves. Each Half-Stave consists of a number of modules glued on a common
cooling unit.
Figure 45: Total number of staves (left) and the stave organization (right) for Inner (IB) and Outer (OB)
barrels
2.4.2 Experimental conditions and running environment
Table 4 summarizes the expected maximum hit densities for primary and secondary charged
particles. An additional contribution to the overall particle load comes from 𝑒+𝑒− pairs generated in
the electromagnetic interaction of the crossing ion bunches. These will be referred to as QED
electrons.
PARTICLE FLUXES AND RADIATION DOSES
Layer | Radius (mm) | Prim. & sec. particles (a) (cm⁻²) | QED electrons (b) (cm⁻²) | NIEL (c) (1 MeV neq/cm²) | TID (c) (krad)
0 | 23 | 30.4 | 6.02 | 9.2 × 10¹² | 700
1 | 32 | 20.4 | 3.49 | 6.0 × 10¹² | 380
2 | 196 | — see below | | |
2 | 39 | 14.9 | 2.35 | 3.8 × 10¹² | 216
3 | 196 | 1.0 | 2.1 × 10⁻² | 5.4 × 10¹¹ | 15
4 | 245 | 0.7 | 9.0 × 10⁻³ | 5.0 × 10¹¹ | 10
5 | 344 | 0.3 | 1.3 × 10⁻³ | 4.8 × 10¹¹ | 8
6 | 393 | 0.3 | 4.0 × 10⁻⁴ | 4.6 × 10¹¹ | 6
The expected radiation doses and hadron fluences for the upgraded ITS detector are computed for
the following integrated luminosities, which correspond to the target statistics needed for the
proposed physics studies:
8 ∙ 1010 Pb-Pb inelastic collisions;
1 ∙ 1011 p-Pb inelastic collisions;
4 ∙ 1011 p-p inelastic collisions.
A conservative safety factor of ten is further applied to take into account uncertainties on the beam
background, possible beam losses, inefficiency in data taking and data quality requirements. The
expected radiation levels are summarized in table 4.
The upgraded ITS will be operated at room temperature (20°C to 30°C) using water cooling.
Table 4: Expected maximum hit densities and radiation levels. TID is the Total Ionizing Dose
and NIEL the Non-Ionizing Energy Loss. (a) Maximum hit densities in central Pb-Pb collisions
(including secondaries produced in material). (b) For an integration time of 10 μs, an interaction
rate of 50 kHz, a magnetic field of 0.2 T and pT > 0.3 MeV/c; a magnetic field of 0.2 T, which is
planned for a run dedicated to the measurement of low-mass di-electrons, corresponds to the
worst-case scenario in terms of detector occupancy. (c) Including a safety factor of 10.

2.5 Pixel Chip
In this section we discuss the pixel chip technology for the new ITS. We start with the
choice of the pixel chip technology; then follows a description of this technology and
its functioning. Finally, we describe the two main pixel chip architectures
proposed and tested for the new ITS: MISTRAL and ALPIDE.
2.5.1 Choice of Pixel Chip technology
The upgraded ITS will have very thin sensors, very high granularity and will cover a fairly large
area. In the past decade there has been a lot of progress on MAPS, which can now be considered for
the construction of tracking systems in high-energy physics experiments. The ULTIMATE chip of
the STAR PXL detector at RHIC is the first successfully running, large-scale application. However,
further R&D is required to meet the much more stringent requirements of the ITS upgrade
compared to the STAR experiment in terms of integration time, power consumption and radiation
hardness.
2.5.2 Pixel Chip development
The sensors of upgraded ITS will be manufactured using the 0.18 μm CMOS Imaging Sensor
process by TowerJazz. This process provides up to six metal layers allowing for a high-density,
low-power circuitry. Furthermore, the gate oxide thickness of about 3 nm provides sufficient TID
radiation tolerance; this has already been confirmed in measurements on basic transistor structures.
Figure 46: Schematic cross section of a pixel of a monolithic silicon pixel sensor (CMOS 0.18 μm).
The key feature of the process, however, is the special deep p-well. As shown in fig. 46, the n-wells
of PMOS transistors are housed in additional p-wells (deep p-well), preventing the transistor n-
wells from competing with the n-well of the collection electrode for charge collection. The deep p-
well is a sort of a screen for the electrons. Hence, full CMOS logic can be used within the matrix
and as a consequence, more complex in-pixel circuitry is possible. The presence of the deep p-well
is a unique feature of this process and can be the key to enable low-power readout architectures. In
standard implementations, an n-well is normally used as the substrate of PMOS transistors; as a
consequence, only NMOS transistors can be used in the pixel area.
An epitaxial layer with high resistivity serves as the active volume. This epitaxial layer can be
produced with a thickness of up to 40 μm and a resistivity between 1 kΩ·cm and 6 kΩ·cm. With
such resistivity, a sizeable part of the epitaxial layer can be depleted. In order to increase the
depletion volume and to optimize the charge-collection efficiency, a moderate reverse substrate bias
can be applied. This is essential to increase the output signal of the collection n-well, which is
proportional to Q/C. It may also improve the resistance to non-ionizing irradiation effects. In
order to achieve a high signal, the charge collected by the central pixel needs to be increased.
Furthermore, the capacitance of the pixel needs to be minimized by shrinking the diode surface and
increasing the depletion volume which is supported by additional reverse substrate bias. Achieving
a good Q/C-ratio leads to an improved Signal-to-Noise Ratio (SNR) and as a consequence also to a
less power consuming design of circuitry.
2.5.3 Particle detection
As indicated in fig. 46, when a charged particle traverses the silicon sensor active volume, it
liberates charge carriers (electrons and holes) in the semiconductor material. The released charge is
then collected by electrodes that reveal not only the presence of a particle but also – due to a fine
segmentation – its impinging point onto the sensor. The nature and quantitative behavior of the
charge collection mechanism are functions of the material properties (resistivity or doping
level/profile) and geometry (thickness of sensitive material, pixel pitch, electrode shape) as well as
the electric field configuration (electrode potential and geometry) of the sensor. The amount of
deposited charge depends on the particle species and its momentum (Bethe-Bloch). Minimum
ionizing particles (MIPs, e.g. 0.5 GeV/c pions), which define the requirement on the minimal
detectable charge, typically release some 60 electrons per 1 μm of path length in thin silicon layers.
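The numbers above allow a quick back-of-envelope signal estimate; the collection-node capacitance used here is an assumed illustrative value, not a measured one:

```python
# Back-of-envelope signal estimate for the MAPS pixel described above.
# From the text: ~60 e-/um released by a MIP, 18 um nominal epitaxial layer.
# The sensing-node capacitance is an assumed illustrative value.
E_CHARGE = 1.602e-19          # electron charge, C
ELECTRONS_PER_UM = 60.0
EPI_THICKNESS_UM = 18.0
NODE_CAPACITANCE_F = 2e-15    # assumed: a few fF collection-node capacitance

signal_electrons = ELECTRONS_PER_UM * EPI_THICKNESS_UM    # ~1080 e-
signal_volts = signal_electrons * E_CHARGE / NODE_CAPACITANCE_F
# V = Q/C: ~1080 e- on 2 fF gives ~90 mV before amplification, which is why
# minimizing C (small diode, larger depletion) is emphasized in the text.
```

This makes concrete why a good Q/C ratio translates directly into SNR and allows a less power-hungry front-end.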
Fig. 47 shows the depleted volume inside the pixel cell for different configurations.
Figure 47: Semiconductor device simulations for different settings of total diode
reverse bias and epitaxial layer doping. The diode is made of a 3 μm × 3 μm
square n-well, with a 0.5 μm spacing to the surrounding p-well. Shown is
one eighth of the total pixel. The color code shows logarithmically the absolute
value of the electric field, and the white line indicates the boundaries of the
depletion region.
Another important aspect of the detection circuitry is the noise, originating mainly from the input
capacitance (kTC noise) and the small input transistor (in particular random telegraph signal noise,
RTS noise). kTC noise is created by resetting the collection electrode, i.e. by recharging the diode
capacitance. One way to mitigate this noise contribution is to measure the voltage signal on the
diode twice and subtract the value of the first measurement from the second one (correlated double
sampling, CDS). The RTS noise is known to depend on the transistor geometry and type (NMOS
or PMOS). This kind of noise typically diminishes when the transistor size is increased, which however
also increases the capacitance; a trade-off between gain and noise needs to be made. Additional so-
called shot noise is caused by the leakage current of the collection node. Its magnitude is proportional
to the square root of the number of leaked electrons and hence depends not only on the electrode
geometry but also on the integration time.
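A rough sketch of the shot-noise estimate just described; the leakage current is an assumed illustrative value:

```python
import math

# Sketch of the leakage (shot) noise estimate described above.
# The leakage current is an assumed illustrative value, not a measured one.
E_CHARGE = 1.602e-19        # electron charge, C
LEAKAGE_CURRENT_A = 10e-15  # assumed: 10 fA leakage at the collection node
INTEGRATION_TIME_S = 30e-6  # maximum integration time from the requirements

leaked_electrons = LEAKAGE_CURRENT_A * INTEGRATION_TIME_S / E_CHARGE
shot_noise_electrons = math.sqrt(leaked_electrons)
# ~1.9 leaked electrons -> ~1.4 e- shot noise: negligible against a ~1080 e-
# MIP signal, but it grows with the square root of the integration time.
```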
2.5.4 General requirements on the Pixel Chip
The physics objectives and the design goals outlined in paragraph 2.2 have led to the following
requirements for the Pixel Chip:
Silicon thickness: to minimize its contribution to the overall material budget of the ITS, it is
desirable to make the chip as thin as possible. The minimal thickness is determined by the
epitaxial layer height (nominal value is 18 μm) plus the height of the CMOS stack (around
10 μm). The fabrication of such a chip is done by thinning a standard-height wafer from the
back. To remain within a safety margin, a thickness of 50 μm is required.
Parameter | Inner Barrel | Outer Barrel
Chip dimensions | 15 mm × 30 mm (rφ × z)
Sensor thickness | 50 μm
Spatial resolution | 5 μm | 10 μm
Detector efficiency | >99%
Fake hit rate | <10⁻⁵ event⁻¹ pixel⁻¹
Integration time | <30 μs
Power density | <300 mW/cm² | <100 mW/cm²
Temperature | 20°C to 30°C
TID radiation hardness | 700 krad | 10 krad
NIEL radiation hardness | 10¹³ 1 MeV neq/cm² | 3 × 10¹⁰ 1 MeV neq/cm²
Table 5: General requirements on the pixel chip
Intrinsic spatial resolution: the performance of the ITS upgrade and in particular its
capability to separate secondary vertices of heavy flavor decays is determined by the impact
parameter resolution (see paragraph 2.3 with the 𝐷0 meson example). This is a convolution
of the primary vertex resolution and the track pointing resolution and it is mainly determined
by the performance of the Inner Barrel. An intrinsic spatial resolution of 5 μm (10 μm) for
the Inner (Outer) Barrel is required.
Chip dimensions: The TowerJazz 0.18 μm CMOS technology allows for a maximum chip
length of 30 mm in z-direction. The limitation of the chip width to 15 mm was motivated by
geometrical considerations. For such width the deviation of the distance of each pixel from
the nominal radius of each layer, the number of azimuthal segments and the deviation from
an azimuthally vertical incidence angle are kept reasonably small. A chip size of 15 𝑚𝑚 ×
30 𝑚𝑚 has consequently been chosen as baseline chip dimension.
Maximum dead area: to assure a hermetic detector configuration, overlaps of the chips are
foreseen in rϕ to allow for placing digital circuitry at their boundaries. This leads to
localized increases of the material budget and thus needs to be minimized. In z-direction
there is no such overlap foreseen and the dead area has a more stringent requirement. The
performance simulations have been performed assuming a dead area of 2 mm in r𝜑- and 25
μm in z-direction.
Power density: the maximum tolerable material budget puts severe limitations on the
amount of material that can be used for power distribution and detector cooling. The power
density on the sensor has thus to be brought to a minimum and should not exceed
300 mW/cm² for the Inner Layers and 100 mW/cm² for the Outer Layers, in order to be
compatible with the material budget requirements of 0.3% X0 and 1% X0, respectively.
Integration time: the time needed to read out all the pixels of the matrix within a pixel
chip. It depends on the pixel architecture, as discussed later.
In order to cope with interaction rates of up to 50 kHz for Pb-Pb and up to 400 kHz for p-p
collisions, the maximum acceptable sensor integration time is about 30 μs; this value limits
pile-up effects and the consequent loss of tracking efficiency.
Dead time at 50 kHz interaction rate: a dead time of 10 % at 50 kHz Pb-Pb interaction
rate can be tolerated. On-chip memories and bandwidths must be dimensioned such that they
can cope with the expected occupancy level.
Detection efficiency and fake hit rate: a detection efficiency of at least 99% and a fake hit
rate of not more than 10⁻⁵ per pixel and per event are necessary to achieve the required track
reconstruction performance.
Radiation hardness: in order to ensure full functionality, especially for the ITS Inner
Layers, the pixel detectors will have to be tolerant against the radiation levels expected for
the innermost layer: 700 krad of TID and a fluence of 10¹³ 1 MeV neq/cm² of NIEL,
including a safety factor of ten for a collected data set corresponding to 10 nb⁻¹ Pb-Pb,
6 pb⁻¹ p-p and 50 nb⁻¹ p-Pb collisions.
The main requirements are summarized in table 5.
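The integration-time requirement above can be checked with a simple pile-up estimate, under the simplifying assumption that interactions are Poisson-distributed in time:

```python
import math

# Pile-up estimate behind the 30 us integration-time requirement.
# Assumes collisions are Poisson-distributed in time (a simplification).
RATE_HZ = 50e3              # Pb-Pb interaction rate from the requirements
INTEGRATION_TIME_S = 30e-6  # maximum acceptable integration time

mu = RATE_HZ * INTEGRATION_TIME_S  # mean collisions per integration window: 1.5
p_no_pileup = math.exp(-mu)        # probability that no extra collision overlaps
```

With μ = 1.5, only about 22% of integration windows are free of additional collisions, so the 30 μs figure is already a compromise: a longer integration time would make pile-up dominate.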
2.6 Pixel architectures
In the previous paragraph the pixel chip requirements have been described. The parameters in table
5 (from the first to the seventh) depend on the pixel architecture. The two main architectures
developed for the new ITS are called MISTRAL and ALPIDE; the next paragraphs give
a general description of these two families and conclude with a comparison of their
characteristics.
The pixel chips belonging to the two families are always MAPS and, in the case of the ITS upgrade,
they represent an innovative technology for three main reasons:
1. They are produced with an industrial technology, so the cost is low.
2. They have integrated electronics (monolithic design), hence they are simple to mount, saving
the costs and risks associated with the connections between the detector and the front-end
electronics.
3. They have a fast readout (maximum bandwidth of 1.2 Gb/s).
2.6.1 MISTRAL: a modular design
The performance of CMOS Pixel Sensors (CPS) developed at IPHC (Strasbourg) in the 0.18 μm
feature size process was first evaluated with a series of prototypes called MIMOSA32 (Minimum
Ionizing MOS Active pixel sensor). The latter were extensively tested in the laboratory and at the
CERN-SPS, before and after irradiation with a combined load of 1 Mrad and 10¹³
neq/cm². A detection efficiency of ~100% was measured at 30°C with nearly Minimum Ionizing
Particles (MIPs), which did not significantly deteriorate after irradiation.
In parallel to the optimization of the pixel performance a readout architecture called MISTRAL
(MImosa Sensor Tracker Alice) is being developed for the ITS upgrade. It derives from the
architecture of the MIMOSA28 sensor designed for the STAR-PXL detector.
We divide the sensor into upstream and downstream components. The upstream circuit
encompasses charge collection, in-pixel analogue processing and analogue-to-digital
conversion (1-bit ADC). The downstream circuit processes the digital signals, including data
compression, data transfer and chip steering. MISTRAL has the ADCs at column level.
Figure 48: Block diagram of MISTRAL
Thanks to the small feature size of the 0.18 μm process, it is possible to implement this
functionality in small pixels that provide a spatial resolution of ~5 μm.
Fig. 48 shows the block diagram of the MISTRAL sensor proposed for the three inner layers of the
ALICE-ITS. The sensor is composed of three modular and independent parts called Full Scale
Building Blocks (FSBB_M where M stands for MISTRAL). Each block consists of the union of the
upstream and downstream sections introduced above. At the periphery of the chip sits the
part dedicated to the Zero Suppression logic (SUZE), which compresses the data coming
from all the pixels; the Low-Voltage Differential Signaling (LVDS) block then serializes the data
and sends them off-chip.
2.6.2 MISTRAL: readout mode
In fig. 49 a recent version of a pixel chip of the MISTRAL family is shown, called
MISTRAL-O (O for Outer). This chip has a pixel pitch of 36 × 64 μm² and a power
consumption of 97 mW/cm² (foreseen to be reduced to 73 mW/cm²). The periphery of the chip,
which is a dead area as far as particle detection is concerned, has dimensions of 1.7 mm × 30 mm.
Figure 49: Mistral-O pixel matrix (left) and its structure (right)
This architecture uses a rolling-shutter readout mode, with two rows processed simultaneously to
double the sensor readout speed. In practice, all the pixels of the matrix are read out (after
amplification of their signals) and the signals arrive at the chip periphery, where the large
amount of data is discriminated and then compressed by the zero-suppression algorithm (clearly
all the zeros, i.e. pixels that were not hit, are not interesting). There is one discriminator per pixel
column in the chip periphery. The time needed to read out the whole matrix is approximately 20 μs.
The data are then serialized and sent out of the chip.
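The ~20 μs frame time follows from the rolling-shutter pattern; the row count and per-row time below are illustrative choices that reproduce it, not actual MISTRAL-O design values:

```python
# Rolling-shutter frame time: rows are read one after another, here two at a
# time. The row count and per-row time are illustrative values chosen to
# reproduce the ~20 us quoted in the text, not MISTRAL-O design numbers.
N_ROWS = 200
ROWS_IN_PARALLEL = 2
ROW_READ_TIME_S = 200e-9

frame_time_s = (N_ROWS / ROWS_IN_PARALLEL) * ROW_READ_TIME_S  # 20 us
```

The frame time is also the integration time of a rolling-shutter sensor: every pixel accumulates hits for one full frame, which is why this architecture cannot profit from low occupancy the way a hit-driven readout can.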
2.6.3 ALPIDE: general approach
The ALPIDE (ALice PIxel DEtector) development is motivated by the benefits of reducing
integration time and power consumption well beyond the ALICE specifications. As an alternative to
the traditional rolling shutter architecture, the ALPIDE development investigates the use of a low-
power in-pixel binary front-end in combination with a hit-driven readout. The binary front-end is
non-linear with a shaping time around 1-2 μs.
It is possible to send a trigger signal to the chip within that time after the event took place to allow
hit pixels to write a 1 into their memory. After this, hits can be read out sequentially by means of a
priority encoder circuit, which provides the address of the first hit pixel in a sector, and sequentially
resets it, so that during the next clock cycle the address of the next hit pixel is made available. This
continues until all hits in the sector have been read out. Digital power is minimized as power is
consumed only if hits are present. The encoder is fully combinatorial and is structured as a tree
where an element at one level implements the priority for four cells of the level below. A specific
level therefore corresponds to two bits in the address, with the lowest level implementing the
priority for four pixels and corresponding to the two least significant bits.
Fig. 50 shows schematically how a full-scale ALPIDE chip would be built: pixels are arranged in
double columns, with a priority encoder per double column. This allows space to be used more
efficiently, as the priority encoder is routing-dominated and several routing lines can be shared
between the two columns.
Figure 50: The structure of the ALPIDE chip (left) and a pixel matrix enlargement (right).
The pixel front-end includes a storage element for hit information. At the periphery the chip is
equipped with cluster information compression circuitry and a multi-event memory. The final chip
also includes a serializer to send the data off-chip.
2.6.4 Hints on Priority Encoder
Active Pixel Sensors used in high-energy particle physics require low power consumption to reduce
the detector material budget, a short integration time to reduce the probability of pile-up, and fast
readout to improve the detector data throughput. To satisfy these requirements, an Address-Encoder
and Reset-Decoder (AERD) asynchronous circuit for fast readout of the pixel matrix has been
developed. The AERD data-driven readout architecture performs the address encoding and reset
decoding through an arbitration tree, and allows reading out only the hit pixels. Compared to the
traditional rolling shutter scheme in MAPS, AERD achieves a short readout time and low power
consumption, especially at low hit occupancies.
The pALPIDEfs (full scale prototype of ALPIDE) for the ITS upgrade has been implemented with
pixels that consist of a low power (~40𝑛𝑊) binary fron-end and a data-driven readout circuit.
During the prototype development, many different parameters of the architecture were analyzed and
simulated, in order to find the best compromise between the S/N, detection efficiency, power and
area constraints. A pixel pitch of 28 μm has been chosen and the chip size is 15 mm × 30 mm,
consisting of 512 rows by 1024 columns, as shown in fig. 50 (left). The front-end works like an
analogue memory: it will generate a pulse with a shaping time of about 4 μs. If a strobe is applied
during this time after a hit arrives at the pixel, this hit data will be latched in the in-pixel state
register as a STATE signal. The AERD circuit, which reads and resets the hit pixels, is arranged in
double columns inside the matrix. The design takes advantage of a gating technique to reduce
power: SYNC is a gated clock signal propagated from the chip periphery, used to select the highest
priority pixel to be read and reset. A VALID signal is a flag which is activated when there is at least
one hit. A FAST OR gate chain is used to generate the VALID signal and propagate it to the chip
periphery. The reset decoder is fed by the output of the priority encoder (priority logic) and by the
SYNC signal from the higher level, and generates the SYNC signal for the lower level. That is then used to
select the pixel with the highest priority to be read and reset. See fig. 51.
Figure 51: The structure of the AERD basic logic block
For 1024 pixels to encode (the number of pixels in a double column), the numbers of hierarchical levels,
basic logic blocks and routing channels are 5, 341 and 16, respectively.
The block diagram of the tree structure to decode only 16 pixels is shown in fig. 52 as an example.
It is possible to see that we have only two hierarchical levels (the first column of four blocks and
the second column of one block). Each block is like the one in fig. 51. The VALID signal
propagates from the lower hierarchical level of the arbiter tree to the top. If pixel 4 is hit, through
the FAST OR chain, VALID will be active. During the readout phase, while the VALID signal is
active, a synchronous signal (SYNC) propagates back into the hit pixel to read its address.
Combined with priority logic, at the lowest hierarchical level of the tree, the SYNC signal resets
only the pixel with the highest priority during the same clock cycle after the address of that pixel
has been read. During the propagation of the SYNC signal, also the ADDRESS[3:0] of the pixel
being reset propagates down to the End of Column (EoC). The ADDRESS lines are managed by
three-state buffers which are enabled when SYNC is high, and in high impedance when SYNC is
low. Therefore, to ensure that the value of the ADDRESS is available at the EoC, half a SYNC
cycle is set aside for the propagation of the ADDRESS signals. On the falling edge of the SYNC,
the state register of the read pixel is reset and, when SYNC is low, the new configuration of VALID
and internal signals propagates so that the next pixel will be read in the subsequent SYNC
cycle.
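The tree arithmetic quoted above (5 hierarchical levels and 341 basic logic blocks for a 1024-pixel double column, 2 levels and 5 blocks for the 16-pixel example) and the data-driven encoding behaviour can be sketched in a few lines of Python. This is only an illustration of the principle with 4-input blocks, not the actual AERD circuit; the function names are my own.

```python
# Sketch (not the real AERD hardware): arbiter-tree arithmetic and the
# address-encoding behaviour for 4-input basic logic blocks.

def tree_parameters(n_pixels, fan_in=4):
    """Number of hierarchical levels and basic logic blocks needed
    to arbitrate n_pixels with blocks of the given fan-in."""
    levels, blocks, n = 0, 0, n_pixels
    while n > 1:
        n //= fan_in          # each block arbitrates fan_in inputs
        blocks += n           # blocks added at this level
        levels += 1
    return levels, blocks

def encode_highest_priority(hits):
    """Address of the highest-priority (lowest-address) hit pixel,
    as the encoder outputs it; None means VALID is inactive."""
    for address, hit in enumerate(hits):
        if hit:
            return address
    return None

print(tree_parameters(1024))   # the 5 levels / 341 blocks quoted above
print(tree_parameters(16))     # the 2-level, 16-pixel example of fig. 52
```

Each level of the tree contributes two address bits, so five levels yield the 10-bit address needed for 1024 pixels, consistent with the description at the start of this section.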
2.6.5 ALPIDE: readout mode summary and parameters
As we have said in the previous paragraphs, the pALPIDEfs chip uses the priority encoder logic to
read out the pixel matrix. The chip is organized as a matrix of 512 × 1024 pixels plus a periphery
(dead area) of 1.1 mm × 30 mm. The overall chip dimension is again 15 mm × 30 mm (fig. 53 right).
Figure 53: pALPIDE pixel matrix (left) and its structure (right).
The pixel matrix is organized in double-columns and each of them shares a priority encoder (fig. 53
left). One double-column corresponds to 1024 pixels. Each pixel has a size of about 28𝜇𝑚 ×
28𝜇𝑚. When a particle crosses the chip, in some pixels we will have an analogue signal produced
by the collected charge; this signal is amplified and discriminated at a pixel level and then, if the
logic output of the comparator is 1 (signal over threshold), the pixel registers this 1 in its memory (if
Figure 52: Two hierarchical levels of AERD to decode 16 pixels
this 1 is in coincidence with a strobe window, see fig. 54). At this point, the priority encoder reads
only the hit pixels and the digital signal is transmitted to the chip periphery. The matrix is organized
in 32 regions (each region is a matrix of 512 × 32 pixels) which are read out in parallel (see fig.
54). At the periphery, all the signals of the read pixels are serialized and transmitted out of the chip
(LVDS logic).
Clearly, this way of reading the matrix is faster than the MISTRAL one; in fact the
integration time is only 2 μs. The discrimination of the signal at pixel level leads to a power
consumption of only 39 mW/cm² (in the worst case). Less power consumption means less heat
dissipation and hence a simpler cooling plant.
Figure 54: Readout organization of the chip with details on the pixel cell.
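As a rough cross-check of what this power density implies for a single chip, the following back-of-the-envelope arithmetic is my own, not a figure quoted in the design documents:

```python
# What 39 mW/cm^2 (worst case) means for one 15 mm x 30 mm ALPIDE chip.

area_cm2 = 1.5 * 3.0            # chip size: 15 mm x 30 mm = 4.5 cm^2
power_density = 39e-3           # W/cm^2, worst-case value quoted above
supply = 1.8                    # V, nominal analogue/digital supply

power_w = power_density * area_cm2      # total power per chip
current_a = power_w / supply            # current drawn at nominal supply

print(f"{power_w * 1e3:.1f} mW per chip, {current_a * 1e3:.1f} mA at 1.8 V")
```

The result, on the order of 175 mW per chip, gives a feeling for why a lower power density directly simplifies the cooling plant.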
Different versions of the ALPIDE exist, from 1 to 4. Versions 1 and 2 are now available, whereas
version 3 will be delivered at the end of September 2015. Version 4 is being finalized and will
be available in December 2015. The v1 will be accurately described in chapter 4. V2 is an
intermediate version still not containing the High Speed Data Transmission Unit (DTU), which has
been implemented in the v3, and it allows full integration in the IB and OB Modules. The v3 will
contain all final elements. The v4 will have a lower power consumption, especially in the serializer
and LVDS driver blocks.
Concerning the power consumption, in fig. 55 we can see the power consumption of pALPIDE-3
for each electronics block. It is clear that the digital periphery gives the greatest contribution to the
power consumption. Also the contributions of the serializer, the LVDS logic and the PLL are not
negligible.
The in-pixel architecture is complex and we will give a brief description in chapter 4. For the next
paragraphs, it is important to know that a pixel has analogue and digital circuitry with independent
power supplies. This is important because we will mention the analogue and digital voltages in the
following.
Figure 55: ALPIDE-3 power consumption scheme.
2.6.6 ALPIDE: beam test measurements
In this section we present a brief description of the measurements performed on the pALPIDE-1 at
the PS test beam (September and December 2014). A π− beam of 5-7 GeV was used to measure
the efficiency, the fake hit rate and the spatial resolution. The results are depicted
in fig. 56.
Figure 56: Efficiency, fake hit rate and spatial resolution measurements for pALPIDE-1 (CERN PS).
In the figure above, in the graph on the left, we can see the efficiency and fake hit rate vs. the
threshold current for 50 μm thick chips: three non-irradiated and three irradiated with
10^13 1 MeV neq/cm². We can notice that λfake ≪ 10^-5/event/pixel while keeping an efficiency
εdetec > 99%. In the graph on the right, we can appreciate the good spatial resolution,
approximately between 4.5 and 5.5 μm, as a function of the threshold current for 50 μm thick chips: non-
irradiated, irradiated with 0.25 · 10^13 1 MeV neq/cm² and with 10^13 1 MeV neq/cm².
These results leave a very large margin of operation over the design requirements.
2.6.7 MISTRAL & ALPIDE: summary and comparison
In the previous sections we have described the MISTRAL and ALPIDE architectures. In table 6
below there is a summary and a comparison of their characteristics.
Parameter | ALPIDE | MISTRAL-O
Pixel pitch | 28 × 28 μm² | 36 × 64 μm²
Integration time | < 2 μs | ~20 μs
Power consumption | 39 mW/cm² | 97 mW/cm² (might further reduce to 73 mW/cm²)
Dead area | 1.1 × 30 mm² | 1.7 × 30 mm²
Readout mode | Priority encoder | Rolling shutter
Discriminator position | Pixel level | Column level
Table 6: ALPIDE and MISTRAL main parameters.
As we can see, the ALPIDE characteristics are better especially for integration time and power
consumption. For these reasons, the chips of the ALPIDE family will probably be chosen for the
new ITS.
2.7 Modules and Staves in the OB and IB
The pixel chip is the basic structure of the IB and OB stave and in this section we will describe how
they are assembled to build modules and staves.
First of all we need to take into account the fact that we can access the new ITS only from one
side, because on the other side the Muon Forward Tracker (MFT) is mounted. This implies that
commands and data come and go from the same side.
Chips in the IB are organized in staves composed of 9 independent sensors for a total length of
about 27 cm. Each chip drives its own data, clock and control lines (all masters). See fig. 57 for
the layout.
In the OB, a half-stave is composed of seven (Outer Layers) or four (Middle Layers) modules of
14 chips each, for a total length of about 150 cm and 80 cm respectively. In a module, the two rows
of sensors are independent. In this case we have a master/slave structure with two symmetric groups
of one master and six slaves. Only the master can access the data, control and clock lines. An entire
stave of the OB is made of two half-staves (see fig. 58).
Since the new ITS is accessible only from one side, the clock, control and data
lines can get in and out of the ITS only from that side. For example, in figs. 57 and 58, the left side is
not accessible and, in fact, the lines get in and out only from the right side.
Figure 57: IB stave organization. All the nine chips are masters.
Figure 58: OB stave organization. Master chips are yellow (top). Entire stave configuration for the middle
(in the middle) and outer (at the bottom) layers.
The data read from the chip have to be transmitted out of it; the data rate with which the data are
transmitted is:
1.2 Gb/s for the inner layers;
400 Mb/s for the outer layers.
A Flex Printed Circuit (FPC) has been designed in order to transmit the data out of the chips. It
consists of a flexible circuit connected directly to the chips via laser soldering, resulting in a compact
assembly. In the next paragraph, there will be a description of this circuit.
2.8 Flex Printed Circuit
The Flex Printed Circuit (FPC) characteristics are described in this section. This circuit is connected
directly to the chips and it is used to:
transmit the clock and control signals from the end-barrel to the chips;
power the chips;
transmit the data out of the chips.
The FPC is slightly different for the OB and IB. This is due to the different organization of the
chips in the two barrels and to the material budget, but the principle of operation is the same.
2.8.1 FPC for Inner Barrel Stave
The layout of the FPC of the IB Module is shown in fig. 59. The nine Pixel Chips in a Module are
read out in parallel: each chip sends its data stream to the end of Stave by a dedicated differential
pair. Two additional differential pairs distribute the clock and control signals to the nine chips.
In order to minimize the mechanical stress on the 50 μm-thin chips, polyimide PI2611, having a
Coefficient of Thermal Expansion (CTE) of 5 ppm/K, is considered for the FPC fabrication as an
alternative to the standard material (having a CTE of 21 ppm/K). For comparison, the CTEs of Si and C
are 3 ppm/K and 1.18 ppm/K, respectively.
Figure 59: Schematic view of the stack-up (top) and layout (bottom) of the FPC for the IB Stave.
The choice of the material to be used for the metal layers of the FPC is dictated by the need to
minimize the material budget, thus Al has been preferred to the standard Cu (the respective
radiation lengths being 8.9 cm and 1.44 cm). Along the same line, the total thickness of the
polyimide is 75 μm (and 40 μm for the solder mask) and the diameter of the holes, called vias, has been
fixed at 220 ± 10 μm to reduce the needed volume of SnAg for the soldering and the thermal stress
required to melt a larger amount of material, as explained in section 2.8.3. For practical reasons the metal ring
surrounding the FPC holes has a width of 150 μm, resulting from a compromise between a
reasonable minimum pitch of 600 μm between holes and the pad diameter needed to compensate
the tolerance in hole drilling during the various phases of the production.
The thickness of each Al layer is 25 μm, a value mainly dictated by the need to minimize
the voltage drop along the power supply lines and to ensure a differential impedance of 100 Ω on the
signal lines.
In fig. 59 we can also see the lines dedicated to the analogue (AVDD) and digital (DVDD) voltages
and the respective grounds (AGND, DGND). As already mentioned, a chip has digital and analogue
circuitry, and these are the lines that feed the two circuits. We will describe them
in chapter 4.
This FPC for the Inner Layers has a length and width of about 30 cm and 15 mm respectively (it
covers 9 chips). The total thickness is approximately 150 μm.
The baseline procedure for the FPC production consists of Al sputtering under vacuum over both
faces of a polyimide foil with drilled holes. The I/O and power supply lines are engraved either by
chemical etching or by laser etching and, finally, a coverlay (solder mask) is glued on both faces by
lamination and vias are then opened.
2.8.2 FPC for the Outer Barrel Stave
The FPC for the OB Module interconnects 14 Pixel Chips arranged in two rows: each row of seven
chips is treated as an independent array in terms of power distribution and is driven by a master chip
located at the end for the bi-directional data exchange (see paragraph 2.7). All master chips on one
Half-Stave long row, i.e. four or seven depending on the layer, receive the clock and control signals
from the end of the Stave on a common differential pair and, after regeneration inside the chips
themselves, distribute them to the remaining six chips in a Module row.
Figure 60: Schematic view of the cross-section (top) and layout (bottom, version-0 July 2014) of the FPC
for the OB Stave.
The clock is connected to the master via multi-drop lines and data are sent on an internal bus. Each
module of the ITS sends data directly to the end of stave where a readout link is located; each Half-
Stave is served by two readout links. The proposed scheme, which provides a reasonable level of
redundancy against single chip failures or of an entire Module, has been implemented in the FPC
layout shown in fig. 60.
From fig. 60 we can see the differential lines, which are made of Cu (brown background, at the bottom
of fig. 60). Cu was chosen (instead of Al) for the OB to reduce the costs. It is also possible to
see the lines dedicated to the analog (AVDD) and digital (DVDD) voltages (orange background)
and their respective grounds. We have ~70 pads on each chip resulting in ~70 vias per chip on the
FPC. This FPC has a length and width of 21.6 cm and 33 mm respectively; and its thickness is only
150 μm.
The thickness of each Cu layer is 17 μm, a value mainly dictated by the need to minimize
the voltage drop along the power supply lines and to ensure a differential impedance of
100 Ω on the signal lines.
Taking into account the estimated power consumption, the voltage drop along a row of seven chips
is dealt with by the power planes of the FPC; however, the length of the OB Half-Staves requires
the use of an additional Power Bus (PB) to limit the overall voltage drop to 100 mV with respect to
the nominal value of 1.8 V, which guarantees the full functionality of the Pixel Chip. A brief
description of the PB can be found in paragraph 2.9.
2.8.3 Pixel Chip to FPC connection
The FPC is connected to the chips by means of laser soldering for power supply and I/O connections. A
via is a way to connect one level of a circuit (the FPC) with another level (the chip). Vias are
metalized holes in the FPC with a diameter of 220 ± 10 μm.
For the upgrade of the ALICE ITS a dedicated study was started to find a new connection scheme
for monolithic silicon pixel detectors. The main requirements are:
Compact Module layout, with a minimum of dead area in the chip periphery.
High quality, low inductance electrical connection.
Improved power connection scheme over the full chip surface.
Highly robust and mechanically stable connection technique.
The main technique being pursued and evaluated for the connection between the FPC and the
Pixel Chips is laser soldering.
Laser soldering is an industrial application that will be used to connect the chip pad with the
corresponding metal coated hole in the FPC, using a solder ball which is melted locally by a laser
beam, as schematically shown in fig. 61.
Figure 61: Schematic view of laser soldering technique.
This avoids thermal stress on the full Module structures as the heat is only generated in the small
local area of the size of the connection pad. The optimization study presently carried out aims to
establish a procedure that ensures a high precision positioning of the chip and a good flatness of the
Module structure after soldering, as well as high quality and reliable electrical connections.
The laser spot size can be optimized for any solder ball diameter (down to 100 μm) and the region
to be heated can be precisely limited within the hole edges (≈ 400𝜇𝑚 for a 200𝜇𝑚 hole). As a
result, thermal stresses are minimized and chips and FPC integrity are ensured.
Laser soldering requires high accuracy in the alignment of the components and the tooling
necessary to place the solder micro-balls. In addition, there are constraints for chip positioning as an
overall accuracy of 100 μm or better is aimed at. Various mechanical supports have been designed
and manufactured in order to achieve the required precision. In fig. 62 one can see an example of
these mechanical supports located in Bari, one of the module production sites. The
chips are positioned on a special base and a vacuum pump is activated; the FPC is aligned on
top of them and then a frame grid and a quartz window are positioned above. Finally, the laser
beam is switched on and the soldering procedure begins.
Figure 62: Laser soldering procedure (Bari, Italy).
Fig. 63 shows on the left hand side a top view picture of a melted ball after soldering and on the
right hand side the metallurgical cross section analysis of a soldered connection after embedding the
Module in resin, cutting and polishing. The good wetting of the solder on the chip pad and on the
FPC is clearly visible.
Figure 63: Pictures of solder contacts of test samples using 50 μm thick chips and single chip FPCs. The
cross section view demonstrates the good wetting of the contact surfaces.
Once the soldering procedure is over, fig. 64 shows the final result for a Module of the OB with
the 14 chips soldered to the FPC (chip side).
Figure 64: Pixel chip side after the laser soldering procedure for an OB Module.
2.9 Power Bus
We need to power the chips with adequate analogue and digital supplies. The nominal value for
these voltages is 1.8 V.
The PB is a multilayer Al-polyimide bus, which brings analogue and digital power to all FPCs of a
Half-Stave (OB). The PB runs over the FPCs and is connected to them by tin soldering. The
maximum overall thickness of the aluminum layers of the PB is 200 μm. The limit is due to the
maximum material budget allowed. In fig. 65 we can see the pads on FPC and the corresponding
section of PB.
Figure 65: Pads for the PB connection on the FPC (top) and the PB layout (bottom).
2.10 Stave configuration summary
Here follows a summary of the stave configuration for the different layers of the new ITS:
INNER LAYERS (fig. 66 left)
o The three inner layers
o One stave corresponds to 9 chips, all masters. Total length ~ 30 cm.
o One FPC covers 9 chips and it transports both the data and power (no PB in this
case). Digital and analogue voltage: 1.8 V.
o 48 staves in total.
o Data rate on the FPC data lines: 1.2 Gb/s.
MIDDLE LAYERS
o The two middle layers
o One module corresponds to 14 chips (two independent rows of 7 chips). One master
and six slaves per row per module.
o One FPC covers one module.
o One PB covers one module. Digital and analogue voltage: 1.8 V.
o One half-stave corresponds to 4 modules. Total length ~ 80 cm.
o One stave corresponds to two half-staves. 8 modules in total.
o 54 staves in total.
o Data rate on the FPC data lines: 400 Mb/s
OUTER LAYERS (fig. 66 right)
o The two outer layers
o One module corresponds to 14 chips (two independent rows of 7 chips). One master
and six slaves per row per module.
o One FPC covers one module.
o One PB covers one module. Digital and analogue voltage: 1.8 V.
o One half-stave corresponds to 7 modules. Total length ~ 150 cm.
o One stave corresponds to two half-staves. 14 modules in total.
o 90 staves in total.
o Data rate on the FPC data lines: 400 Mb/s.
Figure 66: Inner Barrel (left) and Outer Barrel (right) stave configuration.
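The chip counts implied by the summary above can be cross-checked with a few lines of arithmetic; the totals below are my own calculation from the stave and module numbers listed:

```python
# Chips per barrel, from staves x modules x chips-per-module as listed above.

inner  = 48 * 9                 # 48 staves, 9 chips each (all masters)
middle = 54 * 8 * 14            # 54 staves, 8 modules/stave, 14 chips/module
outer  = 90 * 14 * 14           # 90 staves, 14 modules/stave, 14 chips/module

print(inner, middle, outer, inner + middle + outer)
```

The total, slightly above 24000 chips, gives a sense of the scale of the production and testing effort for the new ITS.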
For the OB Stave, each FPC has a length of 21.6 cm and hence, to cover the whole half-stave, we need
to glue four (middle layers) or seven (outer layers) FPCs on a Cold Plate and then connect the data
lines with micro flex bridges. This soldering is made with Sn in order to connect all the lines of one
FPC to the next one up to the end of the stave. In fig. 66 one can see this kind of soldering for two
consecutive FPCs. At the end of the stave, a FPC extender is mounted in order to transmit the data
out of the stave. The end of the FPC extender is connected to a special coaxial cable called Twinax.
This cable is the link toward the read-out electronics placed outside the ALICE barrel.
Figure 66: Connections between two adjacent FPCs.
Chapter 3: Test of the FPC for the Outer Barrel Stave
3.1 Test objectives
The general aim of the tests on the FPCs for the OB Stave is to analyze the effects (noise, attenuation)
on a differential signal passing through the FPCs and to test different assemblies, starting with a single
FPC up to a complete OB Stave, i.e. 7 FPCs. This allows determining the maximum data rate that can
be reached along this line as a function of its length.
To analyze the signal the eye-diagram and the Bit Error Rate (BER) will be studied for different
data rates.
We have, as a reference, the nominal data rates for the layers of the new ITS. The test will allow us
to verify whether we can transmit digital signals at these or even higher rates, by analyzing the
eye diagrams and the jitter.
Moreover, a FPC has two different types of differential lines: one type without vias and another
type with vias. We want to analyze whether there are any differences in their performance.
3.2 Tested FPC
Version 0 of the FPC, produced in July 2014, has been tested (fig. 67). This was the first
production. These FPCs have small soldering pads (at the end) and they also have two different
types of lines:
Lines with vias.
Lines without vias.
The lines are differential. As we can see in fig. 68, the vias are holes in the FPC and we have one
via per line per FPC. The other kind of line, without vias, runs directly from the beginning
to the end of the FPC, without any interruption.
Figure 67: FPC-V0 (July 2014)
Figure 68: Differential line with vias
The length and the width of this FPC are 21.6 cm and 33 mm, respectively. It is important to
underline that the lines of this FPC are identical to those of the next produced versions, so the test
performed is also valid for the latest FPC versions. The lines of this FPC are made of copper, and
they have a thickness of 17 µm, a width of 100 µm and a differential impedance of 100 Ω. The
space between two adjacent strips is 100 µm.
3.3 Experiment steps
The test began with the analysis of a single FPC; then another FPC was added at every step
until reaching a total number of 7 FPCs. Fig. 69 summarizes all the steps of the experiment.
Figure 69: All the steps of the experiment, from 1 to 7 FPCs.
To connect one FPC to its neighbour, only wire-bonds were possible because the soldering pads of
this FPC version are too small. In fig. 69 the two boards used to connect the FPC to the external
instruments are clearly visible. Also with these boards, the connections are implemented using the
wire-bond technique, which uses small wires, with a diameter of approximately 20 μm, to
connect two pads of two adjacent FPCs. In fig. 70 we can see these small wires connecting two FPCs
(left) or the FPC with the green board (right).
Figure 70: FPC-FPC (left) and FPC-board (right) connections. The small wires are more visible in the photo
on the right.
The inductance (~1 nH) added to the line by the wire bonds is always higher than the one added by
the flex bridges, so the test is performed in the worst possible case.
With the board it is possible to inject a differential signal in the FPC lines thanks to two SMA
cables (see fig. 69 on the top-left corner). Later we will describe the experimental setup.
3.4 Eye diagram and BER measurements
In this section the definition of the Bit Error Rate (BER) and the eye-diagram concept are presented.
Later, the experimental results related to these measures will be shown.
3.4.1 Bit Error Rate
A digital signal is a sequence of bits set to 0 or 1. If we transmit it over a long wire, it can
be corrupted by the attenuation of the line itself. So, if we send a data pattern through a line,
we want to know whether the pattern has been modified when the signal arrives at the end of
the line.
BER is the ratio between the number of errors (for example a logic 1 confused with a 0) and the
total number of bits sent in a certain time T:

BER = #errors / #bits_sent (15)
If we want to measure the BER with a certain confidence level, we need a time T defined by the
equation:

T = −ln(1 − CL) / (f · BER) (16)

where f is the data rate of the signal, CL the confidence level and BER the limit for the bit error
rate. Typically this limit is chosen to be 10^-12 because this is a commercial value. Clearly, if one
wants a lower limit (e.g. 10^-16), the time T needed for the measurement increases.
The maximum value for the BER is 0.5 (50%) because we have a binary data pattern and hence the
probability to make an error is 50% (the bit is 0 or 1).
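Equations (15) and (16) can be put directly in code form; the sketch below is a minimal illustration (the function names are mine):

```python
import math

def ber(n_errors, n_bits):
    """Eq. (15): bit error rate as errors over bits sent."""
    return n_errors / n_bits

def measurement_time(data_rate, ber_limit, confidence=0.95):
    """Eq. (16): time T (seconds) needed to claim BER < ber_limit at the
    given confidence level, assuming no errors are observed during T."""
    return -math.log(1.0 - confidence) / (data_rate * ber_limit)

# Example: the 1.2 Gb/s inner-layer reference rate, commercial 1e-12 limit.
t = measurement_time(1.2e9, 1e-12)
print(f"T = {t:.0f} s (about {t / 60:.0f} minutes)")
```

At 1.2 Gb/s the required time is under an hour; lowering the BER limit to 10^-16 multiplies T by four orders of magnitude, which is why the commercial value is used.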
3.4.2 Eye diagram
The data eye diagram is a methodology to represent and analyze a sequence of high speed digital
signals. The eye diagram allows key parameters of the electrical quality of the signal to be quickly
visualized and determined. The data eye diagram is constructed from a digital waveform by folding
the parts of the waveform corresponding to each individual bit into a single graph with signal
amplitude on the vertical axis and time on horizontal axis. By repeating this construction over many
samples of the waveform, the resultant graph will represent the average statistics of the signal and
will resemble an eye. The eye opening corresponds to one bit period and is typically called the
Unit Interval (UI) width of the eye diagram. An ideal digital waveform with sharp rise and fall
times and constant amplitude will have an eye diagram as shown in fig. 71.
Figure 71: Ideal high speed digital signal with its eye diagram.
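The folding construction described above can be sketched in a few lines; this is a toy illustration of the principle, not a scope algorithm, and the function name and sample waveform are my own:

```python
# Build an eye diagram by folding sample times modulo a 2-UI window,
# so that every bit of the waveform overlays the same time axis.

def fold_into_eye(times, amplitudes, unit_interval):
    """Map absolute sample times onto a two-unit-interval window; a scope
    overlays many such folded traces to accumulate the eye statistics."""
    window = 2 * unit_interval
    return [(t % window, a) for t, a in zip(times, amplitudes)]

# Toy waveform: 1 ns bit period, sampled every 0.25 ns (pattern 1100...).
times = [i * 0.25e-9 for i in range(16)]
amps  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
for t, a in fold_into_eye(times, amps, 1e-9)[:4]:
    print(t, a)
```

With real sampled data, plotting all folded points as a 2D histogram reproduces the eye shapes of figs. 71 and 72.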
Obviously, this ideal eye diagram offers little additional information beyond the time domain
waveform display.
Real world high speed digital signals suffer significant impairments including attenuation, noise,
crosstalk, etc. The data eye diagram for a typical high speed digital signal is shown in fig. 72.
Notice how the diagram more resembles the shape of an eye.
Figure 72: Typical high speed digital signal with its eye diagram.
For example Low Voltage Differential Signaling (LVDS) is a commonly used interface standard for
high speed digital signals. By providing a relatively small signal amplitude and tight electric and
magnetic field coupling between the two differential lines, LVDS significantly reduces the amount
of radiated electromagnetic noise and power lost to conductor resistance.
A representative eye diagram is shown in fig. 73 along with some of the typical measurements that
can be performed on the diagram.
Figure 73: Typical eye diagram measurements
All the measurement results are the statistical average of the samples of the waveform at the point
shown. The measurements are defined as follows:
One level: in an eye pattern it is the mean value of all logic ones.
Zero level: in an eye pattern it is the mean value of all logic zeros.
Eye amplitude: is the difference between the one and zero levels.
Eye height: is a measure of the vertical opening of an eye diagram. An ideal eye opening
measurement would be equal to the eye amplitude measurement. For a real eye diagram
measurement, noise on the eye will cause the eye to close. As a result, the eye height
measurement determines the eye closure due to noise. The signal to noise ratio of the high
speed data signal is also directly indicated by the amount of eye closure.
Eye crossing percentage: the crossing level is the mean value of a thin vertical histogram
window centered on the crossing point of the eye diagram. The eye crossing percentage is
then calculated using the following equation:
EyeCross% = 100 · (crossing level − zero level) / (one level − zero level) (17)
Eye crossing percentage gives an indication of duty cycle distortion or pulse symmetry
problems in the signal. When the eye crossing symmetry value deviates from the perfect
50% point, the eye closes and thus the electrical quality of the signal is degraded.
Bit period (or UI): is a measure of the horizontal opening of an eye diagram at the crossing
points of the eye and is usually measured in picoseconds for a high speed digital signal. The
data rate is the inverse of the bit period. The bit period is commonly called the Unit Interval
(UI) when describing an eye diagram. The advantage of using UI instead of actual time on
the horizontal axis is that it is normalized, so eye diagrams with different data rates can be
easily compared.
Eye width: is a measure of the horizontal opening of an eye diagram. It is calculated by
measuring the difference between the statistical mean of the crossing points of the eye.
Rise time: is a measure of the mean transition time of the data on the upward slope of an
eye diagram. The measurement is typically made at the 20 and 80 percent or 10 and 90%
levels of the slope.
Fall time: is a measure of the mean transition time of the data on the downward slope of an
eye diagram. The measurement is typically made at the 20 and 80 percent or 10 and 90%
levels of the slope.
Jitter: is the time deviation from the ideal timing of a data-bit event and is perhaps one of
the most important characteristics of a high speed digital data signal. To compute jitter, the
time deviations of the transitions of the rising and falling edges of an eye diagram at the
crossing point are measured. Fluctuations can be random and/or deterministic. The time
histogram of the deviations is analyzed to determine the amount of jitter. The units for a
jitter measurement on a high speed digital signal are normally in picoseconds.
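The vertical measurements defined above, in particular the eye crossing percentage of equation (17), can be illustrated on synthetic sample sets; the voltage values below are invented purely for illustration:

```python
from statistics import mean

# Synthetic scope samples (V): logic-one levels, logic-zero levels, and
# samples from a thin window centered on the crossing point.
ones      = [0.39, 0.41, 0.40, 0.42]
zeros     = [0.01, 0.02, 0.00, 0.01]
crossings = [0.20, 0.21, 0.19]

one_level  = mean(ones)                   # mean of all logic ones
zero_level = mean(zeros)                  # mean of all logic zeros
eye_amplitude = one_level - zero_level    # one level minus zero level
cross_pct = 100 * (mean(crossings) - zero_level) / eye_amplitude  # eq. (17)

print(round(eye_amplitude, 3), "V,", round(cross_pct, 1), "%")
```

A crossing percentage near 50%, as in this synthetic example, indicates a symmetric duty cycle; a deviation from 50% would signal duty cycle distortion closing the eye.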
3.4.3 Experimental setup
In fig. 74 the experimental setup for the eye diagram analysis and BER measurements is depicted.
For the eye diagram measurements, an LVDS signal with an amplitude of 400 mV is generated by
the Agilent N5980A BERT, choosing a Pseudo-Random Bit Sequence (PRBS) of 2^31 − 1 bits for the
data pattern. This choice is possible because the instrument is connected to a PC via USB, and a
software application on the PC controls the Agilent BERT. In practice this device, through its two
complementary outputs, generates a pseudo-random sequence of digital 0s and 1s, the second
output being simply the inverted copy of the first.
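A PRBS of period 2^31 − 1 is conventionally produced by a linear-feedback shift register with the polynomial x^31 + x^28 + 1; the sketch below implements that standard recurrence, though the BERT's internal implementation is of course its own:

```python
# Standard PRBS-31 generator: 31-bit LFSR, taps at bits 31 and 28
# (polynomial x^31 + x^28 + 1), sequence period 2^31 - 1 bits.

def prbs31(seed=0x7FFFFFFF, n_bits=16):
    state = seed & 0x7FFFFFFF          # 31-bit register, must be non-zero
    out = []
    for _ in range(n_bits):
        # feedback bit = XOR of the two tap positions
        new_bit = ((state >> 30) ^ (state >> 27)) & 1
        state = ((state << 1) | new_bit) & 0x7FFFFFFF
        out.append(new_bit)
    return out

print(prbs31())
```

The long period is what makes the pattern a good stress test: over 2^31 − 1 bits the line sees every run length of consecutive identical bits up to 31, exercising the low-frequency content of the channel.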
Figure 74: Experimental setup for Eye Diagram (top) and BER (bottom) measurements
The software also allows choosing the data rate (nine fixed values). The differential signal then
passes through two SMA cables and reaches the FPCs. The FPC differential lines have an
impedance of 100 Ω. At the end of the line, composed of 1 to 7 FPCs, the signal arrives at the
oscilloscope, which automatically analyzes the LVDS signal and creates the eye diagram
with an estimate of the various parameters described in the previous paragraph.
For the BER measurements, the experimental setup is similar but the oscilloscope is not used
because, thanks to the software installed on the PC, the Agilent BERT recognizes automatically the
errors in the data pattern. In fact, the signal goes out from the two complementary outputs, passes
through the FPC line and then arrives at the corresponding inputs; so the device knows what it has
sent, sees the final signal and detects any errors.
In fig. 75, a photograph of the experimental setup is shown.
Figure 75: Photograph of the experimental setup with only one FPC.
The FPC also has a ground plane for the µ-strips, situated on the side opposite to the
strips. At the end of each FPC, on the side opposite to the small pads, there is a large pad for the
FPC-to-FPC ground connection. This is important to guarantee the continuity of the ground when
more than one FPC is used.
3.4.4 Measurement procedure
The Agilent BERT permits choosing among nine data rates for an LVDS signal:
0.125, 0.156, 0.622, 1.06, 1.25, 2.13, 2.49, 2.67 and 3.13 Gb/s. To check the 400 Mb/s rate we used a
different set-up. Table 7 summarizes the lengths of the various staves and their data rates,
which are the reference for our measurements. One FPC has a length of 21.6 cm (outer
barrel).
Concerning the eye diagram measurements, we measured the eye width and the eye height for
each data rate (ten samples per data rate). The estimate of the eye width and eye height for a
given data rate is the mean of these ten samples; the error associated with the mean value is the
semi-difference between the maximum and the minimum values.
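As a minimal sketch of this estimation procedure (the sample values below are hypothetical, for illustration only):

```python
def mean_with_semidiff(samples):
    """Mean of the samples; error = (max - min) / 2 (semi-difference)."""
    mean = sum(samples) / len(samples)
    error = (max(samples) - min(samples)) / 2.0
    return mean, error

# hypothetical eye-width samples (ps) for one data rate
widths_ps = [412, 415, 410, 414, 413, 411, 416, 412, 414, 413]
mean, err = mean_with_semidiff(widths_ps)
```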
Layer  | N° FPCs | Length | Ref. transm. data rate
Inner  | < 2     | 30 cm  | 1.2 Gb/s
Middle | 4       | 80 cm  | 400 Mb/s
Outer  | 7       | 150 cm | 400 Mb/s
Table 7: Summary of the layer characteristics.
Passing to the BER measurements, according to equation 16 we chose a confidence level of
95% and a BER limit of 10⁻¹², which is a standard commercial value. With this choice, a certain
time is needed to complete the BER measurement for each data rate; this time is calculated with
equation 16. Table 8 gives an idea of the time needed to reach a confidence level
of 95% for each data rate.
Data Rate (Gb/s) | Time (h)
0.125 | 6.66
0.156 | 5.33
0.622 | 1.34
1.06  | 0.785
1.25  | 0.666
2.13  | 0.391
2.49  | 0.333
2.67  | 0.312
3.13  | 0.266
Table 8: Time needed for a BER measurement to reach a confidence level of 95%.
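The times in table 8 can be reproduced from the usual zero-error confidence relation, CL = 1 − (1 − BER)^N, which gives N = ln(1 − CL)/ln(1 − BER) ≈ 3 × 10¹² error-free bits at 95% confidence and a 10⁻¹² limit. A sketch, assuming this is the content of equation 16:

```python
import math

def ber_test_time_h(rate_gbps, ber_limit=1e-12, cl=0.95):
    """Hours of error-free transmission needed to claim BER < ber_limit
    at confidence cl: N = ln(1 - cl) / ln(1 - ber_limit) bits."""
    n_bits = math.log(1.0 - cl) / math.log(1.0 - ber_limit)  # ~3.0e12 bits
    return n_bits / (rate_gbps * 1e9) / 3600.0

for rate in (0.125, 0.156, 0.622, 1.06, 1.25, 2.13, 2.49, 2.67, 3.13):
    print(f"{rate:5.3f} Gb/s -> {ber_test_time_h(rate):.3f} h")
```

At 0.125 Gb/s this gives about 6.66 h, matching the first entry of the table.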
Another important parameter measured is the so-called gating errors, i.e. the absolute number of
errors in the data pattern occurring during the measurement time.
3.4.5 Eye diagram and BER main results
INNER LAYER SET-UP
The reference data rate for the inner layers is 1.2 Gb/s. To simulate them, we used two FPCs,
corresponding to a total length of about 40 cm. The real length of an FPC for the inner layers is 30
cm, but 40 cm was the closest length we could obtain. An LVDS signal pattern with a data rate of
1.25 Gb/s was chosen for the measurement. We analyzed the eye diagram for the line
with vias and for the line without vias.
As we can see in fig. 76, the eyes for both lines are very open and no differences are visible
between the two eye diagrams.
Moreover, the BER measured with the Agilent BERT is below the limit of 10⁻¹², so no errors
were detected in the data pattern (gating errors = 0). This is a good result, because it means that we
can transmit a differential signal through an FPC of 30 cm without problems in terms of attenuation
or errors in the data pattern.
Figure 76: Eye diagrams for the line with (left) and without (right) vias at 1.25Gb/s with 2 FPCs.
MIDDLE LAYERS SET-UP
The reference data rate for the middle layers is 400 Mb/s. To simulate them, we used four
FPCs, corresponding to a total length of about 80 cm. An LVDS signal with a data rate of 622 Mb/s
was chosen for the measurement: a data rate of 400 Mb/s was not available, so we chose the
nearest one. Since the chosen data rate is greater than the reference one, if we obtain an eye
with a good opening the result certainly also holds for the lower rate of 400 Mb/s. The eye
diagrams for the line with vias and for the line without vias have been analyzed.
Figure 77: Eye diagrams for the line with (left) and without (right) vias at 622Mb/s with 4 FPCs
As we can see in fig. 77, the eyes for both lines are very open and no differences are visible
between the two eye diagrams.
As in the IB case, the BER measured with the Agilent BERT is below the limit of 10⁻¹², so no
errors were detected in the data pattern (gating errors = 0).
OUTER LAYERS SET-UP
The reference data rate for the outer layers is 400 Mb/s. For the OB setup, we used seven
FPCs, corresponding to a total length of about 150 cm. An LVDS signal with a data rate of 400 Mb/s
(the reference one) was chosen for the measurement. This signal was generated with
another instrument (Agilent 81133A), which is more complex (it is programmable) than the Agilent
BERT. It could not be used for the previous tests because it was in use for another
experiment in the laboratory. The eye diagrams for the line with vias and for the line without vias
have been analyzed.
Figure 78: Eye diagrams for the line with (left) and without (right) vias at 400Mb/s with 7 FPCs
As we can see in fig. 78, the eyes for both lines are very open.
Also in this extreme-length case, the BER measured with the Agilent BERT is below the limit
of 10⁻¹², so no errors were detected in the data pattern (gating errors = 0).
We did, however, find a problem in the eye diagram on the right: it presents
a sort of splitting of its bands, a phenomenon known as Intersymbol Interference (ISI).
In practice, the wave shape of a given bit interferes with the preceding N bits and with the following
ones.
Figure 79: A representation of the ISI phenomenon.
The ISI depends on the characteristics of the transmission medium. Referring to fig. 79, a bit
sequence such as 101010 has a higher frequency content than a sequence such as 110011; higher
frequency means more attenuation, so when the eye diagram is built the various sequences
have different shapes due to the attenuation, causing the signal segments (from
which the eye diagram is created) to cross at different points (ISI in fig. 79).
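The frequency argument can be made concrete: the fundamental frequency of a repeating bit pattern is the data rate divided by the pattern length, so alternating bits carry the highest-frequency content and suffer the most attenuation. A small illustrative sketch:

```python
def fundamental_ghz(pattern, rate_gbps):
    """Fundamental frequency of a repeating bit pattern: one period of the
    pattern spans len(pattern) bit intervals."""
    return rate_gbps / len(pattern)

print(fundamental_ghz("10", 0.622))    # 101010...: rate/2, most attenuated
print(fundamental_ghz("1100", 0.622))  # 110011...: rate/4, less attenuated
```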
BER RESULT DETAILS
As described in the previous paragraphs, the BER is always below its limit at the reference data
rates. We also analyzed signals at higher data rates; the BER
measurements for all data rates and all steps (from 1 to 7 FPCs) are presented in table 9.
N° FPCs | Data rates (Gb/s) with BER < 10⁻¹² | Data rate (Gb/s) with BER > 10⁻¹² | BER (line with vias) | BER (line w/o vias)
1 | all from 0.13 to 3.13 | none | — | —
2 | all from 0.13 to 3.13 | none | — | —
3 | 0.13; 0.156; 0.622; 1.06; 1.25; 2.13; 2.49; 2.67 | 3.13 | 1.14·10⁻⁷ | 7.265·10⁻⁸
4 | 0.13; 0.156; 0.622; 1.06; 1.25; 2.13 | 2.49 | 1.403·10⁻¹¹ | 1.371·10⁻¹¹
  |  | 2.67 | 3.646·10⁻⁸ | 5.169·10⁻⁸
  |  | 3.13 | 2.216·10⁻³ | 2.312·10⁻³
5 | 0.13; 0.156; 0.622; 1.06; 1.25 | 2.13 | 9.962·10⁻⁹ | 3.789·10⁻⁸
  |  | 2.49 | 4.159·10⁻⁴ | 4.785·10⁻⁴
  |  | 2.67 | 3.338·10⁻³ | 3.785·10⁻³
  |  | 3.13 | 3.213·10⁻² | 3.499·10⁻²
6 | 0.13; 0.156; 0.622 | 1.06 | 4.215·10⁻¹⁰ | 1.013·10⁻¹¹
  |  | 1.25 | 5.606·10⁻⁹ | 7.193·10⁻¹⁰
  |  | 2.13 | 7.513·10⁻³ | 5.250·10⁻³
  |  | 2.49 | 2.639·10⁻² | 2.670·10⁻²
  |  | 2.67 | 4.122·10⁻² | 4.121·10⁻²
  |  | 3.13 | 5.000·10⁻¹ | 9.041·10⁻²
7 | 0.13; 0.156; 0.622 | 1.06 | 2.183·10⁻⁹ | 6.614·10⁻¹⁰
  |  | 1.25 | 1.472·10⁻⁷ | 9.203·10⁻⁸
  |  | 2.13 | 2.992·10⁻² | 3.054·10⁻²
  |  | 2.49 | 6.379·10⁻² | 6.635·10⁻²
  |  | 2.67 | 5.000·10⁻¹ | 8.583·10⁻²
  |  | 3.13 | 5.000·10⁻¹ | 5.000·10⁻¹
Table 9: All BER measurements.
From these results, we can notice that the performance of the lines with and without vias is very
similar in terms of BER. Sometimes the channel without vias is better, but the difference is really
small. We can thus conclude that the presence of vias along the line does not negatively affect the
data transmission.
EYE DIAGRAMS FROM 1 TO 7 FPCs
In this section, we show the results of the eye diagram on the line with vias, at 622 Mb/s, for 7
different line lengths. The 7 graphs are shown in fig. 80.
As can be noticed, the eye becomes more closed in both width and height as the number of FPCs
increases. This property is analyzed in the next two graphs. In any case, even with 7 FPCs at
622 Mb/s the eye is very open.
Figure 80: Eye diagrams from 1 to 7 FPCs at 622 Mb/s for the line with vias.
EYE WIDTH AND EYE HEIGHT ANALYSIS
In fig. 81, the eye width as a function of the data rate for the line without vias is shown. Only the
cases with 1 and 7 FPCs have been chosen, because at a fixed data rate the eye width does not
differ greatly among the cases with 2, 3, 4, 5 and 6 FPCs.
Figure 81: Eye width vs. Data Rate for the line without vias.
We can see that the eye width decreases with the data rate, as expected, and, at a fixed data rate, it is
smaller in the case with 7 FPCs when the data rate is greater than 500 Mb/s. The blue
points are missing for data rates greater than 2 Gb/s because there the eye is too closed to measure
the eye width. The line with vias behaves very similarly and is therefore not shown.
In fig. 82, the eye height as a function of the data rate for the channel without vias is shown. The
eye height decreases with the data rate, as expected, and, at a fixed data rate, it decreases as the
number of FPCs increases. The points corresponding to more than 4 FPCs are missing at the
highest data rates because in those cases the eye was too closed to measure the eye height.
The line with vias behaves very similarly and is therefore not shown.
Figure 82: Eye height vs. Data Rate for the line without vias.
Again, from these measurements, we can conclude that the performance of the two lines is
practically the same in terms of eye width and eye height.
3.5 Jitter measurements
In this section, the measurements related to the jitter will be presented. These measurements are
directly linked to the eye diagram measurements, but they deserve a separate analysis.
3.5.1 A jitter definition and parametrization
A jitter definition has already been given in paragraph 3.4.2. In summary, the jitter is a "short term
variation" from the ideal timing of an event; it is a sort of noise in the horizontal (time) direction.
In fig. 83 we can see the so-called periodic jitter.
Figure 83: Representation of the periodic jitter.
The jitter is a statistical phenomenon and it includes instability in the signal period and frequency.
Typically it is parametrized through the random jitter and the deterministic jitter. These two
components define the total jitter as follows:
Tj = α · Rj + Dj (18)
where Rj is the random jitter, Dj the deterministic jitter and α is a parameter that depends on the
BER limit. For a BER limit of 10⁻¹², α = 14.069.
Usually the jitter is expressed in picoseconds, but sometimes we can find it expressed in UI.
The total jitter is directly linked to the eye diagram measurements (see fig. 73) and it is calculated
automatically by the oscilloscope.
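Equation 18 can be transcribed directly, with the α quoted for a 10⁻¹² BER limit (the Rj and Dj values below are hypothetical, for illustration only):

```python
def total_jitter_ps(rj_ps, dj_ps, alpha=14.069):
    """Tj = alpha * Rj + Dj (eq. 18); alpha = 14.069 for a BER limit of 1e-12."""
    return alpha * rj_ps + dj_ps

# e.g. a hypothetical 1 ps of random and 10 ps of deterministic jitter
tj = total_jitter_ps(1.0, 10.0)  # 24.069 ps
```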
3.5.2 Experimental setup and procedure
For the jitter measurements we used the same experimental setup as for the eye
diagram measurements (see fig. 74, top).
Again, we acquired ten samples of the jitter for each data rate and calculated their mean.
The error associated with the mean value is the semi-difference between the maximum and the
minimum values.
We fixed the jitter limit at 0.3 UI (30% of the entire UI), a limit often used in
commercial applications. Hence, if we find a jitter greater than 0.3 UI for a certain data rate, it
means that the "noise" on the signal is too high and its transmission through the FPC line is not
acceptable.
It is important to underline that the Agilent BERT also generates jitter. Its contribution is known
and is subtracted in quadrature from the total jitter measured by the oscilloscope. The resulting
jitter values are the ones plotted in the graphs of the next paragraphs.
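The quadrature subtraction of the known BERT contribution can be sketched as:

```python
import math

def intrinsic_jitter_ps(measured_ps, bert_ps):
    """Remove the known BERT jitter contribution in quadrature
    from the total jitter measured by the oscilloscope."""
    return math.sqrt(measured_ps ** 2 - bert_ps ** 2)
```

For example, a measured total of 5 ps with a 3 ps source contribution leaves 4 ps of intrinsic jitter.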
3.5.3 Experimental results
In this paragraph, the main experimental results concerning the jitter measurements will be
presented and commented.
INNER LAYERS
In fig. 84 the Total Jitter vs. Stave Length is plotted for both channels, with and
without vias, at a data rate of 1.25 Gb/s (the reference data rate for the inner layers). The
jitter increases with the number of FPCs (stave length), but no evident difference is present
between the two lines.
The real inner-layer stave length is 30 cm, and at this length we are well
under the jitter limit (red horizontal line). Moreover, a signal with a data rate of 1.25 Gb/s can
also be transmitted through an FPC line with a length of about 80 cm (middle layers).
Clearly this holds if we accept a jitter limit of 0.3 UI = 240 ps. If the limit is set to 0.2 UI =
160 ps, we can still transmit a signal at 1.25 Gb/s along a 30 cm long line. For the outer layers
(stave length = 150 cm) we are well above the jitter limit in any case.
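The UI-to-picosecond conversions above follow from the definition of the unit interval as one bit period, 1/(data rate):

```python
def ui_to_ps(ui, rate_gbps):
    """Convert a jitter expressed as a fraction of the unit interval to ps:
    one UI = 1 / (data rate), i.e. 1000 / rate_gbps picoseconds."""
    return ui * 1000.0 / rate_gbps

print(ui_to_ps(0.3, 1.25))  # 0.3 UI at 1.25 Gb/s = 240 ps
print(ui_to_ps(0.2, 1.25))  # 0.2 UI at 1.25 Gb/s = 160 ps
```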
Figure 84: Total Jitter vs. Stave Length at a data rate of 1.25 Gb/s for the channels with and without vias.
MIDDLE LAYERS
In fig. 85 the Total Jitter vs. Data Rate is plotted for both channels, with and without
vias, in the case with four FPCs. There are no differences between the two lines in
terms of jitter, which increases with the data rate, as expected.
Figure 85: Total Jitter (UI) vs. data rate in the case with 4 FPCs (middle layers).
The reference data rate for the middle layers is 400 Mb/s, and at this data rate we
are well under the jitter limit (red horizontal line). Moreover, with four FPCs we can
also transmit a signal with a data rate greater than 400 Mb/s (up to
approximately 1.3 Gb/s) without going beyond the jitter limit.
Clearly this holds for a jitter limit of 0.3 UI. If we move the limit to 0.2 UI, we can reach
a maximum data rate of about 1 Gb/s.
OUTER LAYERS
In fig. 86 the Total Jitter vs. Data Rate is plotted for both channels, with and without
vias, in the case with seven FPCs. There are no differences between the two
lines in terms of jitter, which increases with the data rate as expected. We could not go beyond a
data rate of 1.2 Gb/s because the eye was too closed and the oscilloscope could not measure the jitter.
The reference data rate for the outer layers is 400 Mb/s, and at this data rate we
are well under the jitter limit (red horizontal line). Moreover, with seven FPCs we can
also transmit a signal with a data rate greater than 400 Mb/s (up to
approximately 650 Mb/s) without going beyond the jitter limit.
Clearly this holds for a jitter limit of 0.3 UI. If we move the limit to 0.2 UI, we can reach
a maximum data rate of about 500 Mb/s.
Figure 86: Total Jitter (UI) vs. data rate in the case with 7 FPCs (outer layers).
DETAILS ON JITTER AT 622 AND 400 Mb/s
In fig. 87 the Total Jitter vs. Number of FPCs is plotted for a signal with a data rate of
622 Mb/s. For 7 FPCs we were able to add the points at 400 Mb/s, the reference
data rate for the outer layers. Again, there are no differences in jitter
performance between the two types of line. The detailed analysis for the three kinds of layers has
already been given in the previous paragraphs.
Figure 87: Total Jitter vs. Number of FPCs with a signal of 622 and 400 Mb/s in the cases of a line with and
without vias. On the x axis, 2 FPCs means approximately Inner Layers; 4 FPCs Middle Layers and 7 FPCs
Outer Layers.
JITTER: MEASUREMENTS SUMMARY
In fig. 88 the Total Jitter vs. Data Rate is shown for all numbers of FPCs and for both
lines. The jitter measured for the two lines is compatible within
the error bars. With 5, 6 and 7 FPCs there are some missing points at high data rates because, for
those rates, the eye was too closed to allow measuring the jitter.
Figure 88: Total Jitter vs. Data Rate for all numbers of FPCs, in the case of a line with (top) and without
(bottom) vias.
3.6 Conclusions
The eye diagram and jitter measurements have been presented in the previous paragraphs for all the
interesting cases regarding the features of the new ITS.
In the test of the FPCs we have seen that the lines with and without vias have the same behavior in
terms of eye opening, BER and jitter. In the laboratory, during the analysis of the eye diagrams, we
noticed a small edge effect on the differential lines of the FPC: the strips at the edge "see"
less ground (on the other side of the FPC) than the ones in the middle; this causes a
small difference in the performance of the line with vias, which is the one near the edge (the lines
without vias run in the middle of the FPC). However, this worsening in the performance of the line
closer to the edge is small (10-50 mV for the eye height, 10-20 ps for the jitter and practically
nothing for the eye width), so there are no problems in terms of eye opening and jitter for these lines.
We can transmit signals on the FPC lines at the reference data rates, and these data
rates can even be increased while still keeping the jitter below the commercial limit.
Chapter 4: Study of the chip analog and digital parameters
In this chapter the laboratory tests on the pALPIDEfs-v1 chip will be presented. This is the first
version of the ALPIDE family. We start with a summary of the chip structure and some details on
its integrated electronics that are important for the tests performed.
4.1 pALPIDEfs-v1 summary and pixel organization
The pALPIDEfs chip, as already said in chapter 2, is a particle detector based on Monolithic Active
Pixel Sensor technology. It is implemented in a 180 nm CMOS technology for CMOS Imaging
Sensors. A general block diagram of the pALPIDEfs is given in fig. 89.
Figure 89: General block diagram of the pALPIDEfs chip.
The chip measures 15.3 mm (Y) by 30 mm (X) and contains a matrix of 512 (Y) × 1024 (X)
sensitive pixels. The pixels are 28 × 28 µm². A periphery circuit region of 1 × 30 mm² is also
present. The pixel columns are numbered from 0 to 1023 going from left to right, as illustrated in the
figure. Pixel rows are numbered from 0 to 511 going from the top to the bottom of the sensitive
matrix.
There are four sub-matrices (sectors) of 512 × 256 pixels, each composed of almost
identical pixels. The four flavors of pixels (S1, S2, S2_DR (Diode Reset) and S4) differ in the size
of the charge collection diode and in the way the diode reset is implemented. The characteristics of
each sector will be discussed in detail in the following paragraphs. Each pixel features a low-power
front-end with binary (discriminated) output. The front-end is non-linear, with a shaping time of
around 2 µs. The assertion of a STROBE/TRG signal during the response interval following a
charge-release event in the pixel causes the latching of the discriminated output into an in-pixel
storage cell. The pixels feature a built-in test pulse injection circuit triggered by an external signal
(PULSE). A digital-only test pulse mode is also available, forcing the writing of a logic one into the
in-pixel memory cell.
The hits stored in the pixels are read out by means of Priority Encoder circuits (paragraph 2.6.4).
These provide the address of a pixel with a hit based on a topological priority. In consecutive read
cycles the selected pixels are reset and the addresses of subsequent pixels with hits are generated.
This continues until all hits at the inputs of the Priority Encoders are read out. The readout of the
sensitive matrix to the periphery is zero-suppressed and digital power is consumed only to transfer
hit information to the periphery.
The matrix is organized in 32 regions (512×32 pixels), each of them with 16 double columns being
read out by 16 priority encoder circuits (there is one priority encoder logic for each double column,
see fig. 50). The hits inside one region are read out sequentially in consecutive readout cycles. The
readout of regions is executed in parallel and it is driven by state machines in the region readout
blocks (see fig. 54). The region readout units also contain multi-event storage SRAM memories and
data compression functionality based on clustering by adjacency. The data from the 32 region
readout blocks are combined and transmitted off-chip by a top level Chip Readout unit. Hit data are
transmitted on a parallel 8-bit output data port using CMOS signaling.
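The region arithmetic above is easy to cross-check:

```python
# Matrix organization of the pALPIDEfs chip (numbers taken from the text)
ROWS, COLS = 512, 1024
N_REGIONS = 32

cols_per_region = COLS // N_REGIONS          # 32 columns per region
encoders_per_region = cols_per_region // 2   # 16 priority encoders (one per double column)
pixels_per_region = ROWS * cols_per_region   # 512 * 32 = 16384 pixels per region
```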
A top-level Control block provides full access to the control and status registers of the chip as well
as to the multi-event memories in the region readout blocks. The slow control interface implements
a JTAG communication protocol.
All the analog signals required by the front-ends are generated by a set of 11 on-chip DACs
(Digital-to-Analog Converters). The DACs require only one external low-noise voltage reference
(VREF), nominally at the potential of the analog supply. Analog monitoring pads (DACMONV,
DACMONI) are available to monitor the outputs of the internal DACs. The same pads can be used
to override one of the voltage DACs, one of the current DACs, or the internal reference
current used by all current DACs.
In table 10, we can see the interface signals of the pALPIDEfs chip indicated in fig. 89.
Table 10: Interface signals for pALPIDEfs chip
4.2 In-pixel structure
In fig. 90 the in-pixel structure is depicted. The final pixel size (pALPIDE-v3) will be
29.250 µm × 26.880 µm (width × height).
Figure 90: In-pixel block diagram (left) and electronics (right).
In the figure we can see the charge collection electrode, the pixel logic (internal registers and
memories), the AERD and the front-end electronics. The latter comprises the digital and
analogue circuitry that will be described in the next paragraphs, together with some details on the
charge collection electrode. This in-pixel architecture has a total of 150 transistors.
4.2.1 Analog front-end section
The pixel front-end circuit is shown in fig. 91. There are two front-end implementations, which
differ in the collecting diode (D1) reset circuitry:
- PMOS reset scheme (pixels in sectors S1, S2, S4): VRESET establishes the reset voltage of
the charge collecting node (pix_in) and IRESET defines the maximum reset current.
- Diode reset scheme (pixels in sector S2_DR, also called S3): the VAUX DAC establishes
the reset voltage of the charge collecting node (pix_in).
When a particle hit is received, the front-end increases the potential at the input of transistor M5
(pix_out), forcing it into conduction. If the current in M5 overcomes IDB, M5 drives
PIX_OUT_B low.
Figure 91: Schematic of the pixel analog front-end(s).
The charge threshold of the pixel is defined by ITHR, VCASN and IDB. The effective charge
threshold is increased by increasing ITHR or IDB, and decreased by increasing VCASN. The
active-low PIX_OUT_B signal is applied to the digital section of the pixel, where it is used to set the
hit status register.
It is possible to inject a test charge into the input node for test purposes. This is achieved by applying
a voltage pulse of controllable amplitude to the VPULSE pin of the Cinj capacitor, controlled
by the digital section of the pixel (see next paragraph).
4.2.2 Digital front-end section
The digital section of the pixel is illustrated in fig. 92. The corresponding signals are listed in table
11. The hit information is kept by a Set-Reset latch (STATE_INT). This bit can be masked, and the
result is the output to the Priority Encoder (STATE signal). The latch is normally set by the front-
end discriminated output PIX_OUT_B if STROBE_B is asserted simultaneously. It can also be set
programmatically by the DPULSE signal (digital pulse functionality). The latch is reset either by a
PIX_RESET pulse generated by the Priority Encoder during the readout, or by a global PRST
pulse applied to the chip PRST input pin. The latch is sensitive to the falling edge of PIX_RESET
and is level sensitive with respect to the PRST input.
Figure 92: Functional diagram of the pixel digital front-end.
The logic provides two programmable functions: masking and pulsing.
When the control bit MASK_EN is set high, the STATE output is forced to 0, effectively masking the
pixel output to the priority encoder. A low value provides normal functionality.
The PULSE_EN control bit and the PULSE_TYPE input (set globally) control the type of pulsing
(through the corresponding multiplexers). The testing functionalities are enabled by setting
PULSE_EN = 1 and disabled otherwise. Digital pulsing is selected by PULSE_TYPE = 0, analog
pulsing by PULSE_TYPE = 1. In both cases the global signal PULSE, applied to one of the chip
input pins, acts as the trigger of the test pulse.
The digital testing consists in forcing the hit latch (STATE_INT) to logic high, bypassing the pixel
front-end and the STROBE_B signal.
The analog testing consists in the injection of a test charge into the input node through Cinj (160 aF
nominal). The effective charge injected is (VPULSEHIGH − VPULSELOW) · Cinj. Notice that
the two edges of the pulse inject two charge pulses of opposite polarities. The
rising edge of PULSE corresponds to the discharge of the collection diode, in a manner equivalent
to the passage of a charged particle.
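With the nominal Cinj of 160 aF, the injected charge in electrons is easy to estimate (the 7 mV amplitude below is a hypothetical example, not a documented setting):

```python
E_CHARGE = 1.602e-19  # electron charge, C
C_INJ = 160e-18       # nominal injection capacitance Cinj, F

def injected_electrons(delta_v):
    """Q = (VPULSEHIGH - VPULSELOW) * Cinj, expressed in electrons."""
    return delta_v * C_INJ / E_CHARGE

print(injected_electrons(0.007))  # a (hypothetical) 7 mV pulse injects ~7 electrons
```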
There are two D-latches to store the PULSE_EN and MASK_EN configuration bits. Notice that
their values after power-on are undefined. These latches are set through the
PIXCNFG_COLSEL, PIXCNFG_ROWSEL, PIXCNFG_REGSEL and PIXCNFG_DATA lines, all
driven by the periphery control circuitry. The addressing of the pixels for configuration is based on
the simultaneous selection of a specific row and a specific column. PIXCNFG_REGSEL
determines which of the two latches is to be written, and PIXCNFG_DATA provides the value to be
stored in the selected latch. The simultaneous assertion of the PIXCNFG_COLSEL and
PIXCNFG_ROWSEL pixel inputs enables the selected latch. There is no direct way to read back
the values in the latches from the control interface.
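The latch, mask and pulse behavior described above can be summarized in a small behavioral sketch (signal names follow the text; the active-low conventions are assumed from the _B suffixes, and this is illustrative pseudocode, not the chip's RTL):

```python
class PixelDigital:
    """Behavioral sketch of the in-pixel digital front-end."""

    def __init__(self):
        self.state_int = 0  # the Set-Reset hit latch (STATE_INT)
        self.mask_en = 0    # MASK_EN configuration bit
        self.pulse_en = 0   # PULSE_EN configuration bit

    def sample(self, pix_out_b, strobe_b):
        # latch set by the active-low discriminator output
        # while STROBE_B is asserted (assumed active low)
        if pix_out_b == 0 and strobe_b == 0:
            self.state_int = 1

    def dpulse(self):
        # digital pulse functionality: force the latch high,
        # bypassing the front-end and STROBE_B
        if self.pulse_en:
            self.state_int = 1

    def reset(self):
        # PIX_RESET (from the Priority Encoder) or the global PRST clear the latch
        self.state_int = 0

    @property
    def state(self):
        # STATE output to the Priority Encoder, gated by the mask bit
        return 0 if self.mask_en else self.state_int
```

For example, after sample(0, 0) the pixel reports state = 1 until it is reset or masked.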
Table 11: Signals of the pixel cell.
4.3 Pixel indexing
The pixel matrix is read out by an array of 512 Priority Encoder blocks (one for each double
column). The pixels are arranged in double columns, and the region at the middle of each double
column is occupied by its Priority Encoder. The indexing of the pixels in each double column is
illustrated in fig. 93.
Figure 93: Indexing of pixels inside a double column provided by the Priority Encoders.
This particular ordering of the pixels is due to the fact that a hit is more probable in a pixel
away from the edge than in a pixel closer to the edge; hence priority is given to reading first a pixel
situated in a more central position in the matrix. This ordering of the pixels will be important for the
tests performed on the chip.
4.4 Pixel sector details
We have already mentioned that the entire matrix of 512 × 1024 pixels is divided into four different
sectors. The pixels in each sector are functionally identical, but they differ in some parameters
regarding the charge collection electrode and the reset mode of the pixel itself. In table 12, the
characteristics of each sector are listed. The meaning of the parameters in the table is shown in fig.
94. Concerning the reset mode, its implementation (DIODE/PMOS) can be seen in fig. 91.
Figure 94: Charge collection electrode.
Table 12: Charge collection electrode parameters for each sector of the pixel chip (ALPIDE-1).
4.5 Test boards
For the test of the pALPIDEfs-v1 chip, DAQ and carrier boards have been designed. In fig. 95 we
can see these two boards connected together. The carrier board holds the pALPIDEfs-v1 chip, which
is electrically connected to it with wire bonds, as can be seen in fig. 96. The details of the carrier board
features will be described during the various phases of the chip test.
Figure 95: Carrier (black) and DAQ (green) board connected together. The ALPIDE chip is under the
protection with the hexagonal holes (on the left).
Figure 96: pALPIDE-v1 chip connected to the carrier board.
Concerning the DAQ board, its various electronic parts can be seen in fig. 97. The
board is powered with a voltage of 5 V and has a USB connection that permits connecting it to a
PC.
Figure 97: DAQ board.
A software application with different test functions is installed on the PC; its functionalities will be
described in the next paragraph. The board also hosts the potentiometers to vary the analogue and
digital supplies (VDDA and VDDD, respectively), nominally set at 1.8 V, the connector for the
carrier board, the LEMO connector for the application of a back bias to the chip substrate, the
FPGA (the biggest black chip in the middle) with the connector that permits programming it, and the
RESET button. The DAQ board in the picture is also compatible with pALPIDEfs-v2.
4.6 Chip software
The DAQ board is connected via USB to a computer on which specific software is installed. In this
paragraph, some of its main test functions are described. Each test is preceded by powering
on and configuring the chip; after each of these two steps the current consumptions are measured
and printed on screen. After each test the chip is powered down (unless the program
crashes or is interrupted). Here is a list of the main test functions:
FIFO Test: a quick test to check the JTAG communication with the chip. It writes three
different bit patterns (0x0000, 0xffff and 0x5555) into each cell of the end-of-column
FIFOs, reads them back and checks the correctness of the read-back values. The test is
started by passing the parameter FIFO to the program:
./runTest FIFO
On-chip DAC Tests:
o READDAC test: loops over all chip DACs, measures their output once and prints
the measured values to screen:
./runTest READDACS
o SCANDACS test: measures the full DAC characteristics. For each DAC it loops over
the values from 0 to 255 and measures the output values, which are
written into one file per DAC. Each file contains two columns of numbers:
the first one has the values from 0 to 255 and the second one the corresponding
voltages or currents (depending on the DAC). In other words, these files permit
calculating the conversion factor from DAC units (first column) to volts or amperes
(second column) for each DAC. There is a file for VAUX, VPULSEH, …, IDB,
ITHR and so on (see fig. 91).
Digital Scan: generates a digital pulse in a number of pixels and reads the hits out. It is
started with two parameters:
./runTest SCANDIGITAL PAR1 PAR2
where PAR1 is the number of injections per pixel and PAR2 the number of mask stages. The
number of mask stages equals the number of pixels to analyze (injecting a digital pulse) in each
region of the pixel matrix. For example, if PAR1=50 and PAR2=160, the function injects a
digital pulse into 160 pixels per region (1% of the pixels), 50 times for each pixel. Clearly, if
we want to analyze all pixels in the matrix, PAR2 has to be set to 512*32 (the number of pixels
in one region).
The output data is written into a file DigitalScan.dat, each line has the format
Doublecol Address NHits
with Doublecol ranging from 0 to 511 (the number of double columns in the pixel matrix) and
Address from 0 to 1023 (the index of the pixel within the selected double column; it is not the row
number). NHits is the number of hits counted for the selected pixel.
Analogue Scan: works similarly to the digital scan, but instead of generating a digital
pulse after the discriminator, a programmable charge is injected into the preamplifier. The
scan therefore requires an additional parameter:
./runTest SCANANALOGUE PAR1 PAR2 PAR3
with PAR1 being the charge in DAC units (1 DAC = 7 e−), PAR2 the number of injections
per pixel and PAR3 the number of mask stages. The output file format is identical to that
of the digital scan (filename AnalogueScan.dat).
Threshold Scan: performs analogue injections, looping over the charge. For each charge
point and each pixel, 50 injections are performed. The scan is started with
./runTest THRESHOLD PAR1 PAR2 PAR3 [PAR4 PAR5]
with PAR1 the number of mask stages and the charge loop ranging from PAR2 to PAR3
(both in DAC units). Parameters 4 and 5 optionally allow the test to run at a
VCASN and ITHR setting different from the default.
The output file ThresholdScan.dat contains the raw data, i.e. the number of hits for each
charge point, in the format
Doublecol Address Charge NHits
Noise Occupancy Scan: a scan over a range of VCASN and ITHR that measures the number
of noise hits at each point. It can be started with:
./runTest NOISEOCCSCAN PAR1 … PAR5 [PAR6]
The parameters are, in order: the number of events per point, the VCASN range (low and
high), the ITHR range (low and high) and an optional mask file. In practice this test sends
PAR1 triggers to the chip and, for each trigger, counts the noise hits coming from the
pixels. Typically PAR1 = 10^6 events for a good test.
The lines of the output file have the format:
VCASN ITH HitSector0 HitSector1 HitSector2 HitSector3
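The per-line output format above can be summarized per working point, for example as in this minimal Python sketch (illustrative only; the function name and sample values are invented):

```python
# Minimal sketch: parse noise-occupancy output lines
# ("VCASN ITH HitSector0 ... HitSector3") and total the noise hits
# for each (VCASN, ITH) working point.
def parse_noise_occ(lines):
    """Return {(vcasn, ith): total noise hits over the four sectors}."""
    totals = {}
    for line in lines:
        fields = [int(f) for f in line.split()]
        if len(fields) != 6:
            continue  # skip malformed lines
        vcasn, ith, *sector_hits = fields
        totals[(vcasn, ith)] = sum(sector_hits)
    return totals

# Invented example lines, not real data:
sample = ["57 20 3 1 0 2", "57 30 1 0 0 0"]
totals = parse_noise_occ(sample)
# totals[(57, 20)] -> 6, totals[(57, 30)] -> 1
```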
4.7 Test objectives
The results presented and discussed in the following paragraphs refer to the chip pALPIDE-v1.
A general description of the tests performed on pALPIDE-v1 and of their objectives follows.
Threshold Scan and Noise Occupancy Scan at different supply voltages: the aim is to
perform these two kinds of scan while varying the digital and analogue voltages with the
potentiometers on the DAQ board, and to study the resulting thresholds, noise and
noise occupancy.
Threshold Scan and Noise Occupancy Scan at different voltages without the decoupling
capacitors on the digital and analogue voltages: removing these decoupling capacitors
may change the chip performance in terms of thresholds, noise and noise
occupancy. The objective is to test whether such changes occur.
Threshold Scan and Noise Occupancy Scan at different voltages without the filter on the
reference voltage (VREF) for the DACs: removing this filter (an inductance) may
change the chip performance in terms of thresholds, noise and noise
occupancy. The objective is to test whether such changes occur.
Noise injection: here the aim is to inject a sinusoidal signal into the analogue and digital
power, and into the back-bias input on the DAQ board, and to analyze the crosstalk
while varying the frequency of the noise signal from 1 kHz to 100 MHz.
In the next paragraphs a more detailed description of these tests will be given.
4.8 First test: Threshold and Noise Occupancy Scan at different voltages
The first test consists in the measurement of the thresholds, noise and noise occupancy for each sector
of the chip while varying the analogue and digital supply voltages.
4.8.1 Test procedure
The analogue (VDDA) and digital (VDDD) supply voltages are varied in the range [1.62, 1.98] V,
analyzing the following values: 1.62, 1.70, 1.80 (nominal), 1.90 and 1.98 V. These voltages are
set with the potentiometers on the DAQ board.
The entire test is divided in three steps:
1. Calculation of thresholds, noise and noise occupancy varying VDDD in the range mentioned
above maintaining VDDA equal to 1.8V.
2. Calculation of thresholds, noise and noise occupancy varying VDDA in the same range
maintaining VDDD to 1.8V.
3. Calculation of thresholds, noise and noise occupancy varying VDDD and VDDA in parallel
in the same range.
THRESHOLD SCAN PROCEDURE
For the Threshold Scan test we have chosen the following parameters:
Analysis of 30% of the pixels (4916 pixels per region);
Charge loop from 20 to 49 DAC.
Once the test is finished, the software creates a file ThresholdScan.dat as described in paragraph 4.6.
This file is read and analyzed by a ROOT macro to calculate the threshold and noise for each sector.
The output file contains the hits for every analyzed pixel as a function of the injected charge. For
example, for one pixel, ThresholdScan.dat contains the numbers listed in table 13.
The macro reads the charge and hit columns and creates a plot with the charge on the x axis and the
number of hits on the y axis. This yields an S-curve, which is fitted with a Gaussian error
function (erf) with two free parameters: threshold and noise. The fit is shown in fig. 98.
Double col  Address  Charge (DAC)  NHits
0  0  20  0
0  0  21  0
…  …  …  …
0  0  29  0
0  0  30  3
0  0  31  10
0  0  32  20
0  0  33  42
0  0  34  50
0  0  35  50
…  …  …  …
0  0  48  50
0  0  49  50
Table 13: Example of the output in ThresholdScan.dat for one pixel
The threshold parameter used to initialize the fit function is calculated as the semi-sum of the last
charge value with 0 hits and the first one with 50 hits. In this example:

\[ \mathrm{Thres}_{\mathrm{init}} = \frac{29+34}{2}\ [\mathrm{DAC}] = \frac{29+34}{2}\cdot 7\ [e^-] \tag{19} \]

Note that the conversion factor from DAC units to electrons is 1 DAC = 7 e−. The noise parameter is
initialized to 8 electrons. The fit is then performed, and the threshold and noise of the pixel (0-0 in the
example) are read directly from the fit.
Figure 98: Hits vs. Charge plot fitted with the error function (red curve) for the pixel noise and threshold
calculation.
The error function used for the calculation of noise and threshold is:

\[ \mathrm{Erf}(x) = \frac{50}{2}\cdot\frac{2}{\sqrt{\pi}}\int_{0}^{a(x)} e^{-t^{2}}\,dt + \frac{50}{2} \tag{20} \]

\[ a(x) = \frac{x-\mathrm{thres}}{\sqrt{2}\cdot\mathrm{noise}} \tag{21} \]

where 50 is the maximum number of hits for one pixel at a given injected charge.
In practice, the noise is the width (sigma) of the Gaussian error function and the threshold is its mean value.
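The S-curve fit described above can be sketched as follows. This is a minimal Python/scipy illustration, not the thesis analysis (which uses a ROOT macro); the synthetic pixel data and variable names are assumptions made for the example.

```python
# Sketch of the S-curve fit of eqs. (20)-(21) on a synthetic pixel.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

N_INJ = 50  # injections per charge point, as in the text

def s_curve(x, thres, noise):
    """Eqs. (20)-(21): (N/2)*erf((x - thres)/(sqrt(2)*noise)) + N/2."""
    return 0.5 * N_INJ * (1.0 + erf((x - thres) / (np.sqrt(2) * noise)))

# Synthetic pixel for illustration: threshold 32 DAC, noise 1.2 DAC
charge = np.arange(20, 50)
hits = s_curve(charge, 32.0, 1.2)

# Initialize the threshold with the semi-sum of eq. (19) and the noise
# with 8 e- = 8/7 DAC, then fit
popt, _ = curve_fit(s_curve, charge, hits, p0=[(29 + 34) / 2, 8 / 7])
thres_fit, noise_fit = popt
```

With 1 DAC = 7 e−, a fitted threshold of 32 DAC would correspond to 224 electrons.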
We can also build distributions of the noise and threshold values (expressed in electrons) calculated
for all the pixels in each sector. These distributions are approximately Gaussian. We define the
threshold and noise of a sector as the means of the corresponding distributions, and take their errors
as the widths of the distributions; this choice gives a graphical idea of the noise and threshold
dispersion within each sector.
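The per-sector summary just described can be sketched as below (a Python illustration; the numbers are invented, not measured data):

```python
# Sketch: sector threshold (or noise) = mean of the per-pixel distribution,
# sector "error" = width (standard deviation) of that distribution.
import numpy as np

def sector_summary(values_e):
    """values_e: per-pixel thresholds (or noise) of one sector, in electrons."""
    arr = np.asarray(values_e, dtype=float)
    return arr.mean(), arr.std()

# Illustrative numbers only:
mean, width = sector_summary([210.0, 224.0, 238.0])
# mean = 224.0 e-, width ≈ 11.4 e-
```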
Since the Threshold Scan is performed at different supply voltages, we have to take into
account that the conversion factor DAC/e− changes with the supply voltage; this factor therefore has to
be recalculated for each voltage. To do this, the SCANDACS file for
VPULSEL is used. The Threshold Scan procedure varies VPULSEL with VPULSEH fixed at 170, so as
to inject a charge in the interval 20–49 DAC. The effective injected charge is (170 −
VPULSEL) · C_inj. Taking the SCANDACS file for VPULSEL, the macro plots the output
voltage vs. the DAC value, as shown in fig. 99, and performs a linear fit on the linear part
of the curve. The slope of the line gives the conversion from DAC units to volts; given the
fact that 1 mV = 1 e−, we also obtain the conversion factor from DAC units to electrons.
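This conversion-factor extraction can be sketched as follows, assuming an ideal VPULSEL characteristic of 7 mV per DAC step (the SCANDACS file parsing is omitted; function name and data are illustrative):

```python
# Sketch: fit the linear part of the VPULSEL characteristic (output voltage
# vs. DAC value) and convert the slope to electrons using 1 mV = 1 e-.
import numpy as np

def dac_to_electrons(dac_vals, volt_vals):
    """Return the conversion factor in e-/DAC from a linear fit."""
    slope, _ = np.polyfit(dac_vals, volt_vals, 1)  # volts per DAC unit
    return slope * 1000.0  # 1 mV = 1 e-  ->  e- per DAC

# Ideal characteristic for illustration: 7 mV per DAC step
dac = np.arange(0, 256)
volt = 0.007 * dac
factor = dac_to_electrons(dac, volt)
# factor ≈ 7.0 e-/DAC
```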
Figure 99: Volt vs. DAC for the SCANDAC file of VPULSEL. The linear fit is the black line.
A graph similar to that in fig. 99 can be created for each of the eleven DACs. The slopes of the various
lines change with the supply voltage. With VDDD=VDDA=1.8 V (nominal), the
DACs are set to the values shown in table 14; the second line of table 14 is obtained from the
corresponding SCANDACS files.
At a voltage different from 1.8 V, the aim is to set all DACs so as to obtain the same
current/voltage values as in the case VDDA=VDDD=1.8 V.
VAUX VRESET VCASN VCASP VPULSEL VPULSEH IRESET IBIAS IDB ITHR
117 117 57 86 50 255 50 64 64 51
1.20V 1.23V 0.41V 0.61V 0.73V 1.74V 4.98 pA 19.3nA 9.92nA 0.49nA
Table 14: Nominal DAC values at 1.8 V. The first line contains the values expressed in DAC units, the second the corresponding voltages or currents.
We want to see how the threshold and noise analysis changes with and without the correct DAC
settings. First, we perform the Threshold Scan at the different voltages without adjusting the DACs
for each voltage (i.e. ignoring the change of the DAC/volt conversion factor); then we repeat the
same analysis with the DACs set correctly (following the change of the conversion factor). The
resulting Threshold vs. Voltage and Noise vs. Voltage graphs are shown in the following.
NOISE OCCUPANCY SCAN PROCEDURE
For the noise occupancy scan the following parameters have been chosen:
one million events;
VCASN fixed at 57 DAC (this holds when the DACs are not adjusted; otherwise VCASN is set
independently for each voltage, so as to obtain the same voltage as in the case
VDDD=VDDA=1.8 V);
ITHR ranging in the interval [20, 50] DAC in steps of ten.
At the end of the scan, a file NoiseOccupancyScan.dat is created with the line format described in
paragraph 4.6. The so-called noise occupancy is calculated as follows:

\[ \mathrm{noise}_{\mathrm{occ}} = \frac{ANH}{(1024\cdot 512)\cdot 10^{6}} \tag{22} \]

where ANH (All Noise Hits) is the total number of noise hits in the whole pixel matrix,
1024·512 is the total number of pixels in the chip matrix and 10^6 is the number of
events. The noise occupancy is therefore a noise (or fake-hit) rate per pixel and per event.
The absolute error on the noise occupancy value is taken as a Poisson error:

\[ \mathrm{Error} = \frac{\sqrt{ANH}}{(1024\cdot 512)\cdot 10^{6}} \tag{23} \]
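Eqs. (22)-(23) can be sketched directly as below (the hit count in the example is invented):

```python
# Sketch of eqs. (22)-(23): noise occupancy per pixel per event and its
# Poisson error, for the full 1024x512 matrix and 10^6 events.
import math

N_PIXELS = 1024 * 512
N_EVENTS = 1_000_000

def noise_occupancy(all_noise_hits):
    occ = all_noise_hits / (N_PIXELS * N_EVENTS)
    err = math.sqrt(all_noise_hits) / (N_PIXELS * N_EVENTS)
    return occ, err

# Example: 100 noise hits in 10^6 events
occ, err = noise_occupancy(100)
# occ ≈ 1.9e-10 per pixel per event
```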
Finally, a graph of Noise Occupancy vs. ITHR is created.
As already said, the DAC settings change with the various (digital and analogue) supply voltages;
therefore, also in this case, we perform the scan with and without the correct settings for all the
DACs, in order to compare the resulting graphs at the different voltages.
4.8.2 Experimental results on VDDD variation
In this paragraph the threshold and noise results are presented and discussed.
THRESHOLDS VARYING VDDD (VDDA = 1.8 V)
In fig. 100 the threshold as a function of the digital voltage is shown for all the sectors, with and
without the correct DAC settings. The threshold trend is expected to be flat in both cases, and the
results confirm this for all sectors: the VDDD variation doesn't influence the DAC
settings. The thresholds of all sectors are also compatible within the error bars at a
given digital voltage.
Figure 100: Thresholds vs. VDDD without (top) and with (bottom) the right DAC settings.
NOISE VARYING VDDD (VDDA = 1.8 V)
In fig. 101 the noise as a function of the digital voltage is shown for all the sectors, with and without
the correct DAC settings. The noise trend is expected to be flat in both cases, and the graphs
confirm this for all sectors, which again means that the VDDD variation doesn't influence the DAC
settings. The noise values of sectors 1, 2 and 4 are compatible within the error bars at a given
digital voltage, while sector 3 is less noisy than the other sectors in both graphs.
Figure 101: Noise vs. VDDD without (top) and with (bottom) the right DAC settings.
NOISE OCCUPANCY VARYING VDDD (VDDA = 1.8V)
In fig. 102 the noise occupancy as a function of ITHR (at different VDDD) is shown for all the
sectors, with and without the correct DAC settings. The noise occupancy decreases with increasing
ITHR: a higher ITHR raises the pixel thresholds, which leads to a lower noise
occupancy. There are no big differences between the two graphs, which again means that the
VDDD changes don't influence the DAC settings; indeed, in the top graph 57 DAC always
corresponds to 0.41 V even though the correct DAC settings were not used. The noise
occupancy changes little with the digital supply voltage; only at VDDD=1.98 V with the
correct DAC settings is it slightly larger than at the other digital
voltages. Finally, it is possible to notice that the noise occupancy always stays below ~10^−7.
Figure 102: Noise Occupancy vs. ITHR at different digital voltages without (top) and with (bottom) the
right DAC settings.
4.8.3 Experimental results on VDDA variation
THRESHOLDS VARYING VDDA (VDDD = 1.8 V)
In fig. 103 the threshold as a function of VDDA is shown for all the sectors, with and without the
correct DAC settings. The threshold trend is expected to be flat also under the variation of
VDDA, but the trend is flatter with the right DAC settings: the VDDA
variation does influence the DAC settings. The thresholds of all sectors are
compatible within the error bars at a given analogue voltage.
Figure 103: Thresholds vs. VDDA without (top) and with (bottom) the right DAC settings.
NOISE VARYING VDDA (VDDD = 1.8 V)
In fig. 104 the noise as a function of VDDA is shown for all the sectors, with and without the correct
DAC settings. The noise trend is again expected to be flat in both cases and for all sectors. Without
the correct DAC settings the trend is indeed flat for all sectors; with the
correct DAC settings, instead, the noise is slightly higher and decreases with increasing VDDA for
sectors 1 and 2. This is due to the VPULSEH DAC setting: VPULSEH saturates at 1.57 V
instead of reaching the expected value of 1.74 V, a phenomenon that occurs only
for VDDA < 1.8 V.
The noise values of sectors 1, 2 and 4 are compatible within the error bars
at a given analogue voltage, while sector 3 is less noisy than the other sectors in both graphs.
Figure 104: Noise vs. VDDA without (top) and with (bottom) the right DAC settings.
NOISE OCCUPANCY VARYING VDDA (VDDD = 1.8 V)
In fig. 105 the noise occupancy as a function of ITHR (at different VDDA) is shown for all the
sectors, with and without the correct DAC settings. There are big differences between the noise
occupancy values in the two graphs, meaning that VDDA strongly influences the DAC
settings. Indeed, in the top graph 57 DAC does not always correspond to 0.41 V
(the reference value for VDDA=VDDD=1.8 V), and the noise occupancy differs among the
various VDDA at a given ITHR: not only ITHR varies, but also
VCASN (which is not always set to 0.41 V), and with increasing VCASN the pixel thresholds decrease.
In the bottom graph, with the correct DAC settings, VCASN is adjusted
for each voltage so that the conversion always yields 0.41 V, and the noise occupancy at the different
VDDA is almost the same.
Finally, it is possible to notice that the noise occupancy always stays below ~10^−7.
Figure 105: Noise Occupancy vs. ITHR at different analogue voltages without (top) and with (bottom) the
right DAC settings.
4.8.4 Experimental results on VDDD&VDDA variation
THRESHOLDS VARYING VDDD&VDDA
In fig. 106 the threshold as a function of VDDD&VDDA is shown for all the sectors, with and
without the correct DAC settings. The same conclusions drawn for the variation of
VDDA hold here: since VDDD changes don't influence the DAC
settings, the combined VDDD&VDDA variation produces the same effects on the threshold values
already seen when varying VDDA alone.
Figure 106: Thresholds vs. VDDD&VDDA without (top) and with (bottom) the right DAC settings.
NOISE VARYING VDDD&VDDA
In fig. 107 the noise as a function of VDDD&VDDA is shown for all the sectors, with and without
the correct DAC settings. For the reasons already given, the same conclusions drawn for the VDDA
variation hold: the VDDD&VDDA case produces the same effects on the noise values already seen
when varying VDDA alone.
Figure 107: Noise vs. VDDD&VDDA without (top) and with (bottom) the right DAC settings.
NOISE OCCUPANCY VARYING VDDD&VDDA
In fig. 108 the noise occupancy as a function of ITHR (at different VDDD&VDDA) is shown for all
the sectors, with and without the correct DAC settings. For the reasons already given, the same
conclusions drawn for the VDDA variation hold: the
VDDD&VDDA case produces the same effects on the noise occupancy values already seen when
varying VDDA alone.
Figure 108: Noise Occupancy vs. ITHR at different analogue and digital voltages without (top) and with
(bottom) the right DAC settings.
4.8.5 General conclusions for all cases
In the three cases analyzed we have seen that:
The variation of VDDD doesn't influence the DAC settings.
The variation of VDDA strongly influences the DAC settings.
The variation of VDDD&VDDA produces the same effects as the variation of
VDDA alone.
This detailed analysis has allowed us to understand several aspects of the chip, in particular
the DAC configurations and the effects of varying the two supply
voltages.
4.9 Second test: Threshold and Noise Occupancy Scan without the
decoupling capacitors on DVDD
The second test on the chip pALPIDE-v1 consists in the analysis of the thresholds, noise and noise
occupancy varying only VDDD, after removing the decoupling capacitors on the digital voltage
(DVDD). The objective of this test is to assess the effect of these capacitors on the threshold, noise and
noise occupancy values.
4.9.1 Test procedure
First of all, the decoupling capacitors have to be removed from the carrier board of the chip. These
are the capacitors C6, C8, C10, C12, C14, C17, C21, C26, C27, C29, C30, C32, C33, C35, C36 and
C38, each with a capacitance of 100 nF, as shown in fig. 109. Their function is to filter
any AC component on the digital supply voltage (DVDD).
Figure 109: Decoupling capacitors on DVDD (carrier board schematic).
After removing the capacitors we perform a Threshold and Noise Occupancy Scan varying only
VDDD (since the removed capacitors are only on the digital voltage), and check for
differences with respect to the case with all the capacitors mounted. All
measurements are done with the correct DAC settings for each digital voltage.
The Threshold Scan is performed injecting a charge in the interval [10, 50] DAC, to avoid biasing
the noise calculation with the S-curve.
4.9.2 Experimental results
THRESHOLDS VARYING VDDD (VDDA = 1.8 V)
In fig. 110 the threshold as a function of VDDD, with the correct DAC settings for all the sectors and
without the decoupling capacitors on DVDD, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the digital voltage. Comparing the two
graphs, no differences are seen in the threshold trend or values. This means that the decoupling
capacitors on DVDD don't influence the thresholds of the pixels.
Figure 110: Threshold vs. VDDD with the correct DAC settings without the decoupling capacitors on
DVDD (left). On the right, the graph taken from the bottom of fig. 100 to make a comparison in the analysis
of the effects of the removal of the decoupling capacitors on DVDD.
NOISE VARYING VDDD (VDDA = 1.8 V)
In fig. 111 the noise as a function of VDDD, with the correct DAC settings for all the sectors and
without the decoupling capacitors on DVDD, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the digital voltage. No differences are
seen in the noise trend or values between the two graphs. This means that the decoupling capacitors
on DVDD don't influence the noise of the pixels.
Figure 111: Noise vs. VDDD with the correct DAC settings, without the decoupling capacitors on DVDD
(left). On the right, the graph taken from the bottom of fig. 101 to make a comparison in the analysis of the
effects of the removal of the decoupling capacitors on DVDD.
NOISE OCCUPANCY VARYING VDDD (VDDA = 1.8 V)
In fig. 112 the noise occupancy as a function of ITHR (at different VDDD), with the correct DAC
settings for all the sectors and without the decoupling capacitors on DVDD, is shown in the left
graph. The right graph shows, for comparison, the data taken with the decoupling capacitors on the
digital voltage. No differences are seen in the noise occupancy trend or values between the two
graphs. This means that the decoupling capacitors on DVDD don't influence the noise occupancy of
the pixels.
Figure 112: Noise Occupancy vs. ITHR at different digital voltages with the correct DAC settings and
without the decoupling capacitors on DVDD (left). On the right, the graph taken from the bottom of fig. 102
to make a comparison in the analysis of the effects of the removal of the decoupling capacitors on DVDD.
4.9.3 Second test conclusions
In this test we have seen that the removal of the decoupling capacitors on DVDD doesn't produce
any effect on the threshold, noise or noise occupancy values. This is important because it will be
possible to avoid mounting them on the board and, later, on the modules with the FPCs.
4.10 Third test: Threshold and Noise Occupancy Scan without the
decoupling capacitors on AVDD
The third test on the chip pALPIDE-v1 consists in the analysis of the thresholds, noise and noise
occupancy varying only VDDA, this time after removing the decoupling capacitors on the analogue
voltage (AVDD). The objective of this test is to check for differences in the
threshold, noise and noise occupancy values due to the removal of these capacitors.
4.10.1 Test procedure
First of all, the decoupling capacitors have to be removed from the carrier board of the chip. These
are the capacitors C16, C19, C24, C28, C31, C34, C37 and C39, each with a capacitance of 100 nF,
as shown in fig. 113. Their function is to remove any AC component on the
analogue voltage (AVDD).
Figure 113: Decoupling capacitors on AVDD (carrier board schematic).
Without these capacitors on the carrier board, we perform a Threshold and Noise Occupancy
Scan varying only VDDA (since the removed capacitors are only on the analogue voltage), and
check for differences with respect to the case with all the capacitors mounted. All measurements
are done with the correct DAC settings for each analogue voltage.
The Threshold Scan is performed injecting a charge in the interval [10, 50] DAC, to avoid biasing
the noise calculation with the S-curve.
4.10.2 Experimental results
THRESHOLDS VARYING VDDA (VDDD = 1.8 V)
In fig. 114 the threshold as a function of VDDA, with the correct DAC settings for all the sectors and
without the decoupling capacitors on AVDD, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the analogue
voltage. No big difference is seen in the threshold trend or values between the two graphs;
only at 1.98 V is there a visible decrease of the threshold for sectors 1 and 2 with respect to the
values in the right graph. In any case, given the large error bars, it is possible to conclude that the
decoupling capacitors on AVDD don't influence the thresholds of the pixels.
Figure 114: Threshold vs. VDDA with the correct DAC settings, without the decoupling capacitors on
AVDD (left). On the right, the graph taken from the bottom of fig. 103 to make a comparison in the analysis
of the effects of the removal of the decoupling capacitors on AVDD.
NOISE VARYING VDDA (VDDD = 1.8 V)
In fig. 115 the noise as a function of VDDA, with the correct DAC settings for all the sectors and
without the decoupling capacitors on AVDD, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the analogue
voltage. No differences are seen in the noise trend or values between the two graphs. This
means that the decoupling capacitors on AVDD don't influence the noise of the pixels.
Figure 115: Noise vs. VDDA with the correct DAC settings, without the decoupling capacitors on AVDD
(left). On the right, the graph taken from the bottom of fig. 104 to make a comparison in the analysis of the
effects of the removal of the decoupling capacitors on AVDD.
NOISE OCCUPANCY VARYING VDDA (VDDD = 1.8 V)
In fig. 116 the noise occupancy as a function of ITHR (at different VDDA), with the correct DAC
settings for all the sectors and without the decoupling capacitors on AVDD, is shown in the left
graph. The right graph shows, for comparison, the data taken with the decoupling capacitors on the
analogue voltage. No differences are seen in the noise occupancy trend or values between the two
graphs. This means that the decoupling capacitors on AVDD don't influence the noise occupancy of
the pixels.
Figure 116: Noise Occupancy vs. ITHR at different analogue voltages with the right DAC settings and
without the decoupling capacitors on AVDD (left). On the right, the graph taken from the bottom of fig. 105
to make a comparison in the analysis of the effects of the removal of the decoupling capacitors on AVDD.
4.10.3 Third test conclusions
In this test we have seen that the removal of the decoupling capacitors on AVDD doesn't produce
any effect on the threshold, noise or noise occupancy values, within the error bars (especially
for the threshold values at VDDA=1.98 V). This is important because it is possible to avoid
mounting them, reducing the power consumption of the module.
4.11 Fourth test: Threshold and Noise Occupancy Scan without the
filter on VREF
The fourth test on the chip pALPIDE-v1 consists in the analysis of the thresholds, noise and noise
occupancy varying only VDDA, after removing the filter on VREF. The objective of this test is to
check for differences in the threshold, noise and noise occupancy values due to the
removal of this filter. As said in paragraph 4.1, VREF is an external low-noise voltage
reference for the DACs, nominally set at the potential of the analogue supply.
4.11.1 Test procedure
First of all, the filter on VREF and two decoupling capacitors have to be removed from the carrier
board of the chip. The decoupling capacitors C40 and C41, together with the inductance L1 and the
potentiometer P1, have been removed, making a direct connection to AVDD as shown in fig. 117.
The inductance L1 acts as the filter on VREF.
Figure 117: Removal of the decoupling capacitors C40-C41, the inductance L1 and the potentiometer P1 on
VREF (left, carrier board schematic). The final result is on the right.
Without these capacitors, inductance and potentiometer on the carrier board, we perform a
Threshold and Noise Occupancy Scan varying only VDDA (since the removed components are
only on the analogue voltage), and check for differences with respect to the case
with the filter on VREF. As always, the measurements are done with the correct DAC
settings for each analogue voltage.
The Threshold Scan is performed injecting a charge in the interval [10, 50] DAC, to avoid biasing
the noise calculation with the S-curve.
4.11.2 Experimental results
THRESHOLDS VARYING VDDA (VDDD = 1.8 V)
In fig. 118 the threshold as a function of VDDA, with the correct DAC settings for all the sectors and
without the filter on VREF, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the analogue voltage and the filter on
VREF. Comparing the two graphs, few differences are seen; only at 1.98 V is there a visible
decrease of the threshold for sectors 1 and 2 without the filter on VREF. This effect, however, had
already been seen in the third test (fig. 114) and is therefore due to the absence of the decoupling
capacitors on AVDD. It is possible to conclude that the removal of the filter on VREF doesn't
change the threshold values.
Figure 118: Threshold vs. VDDA with the correct DAC settings, without the filter on VREF (left). On the
right, the graph taken from the bottom of fig. 103 to make a comparison in the analysis of the effects of the
removal of the filter on VREF.
NOISE VARYING VDDA (VDDD = 1.8 V)
In fig. 119 the noise as a function of VDDA, with the correct DAC settings for all the sectors and
without the filter on VREF, is shown in the left graph. The right graph shows, for
comparison, the data taken with the decoupling capacitors on the analogue voltage and the filter on
VREF. No difference is seen between the two graphs. It is possible to conclude that the
removal of the filter on VREF doesn't change the noise values.
Figure 119: Noise vs. VDDA with the correct DAC settings, without the filter on VREF (left). On the right,
the graph taken from the bottom of fig. 104 to make a comparison in the analysis of the effects of the
removal of the filter on VREF.
NOISE OCCUPANCY VARYING VDDA (VDDD = 1.8 V)
In fig. 120 the noise occupancy as a function of ITHR (at different VDDA), with the correct DAC
settings for all the sectors and without the filter on VREF, is shown in the left graph. The right graph
shows, for comparison, the data taken with the decoupling capacitors on the analogue
voltage and the filter on VREF. No difference is seen between the two graphs, and the noise
occupancy remains below ~10^−7. It is possible to conclude that the removal of the
filter on VREF doesn't change the noise occupancy values.
Figure 120: Noise Occupancy vs. ITHR at different analogue voltages, with the correct DAC settings and
without the filter on VREF (left). On the right, the graph taken from the bottom of fig. 105 to make a
comparison in the analysis of the effects of the removal of the filter on VREF.
4.11.3 Fourth test conclusions
We have seen that the filter on VREF has no effect on the threshold, noise and noise occupancy
trends and values. It can therefore be omitted, reducing the power consumption and simplifying the
carrier board and the FPC.
Chapter 5: Study of the chip response as a function of
noise injection
5.1 Noise injection into pALPIDEfs-v1
In this paragraph, the studies on the stability of the power supply voltage by means of noise
injection into the chip power planes are presented and discussed. First the objectives of the test
and its experimental setup are explained, then the results are shown and commented.
5.1.1 Test objective
The objective of this test is to evaluate the stability of the chip response in the presence of
fluctuations of the power voltage levels. This is done by means of the injection of a sinusoidal
signal (noise injection) into the chip power planes. We want to inject this signal at different
frequencies into the digital and analogue voltages, and into the PWELL (≡ back-bias) input on the
DAQ board (see fig. 97, where this input is called “reverse bias”).
5.1.2 Experimental setup and test procedure
In fig. 121, the experimental setup adopted for this test is depicted. A sinusoidal signal with a
frequency in the range 100 Hz – 100 MHz is generated with a pattern generator (Tektronix AFG
3252). The signal is injected into the analogue (AVDD) and digital (DVDD) voltage, and into the
PWELL (back-bias) input on the DAQ board. The digital and analogue power voltages are both set
to 1.8 V (nominal).
Figure 121: Experimental setup for the noise injection. The dotted lines indicate that the injection is
performed separately on AVDD, DVDD and PWELL.
In fig. 122 we can see the three different injection points. As shown, a small capacitor is
connected in series to the LEMO connector to block the DC component of the signal coming from
the pattern generator. This capacitor has been used in the first two cases of fig. 122 (AVDD
and DVDD).
Figure 122: Enlargement of the injection points.
For the noise injection into PWELL a direct (normal) LEMO cable from the pattern generator to the
DAQ board has been used: since we have removed the resistances R15–R22 on the PWELL ground
(see fig. 123), a series capacitor would interrupt the current path and the chip could not work
correctly. The LEMO ground on the DAQ board has been used as PWELL ground. Without noise
injection, a 0 Ω “cap” (resistance) must be placed in the PWELL LEMO on the DAQ board.
Figure 123: Resistances on the PWELL ground (AGND) (carrier board schematic).
The signal is injected separately at each of the three input points and at each frequency. A
Threshold Scan is then performed and the noise is calculated as already explained for the previous
tests. This noise is compared to the noise calculated without injection at
VDDD = VDDA = 1.8 V. From these two noise values we calculate the “excess noise” as follows:
Excess noise = Noise(with injection) − Noise(without injection)    (24)
We also calculate the ratio between these two noise values:
Noise Ratio = Noise(with injection) / Noise(without injection)    (25)
The final aim is to check whether injecting a signal at a particular frequency produces a resonance
in the noise value.
The readout speed of the priority encoder is 20 MHz, so a resonance at this particular frequency
would be undesirable.
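As a compact illustration, the two figures of merit above can be computed per sector. This is only a sketch of equations (24) and (25); the noise values used are hypothetical placeholders, not measured data:

```python
# Sketch of the excess-noise and noise-ratio computation (eqs. 24 and 25).
# The numbers below are hypothetical placeholders, not measured values.

def excess_noise(noise_with_inj, noise_without_inj):
    """Eq. (24): difference of the noise with and without injection (electrons)."""
    return noise_with_inj - noise_without_inj

def noise_ratio(noise_with_inj, noise_without_inj):
    """Eq. (25): ratio of the noise with and without injection (dimensionless)."""
    return noise_with_inj / noise_without_inj

# One value per chip sector, at a given injection frequency (hypothetical).
noise_inj = [5.2, 5.0, 6.8, 7.1]   # e-, with injection
noise_ref = [4.9, 4.8, 5.1, 5.0]   # e-, without injection (reference)

for sector, (n_i, n_0) in enumerate(zip(noise_inj, noise_ref), start=1):
    print(f"sector {sector}: excess = {excess_noise(n_i, n_0):+.1f} e-, "
          f"ratio = {noise_ratio(n_i, n_0):.2f}")
```

A resonance would show up as a frequency at which the excess noise spikes and the ratio moves well above one for one or more sectors.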
5.1.3 Experimental results: injection in the power planes (AVDD and DVDD)
In this paragraph the experimental results concerning the noise injection in the analogue and digital
(AVDD and DVDD, respectively) power supply planes are presented.
INJECTION IN DVDD
In fig. 124 we can see the excess noise as a function of frequency for the noise injection
in the digital power supply plane. A sinusoidal signal with an amplitude of about 200 mV and a
frequency in the range 1 kHz – 100 MHz has been injected. We don’t see any resonance in the
graph (the excess noise is always compatible with zero); this means that DVDD is not sensitive to
the injected external noise. We can notice that, especially at 1 MHz, the excess noise is slightly
negative: it is computed from statistical estimates, so small fluctuations below zero are acceptable.
Figure 124: Excess noise vs. Frequency relative to the noise injection in the digital power supply plane.
Figure 125: Noise Ratio vs. Frequency relative to the noise injection in the digital power supply plane.
In fig. 125 it is possible to see the Noise Ratio vs. frequency. The same conclusions as for the
previous graph hold: apart from small fluctuations, the Noise Ratio is always compatible with one.
INJECTION IN AVDD
In fig. 126 we can see the excess noise as a function of frequency for the noise injection
in the analog power supply plane. A sinusoidal signal with an amplitude of about 200 mV and a
frequency in the range 1 kHz – 100 MHz has been injected. In the frequency range
~100 – 350 kHz there is a very small peak (< 1 e−). This means that the analog power plane is
not very sensitive to the injected external noise (the small peak is practically negligible). For
certain frequencies the excess noise is slightly negative, for the same reasons already explained in
the previous paragraph.
In fig. 127 it is possible to see the Noise Ratio vs. frequency. The noise ratio is about 1.13 in the
worst case (sector 3); thus, the same conclusions as for the previous graph hold.
Figure 126: Excess noise vs. Frequency relative to the noise injection in the analog power supply plane.
Figure 127: Noise Ratio vs. Frequency relative to the noise injection in the analog power supply plane.
5.1.4 Experimental results: injection in the PWELL (back-bias)
For PWELL we have performed a more detailed analysis, which revealed some interesting aspects.
We start with some details on the signal injected in the back-bias, and then present the results of
the noise injection.
ANTENNA EFFECT
During the analysis we have found a background signal superimposed on the noise injection. In fig. 128
this signal is shown; the image has been obtained with the oscilloscope using a high-impedance
probe (10 MΩ). The signal is frequency-modulated and is probably due to FM radio broadcasts. Its
amplitude is not constant, reaching a maximum of 70 mV, and its frequency is about 100 MHz.
Figure 128: Background signal at about 100 MHz (maximum amplitude 70 mV) due to the FM radio.
This signal represents a background for our measurements; we have therefore performed a Threshold
Scan with a LEMO cable connected to the PWELL input on the DAQ board. The cable has a 0 Ω cap at
one end, in order to study the antenna effect (see fig. 129). Calculating the noise for each
sector of the pixel matrix in this configuration, we have seen a noise increase lower than 1 e−.
Nevertheless these noise values (one per sector) represent a better reference for the calculation of
the excess noise and the noise ratio.
Clearly, this background cannot be removed without a Faraday cage; no Faraday cage has been used
for the noise measurements that follow.
Figure 129: Setup for the study of the antenna effect.
SIGNAL ATTENUATION
We have measured the real injected signal amplitude seen by the chip with a high-impedance probe
(10 MΩ), varying the frequency of the signal itself. The injected signal (generated by the
pattern generator) has an amplitude of about 200 mV. The result can be seen in fig. 130: the signal
is increasingly attenuated as its frequency increases. This interesting behavior is due
to a low-pass filter inside the chip: the detector itself represents a capacitor, and some
parasitic resistances in the substrate combine with it to form a filter for the injected signal. This
phenomenon happens only with the signal injection in PWELL (back-bias). Signal attenuation is
present even when injecting a signal with a lower amplitude (20 mV); in this case the attenuation is
less evident, and this amplitude is lower than that of the FM radio signal. Nevertheless, even this
small-amplitude signal creates an appreciable excess noise in the chip, which we have measured.
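The low-pass behavior described above can be sketched with a first-order RC model, where the detector capacitance and the parasitic substrate resistance set the cut-off frequency. The R and C values below are illustrative assumptions, not the chip’s actual parasitics:

```python
import math

# First-order low-pass model of the attenuation seen on the PWELL injection:
# the detector capacitance and the parasitic substrate resistance form an RC
# filter. R and C are illustrative guesses, not the chip's actual values.
R = 50e3     # ohm, hypothetical parasitic substrate resistance
C = 100e-12  # F, hypothetical effective capacitance

def attenuation(f_hz, r=R, c=C):
    """|Vout/Vin| of a first-order RC low-pass at frequency f_hz."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * r * c) ** 2)

# Amplitude fraction reaching the chip at a few injection frequencies.
for f in (1e2, 1e4, 1e6, 1e8):
    print(f"{f:>9.0e} Hz: amplitude fraction = {attenuation(f):.3f}")
```

With such a model the injected amplitude is essentially unattenuated well below the cut-off frequency 1/(2πRC) and falls as 1/f well above it, qualitatively matching the trend in fig. 130.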
Figure 130: Real injected signal amplitude seen by the chip as a function of the frequency of the signal
itself. The injected signal has an amplitude of 200 mV. The x axis refers to the red points. The blue
dashed line represents the fixed background due to the FM radio.
NOISE INJECTION RESULTS. SIGNAL AMPLITUDE: 200 mV
In fig. 131 the excess noise as a function of frequency is plotted for the noise injection in PWELL
(back-bias). In this case a sinusoidal signal with an amplitude of about 200 mV has been injected;
the injected signal follows the attenuation curve seen in fig. 130. As we can see, in fig. 131 there
is a large resonance in the frequency range ~500 Hz – 1 MHz, where some data points are missing
from the graph. This is because, in the frequency range ~2 – 600 kHz, the S-curve fit used for the
noise calculation has a very bad chi-square distribution (evaluated over all the pixels in the
matrix): in that range the S-curve of each pixel does not have the expected shape (fig. 98), because
the chip does not work correctly.
We can also notice that at low frequency (100 Hz – 1 kHz) the excess noise fluctuates
(considering the four sectors) in the range ~22 – 33 e−, while at higher frequency this range is
significantly lower (~5 – 10 e−). This effect is due to the signal attenuation seen in fig. 130.
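For illustration, the S-curve model underlying the Threshold Scan and the chi-square check that flags badly behaved pixels can be sketched as follows. The function names and the measurement uncertainty sigma are hypothetical, not taken from the actual analysis software:

```python
import math

# Sketch of the S-curve model used in the Threshold Scan analysis: the hit
# probability vs. injected charge is an error function whose mean gives the
# threshold and whose width gives the noise. All values are illustrative.

def s_curve(q, threshold, noise):
    """Hit probability for an injected charge q (all quantities in electrons)."""
    return 0.5 * (1.0 + math.erf((q - threshold) / (math.sqrt(2.0) * noise)))

def chi_square(charges, hit_probs, threshold, noise, sigma=0.05):
    """Chi-square of the model against measured hit probabilities.

    A large value flags a pixel whose response has lost the expected S-curve
    shape, as happens inside the resonance region."""
    return sum(((p - s_curve(q, threshold, noise)) / sigma) ** 2
               for q, p in zip(charges, hit_probs))

charges = [80, 90, 100, 110, 120]                 # injected charges (e-)
well_behaved = [s_curve(q, 100, 5) for q in charges]
scrambled = [0.5] * len(charges)                  # flat response: chip misbehaving

print("good pixel chi2:", chi_square(charges, well_behaved, 100, 5))
print("bad  pixel chi2:", chi_square(charges, scrambled, 100, 5))
```

A well-behaved pixel gives a chi-square near zero, while a flat (non-working) response gives a very large value, which is the criterion for excluding the corresponding frequency points from the plots.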
Figure 131: Excess noise vs. Frequency relative to the noise injection in the back-bias (signal amplitude:
200 mV).
Figure 132: Noise Ratio vs. Frequency relative to the noise injection in the back-bias (signal amplitude: 200
mV).
In fig. 132 we can see the noise ratio as a function of the frequency of the injected signal. The same
observations can be made, noting that in the worst case (sector 3) the noise calculated with
injection is eight times greater than the one calculated without injection.
NOISE INJECTION RESULTS. SIGNAL AMPLITUDE: 20 mV
In fig. 133 the excess noise as a function of frequency is plotted for the noise injection in PWELL
(back-bias). In this case a sinusoidal signal with an amplitude of about 20 mV has been injected.
The injected signal follows an attenuation curve similar to the one seen in fig. 130 but, as said,
the measured attenuation is not as strong. As we can see, in fig. 133 there is a non-working zone in
the frequency range ~800 Hz – 750 kHz, where some data points are missing from the graph. We have
this gap for the same reasons explained for the previous plots. The frequency range of the
non-working region is, as expected, smaller than the one obtained with an injected signal of 200 mV.
Here the excess noise before and after the resonance is very similar because, as mentioned before,
the signal attenuation is not as strong in this case.
Figure 133: Excess noise vs. Frequency relative to the noise injection in the back-bias (signal amplitude: 20
mV).
Figure 134: Noise Ratio vs. Frequency relative to the noise injection in the back-bias (signal amplitude: 20
mV).
In fig. 134 we can see the noise ratio as a function of the frequency of the injected signal. The same
observations can be made, noting that the noise calculated with the injection is about 1.8 times
greater than the one calculated without noise injection in the worst case (sector 3).
ANALYSIS FOR THE CHIP SECTORS
As we can see from figs. 131, 132, 133 and 134, sectors 3 and 4 are more sensitive to the noise
injection than the other two sectors. This is due to the intrinsic characteristics of the sectors.
Table 15 lists the characteristics of the pixel charge collection electrode in the different sectors.
Sector 4 has a larger pwell opening and footprint (see paragraph 4.4), and this could be the reason
for its higher noise sensitivity. Sector 3 has the same characteristics as sector 2 except for the
reset mode; the diode reset could be the cause of its higher sensitivity to the external noise.
Sectors 1 and 2 have a smaller collection electrode and are therefore less sensitive to the injected
noise.
Table 15: Charge collection electrode parameters for each sector of the pixel chip.
5.2 Noise injection into pALPIDEfs-v2
In this section, the main characteristics of pALPIDEfs-v2 will be described. We will outline the
main differences between chip versions 1 and 2 and then present the results of the noise injection
into pALPIDEfs-v2.
5.2.1 Comparison between ALPIDE-v1 and -v2
The chip pALPIDEfs-v2 is the second full-scale prototype of the ALPIDE family. Its main new
feature is the I/O interface, which makes it possible to transmit data out of the chip. On this
version the nominal 1.2 Gb/s high-speed output link has been replaced by a 40 Mb/s one. The chip
pALPIDEfs-v2 can be seen wire-bonded to the carrier board in fig. 135; near the connector we can
see the logic with the LVDS drivers for the I/O interface.
With respect to pALPIDE-v1, the second version allows full integration in the IB and OB modules.
In table 16 we can see the comparison between the charge collection electrode characteristics in
each sector of pALPIDE-v1 and -v2. The specifications of the v2 sectors are important for the
following tests, because we also want to compare the various sectors of the pixel matrix of
pALPIDEfs-v2.
Figure 135: Chip pALPIDEfs-v2 soldered to the carrier board. The chip is under the black protection for the
wire bonds.
Table 16: Charge collection electrode main specifications for each sector of pALPIDE-v1 and -v2.
5.2.2 Experimental results: injection in the power planes (AVDD and DVDD)
In this paragraph the experimental results concerning the noise injection in the analogue and digital
power planes are presented. The experimental setup and procedure for the noise injection are the
same as described for pALPIDEfs-v1; also for this chip, we have removed all the decoupling
capacitors on AVDD and DVDD situated on the carrier board. This makes it possible to inject an
external noise into the chip power supply planes and into PWELL (back-bias).
INJECTION IN DVDD (ALPIDE-2)
In fig. 136 we can see the excess noise as a function of frequency for the noise injection in the
digital power supply plane of pALPIDE-2 (left). A sinusoidal signal with an amplitude of about
200 mV and a frequency in the range 1 kHz – 100 MHz has been injected. We don’t see any resonance
in the graph; this means that DVDD is not sensitive to the injected external noise.
Figure 136: Excess noise vs. Frequency relative to the noise injection in the digital power supply plane of
pALPIDE-2 (left). On the right the same results obtained for pALPIDE-1.
Comparing the two graphs in fig. 136, we don’t see any difference in the trend. This leads to the
conclusion that the digital power supply plane behaves the same for both pALPIDE-1 and -2 (the
response to the noise injection is equivalent).
In fig. 137 it is possible to see the Noise Ratio vs. frequency for pALPIDE-2 (left). The same
conclusions as for the previous graph hold: apart from small fluctuations, the Noise Ratio is always
compatible with one also for pALPIDE-2.
Figure 137: Noise Ratio vs. Frequency relative to the noise injection in the digital power supply plane of
pALPIDE-2 (left). On the right the same results obtained for pALPIDE-1.
INJECTION IN AVDD (ALPIDE-2)
In fig. 138 we can see the excess noise as a function of frequency for the noise injection in the
analog power supply plane of pALPIDE-2 (left). A sinusoidal signal with an amplitude of about
200 mV and a frequency in the range 1 kHz – 100 MHz has been injected. In the frequency range
~50 – 350 kHz there is a resonance peak with a maximum of about 2 e− in the worst case (sector 4).
This peak seems to be split into two peaks, because at 200 kHz the excess noise is significantly
lower. This result leads to the conclusion that the analog power plane is not very sensitive to the
injected external noise.
Comparing the two graphs in fig. 138, we can see that the resonance peak is higher for pALPIDE-2,
even if its frequency range is smaller. We can conclude that the AVDD of pALPIDE-2 is more
sensitive to the external noise, but this sensitivity is limited to a smaller frequency range.
Figure 138: Excess noise vs. Frequency relative to the noise injection in the analog power supply plane of
pALPIDE-2 (left). On the right the same results obtained for pALPIDE-1.
In fig. 139 it is possible to see the Noise Ratio vs. frequency for pALPIDE-2 (left). The noise ratio
is about 1.43 in the worst case (sector 4); thus, the same conclusions as before hold also in this
case.
Figure 139: Noise Ratio vs. Frequency relative to the noise injection in the analog power supply plane of
pALPIDE-2 (left). On the right the same results obtained for pALPIDE-1.
5.2.3 Experimental results: injection in the PWELL (back-bias)
In this section we present the results of the noise injection in the back-bias of pALPIDEfs-v2.
Also with this chip there is a signal attenuation, as verified for pALPIDEfs-v1. The antenna effect
due to the FM radio is also present and again provides the reference measurement for the
calculation of the noise without injection.
As seen in the previous plots, the noise calculated without injection differs by a maximum of 3
electrons across the four chip sectors. Since the silicon detector represents a capacitor of about
5 fF, the voltage that corresponds to a charge of 3 electrons is:
V = Q/C = (3 · 1.6·10⁻¹⁹ C) / (5·10⁻¹⁵ F) = 0.96·10⁻⁴ V ≅ 100 µV    (26)
If the chip substrate is sensitive to 100 µV, an injected sinusoidal signal with an amplitude of 1 mV
is already much greater than the expected noise. In the following we present the results of the noise
injection into PWELL (substrate) with signal amplitudes of 1, 2 and 20 mV.
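Equation (26) can be checked numerically. This is only an illustrative sketch of the charge-to-voltage conversion, with the 5 fF capacitance taken from the text:

```python
# Numerical check of eq. (26): voltage step on a ~5 fF collection capacitance
# corresponding to a charge of 3 electrons.
E_CHARGE = 1.602e-19   # C, elementary charge
C_DET = 5e-15          # F, detector capacitance quoted in the text

def charge_to_voltage(n_electrons, capacitance=C_DET):
    """V = Q/C for a charge of n_electrons elementary charges."""
    return n_electrons * E_CHARGE / capacitance

v = charge_to_voltage(3)
print(f"3 e- on 5 fF -> {v * 1e6:.0f} uV")   # prints "3 e- on 5 fF -> 96 uV"
```

The result, about 96 µV, rounds to the ~100 µV quoted in eq. (26), which is why a 1 mV injected signal is already an order of magnitude above the expected substrate noise.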
NOISE INJECTION RESULTS. SIGNAL AMPLITUDE: 1 and 2 mV
In fig. 140 we can see the noise ratio as a function of frequency for an injected signal with an
amplitude of 1 mV. The trend is flat; this means that the chip works correctly even with an injected
noise much greater than the expected one (100 µV). The additional noise measured
(Noise Ratio > 1) is due to the connection of the pattern generator to the PWELL input on the DAQ
board.
Figure 140: Noise Ratio vs. Frequency relative to the noise injection in PWELL of pALPIDE-2. Injected
signal amplitude equal to 1 mV.
In fig. 141 one can see the Noise Ratio vs. frequency obtained with an injected signal amplitude of
2 mV. The data has been taken only at particular frequencies where, from the measurements performed
on ALPIDE-1, we knew that resonance effects could appear. Indeed, a small resonance effect begins to
appear around 40 kHz. In any case, both 1 and 2 mV are much greater than the expected noise, yet the
chip still works with this large injected noise.
Figure 141: Noise Ratio vs. Frequency relative to the noise injection in PWELL of pALPIDE-2. Injected
signal amplitude equal to 2 mV.
NOISE INJECTION RESULTS. SIGNAL AMPLITUDE: 20 mV
In fig. 142 the excess noise as a function of frequency is plotted for the noise injection in PWELL
(back-bias) of pALPIDE-2 (left). In this case a sinusoidal signal with an amplitude of 20 mV has
been injected; it follows an attenuation curve similar to the one seen for pALPIDE-1. As we can see,
in fig. 142 there is a region in which the chip does not respond correctly, resembling a resonance
in the excess noise in the frequency range ~1 – 500 kHz (the resonance can be imagined as a peak
greater than 45 e−). The shape of this pseudo-resonance is asymmetric, with a steep falling edge.
Comparing with the same plot obtained for pALPIDE-1 (fig. 142, right), we can notice that the
resonance frequency range of pALPIDE-2 is smaller. The v2 chip is sensitive to the injected noise
in a smaller interval of frequencies, where the resonance peak is better visible (the Threshold Scan
works correctly at a larger number of frequencies).
Figure 142: Excess noise vs. Frequency relative to the noise injection in the back-bias of pALPIDE-2 (left)
with a signal amplitude of 20 mV. On the right the same plot obtained for pALPIDE-1.
In fig. 143 we can see the noise ratio as a function of the frequency of the signal injected into
pALPIDE-2 (left). The same observations can be made, noting that in the worst case (sector 4) the
noise calculated with injection is about 9 times greater than the one calculated without injection.
Figure 143: Noise Ratio vs. Frequency relative to the noise injection in the back-bias of pALPIDE-2 (left)
with a signal amplitude of 20 mV. On the right the same plot obtained for pALPIDE-1.
SECTORS ANALYSIS
As we can see from the previous plots relative to the noise injection in DVDD, AVDD and PWELL of
pALPIDE-2, the sectors behave more uniformly than those of pALPIDE-1. Nevertheless, sector 4 of
pALPIDE-2 is more sensitive to the injected noise because it has a diode reset mode; this reset mode
caused the same effect for sector 3 of pALPIDE-1.
In summary, sectors 3 and 4 of pALPIDE-2 have a higher sensitivity to the injected noise because of
the larger charge collection electrode spacing of their pixels (see table 16), but the difference
with respect to sectors 1 and 2 is only slight, whereas for pALPIDE-1 it was more evident.
Conclusions
The aim of this thesis was to perform electronics studies on the various components of the new ITS,
such as the flex circuits and the new monolithic pixel sensors optimized for the ITS.
Concerning the tests performed on the FPCs, we have seen that it is possible to transmit a
differential signal at the reference data rates (1.2 Gb/s and 400 Mb/s for the inner and middle-
outer layers, respectively) through FPC lines of different lengths. We haven’t found any problems
in terms of attenuation (very open eye diagrams) or jitter on the signal itself. Especially at high
data rates, a small signal attenuation and jitter have been observed in the samples, but they have
always been within the limits imposed for the test. These results are in agreement with the
specifications and simulations performed for the new ITS.
Concerning the tests performed on the chip pALPIDEfs-v1, we have verified the stability of the chip
response when the supply voltage is varied with respect to the nominal value of 1.8 V. This
stability has been verified by measuring the threshold, noise and noise occupancy for each sector of
the pixel matrix, analyzing more than 10% of the pixels. Then, we have proven that the absence of
the decoupling capacitors on the power supply voltages does not produce any modification in the
threshold, noise and noise occupancy trends and values. The same holds for a filter we had on the
reference voltage for the DACs inside the chip. These results permit the conclusion that the
decoupling capacitors need not be mounted on the FPC when the modules for the new ITS are
assembled.
The last test we performed was the noise injection, with a sinusoidal signal with a frequency in the
range 100 Hz – 100 MHz, into the pALPIDEfs-v1 and pALPIDEfs-v2 chips. This study is aimed at
verifying whether there are particular frequencies that produce resonances in the noise curve for
the power supply planes and the chip substrate. The results show that the digital supply plane is
not sensitive to an external noise with an amplitude of 200 mV, and that the analog supply plane has
only a very small sensitivity, considering that the injected 200 mV noise is much greater than
1 mV. On the other hand, the back-bias of the two chips is very sensitive to the noise injection: a
resonance effect begins to appear already between 1 and 2 mV at 40 kHz. These signal amplitudes are
much greater than the noise expected on PWELL (only ~100 µV), but we have verified that the chip
works correctly even with this large injected noise. Chip malfunction begins only at 5 mV
(≫ 100 µV) around tens of kHz. The two chip versions presented almost the same sensitivity to the
injected noise. Moreover, the resonances seen in these cases are very far from 20 MHz, the priority
encoder readout speed. All these results are in agreement with the chip specifications.
Future plans
The results of the tests on the FPC for the OB stave have allowed the INFN Torino team to validate
the circuits and data transmission protocols developed for the OB Stave.
The tests on pALPIDE-v1 and -v2 have allowed us to study the response of the chip to fluctuations
of the power supply voltages (digital and analogue). Furthermore, the results on the noise injection
have shown that the PWELL (back-bias) is more sensitive to the noise than the analog and digital
power supply planes, a new result obtained from my experimental measurements. It is important to
remember that the ALPIDE chips will probably be mounted in the new ITS, owing to their excellent
characteristics in terms of readout speed and power consumption. The tests performed are therefore
directly linked to the validation of these prototypes for the ITS upgrade detectors.
All the results obtained from the tests on the FPCs and the chips have been presented and approved
in the weekly meetings (WP5 and WP10) at CERN, Geneva.
As future plans we foresee repeating the same tests on the chips pALPIDE-v3 and -v4, building on
the experience developed with versions 1 and 2. Furthermore, we will test the OB modules with the
chips soldered to the FPC, analyzing the transmission of the signals generated by the chips.
Finally, my activities will also be devoted to the development of test procedures to validate the
final prototypes, and to the production and testing of a significant fraction of the OB Staves.
In the first half of 2016 the chip design will be frozen, as well as the layout of the FPC and of
the other components. In the second half, the ALICE ITS upgrade project will enter the
pre-production phase.
Acknowledgements
For their support, ideas, help and availability I thank my family, Chiara Liambo, my friends and
relatives, Stefania Beolè, Yasser Corrales Morales, Franco Benotto, Markus Keil, Miljenko Šuljić,
Paolo De Remigis, Florea Dumitrache, Barbara Pini, Luciano Ramello, Alessandra Lattuca, Andrea
Vastola, Federico Siviero.