Page 1: Solutions Manual Random Phenomena Fundamentals and Engineering Applications of Probability and Statistics

SOLUTIONS MANUAL FOR

Random Phenomena: Fundamentals and Engineering Applications of Probability and Statistics

by

Babatunde A. Ogunnaike

CRC Press is an imprint of the Taylor & Francis Group, an informa business

Boca Raton   London   New York


CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2011 by Taylor and Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number: 978-1-4398-2026-1 (Paperback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com

and the CRC Press Web site at http://www.crcpress.com


Chapter 1

Exercises

Section 1.1

1.1 From the yield data in Table 1.1 in the text, and using the given expression, we obtain

sA^2 = 2.05
sB^2 = 7.64

from where we observe that sB^2 is greater than sA^2.

1.2 A table of values for di is easily generated; the histogram, along with summary statistics obtained using MINITAB, is shown in the Figure below.

Figure 1.1: Histogram for d = YA − YB data with superimposed theoretical distribution. MINITAB summary for d: N = 50; Mean = 3.0467; StDev = 3.3200; Variance = 11.0221; Skewness = −0.188360; Kurtosis = −0.456418; Minimum = −5.1712; 1st Quartile = 1.0978; Median = 2.8916; 3rd Quartile = 5.2501; Maximum = 9.1111. Anderson-Darling Normality Test: A-Squared = 0.27, P-Value = 0.653. 95% confidence intervals: Mean (2.1032, 3.9903); Median (1.8908, 4.2991); StDev (2.7733, 4.1371).


From the data, the arithmetic average, d, is obtained as

d = 3.05    (1.1)

That this average is positive rather than zero suggests the possibility that YA may be greater than YB. However, conclusive evidence requires a measure of intrinsic variability.

1.3 Directly from the data in Table 1.1 in the text, we obtain yA = 75.52; yB = 72.47; and sA^2 = 2.05; sB^2 = 7.64. Also, directly from the table of differences, di, generated for Exercise 1.2, we obtain d = 3.05; however, sd^2 = 11.02, not 9.71. Thus, even though for the means,

d = yA − yB

for the variances,

sd^2 ≠ sA^2 + sB^2

The reason for this discrepancy is that for the variance equality to hold, YA must be completely independent of YB, so that the covariance between YA and YB is precisely zero. While this may be true of the actual random variables, it is not always strictly the case with data. The more general expression, which is valid in all cases, is as follows:

sd^2 = sA^2 + sB^2 − 2sAB    (1.2)

where sAB is the covariance between yA and yB (see Chapters 4 and 12). In this particular case, the covariance between the yA and yB data is computed as

sAB = −0.67

Observe that the value computed for sd^2 (11.02) is obtained by adding −2sAB to sA^2 + sB^2, as in Eq (1.2).
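The identity in Eq (1.2) is easy to verify numerically. The sketch below uses synthetic stand-in data (the actual Table 1.1 values are not reproduced in this manual); since Eq (1.2) holds exactly for any paired data set, the check does not depend on the particular numbers:

```python
import random
import statistics as st

# Synthetic stand-ins for the yA, yB data of Table 1.1 (illustrative only)
random.seed(0)
yA = [random.gauss(75.5, 1.4) for _ in range(50)]
yB = [random.gauss(72.5, 2.8) for _ in range(50)]
d = [a - b for a, b in zip(yA, yB)]

# sample variances (n - 1 denominator) and the sample covariance s_AB
s2_A, s2_B, s2_d = st.variance(yA), st.variance(yB), st.variance(d)
mA, mB = st.mean(yA), st.mean(yB)
s_AB = sum((a - mA) * (b - mB) for a, b in zip(yA, yB)) / (len(yA) - 1)

# the general identity of Eq (1.2): sd^2 = sA^2 + sB^2 - 2 sAB
assert abs(s2_d - (s2_A + s2_B - 2 * s_AB)) < 1e-9
```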

Section 1.2

1.4 From the data in Table 1.2 in the text, sx^2 = 1.2.

1.5 In this case, with x = 1.02 and variance sx^2 = 1.2, even though the numbers are not exactly equal, within limits of random variation they appear to be close enough, suggesting the possibility that X may in fact be a Poisson random variable.

Section 1.3

1.6 The histograms obtained with bin sizes of 0.75, shown below, contain 10 bins for YA versus 8 bins for the histogram of Fig 1.1 in the text, and 14 bins for YB versus 11 bins in Fig 1.2 in the text. These new histograms show a bit more detail, but the general features displayed for the data sets are essentially unchanged. When the bin sizes are expanded to 2.0, things are slightly different,


Figure 1.2: Histograms for YA, YB data with small bin size (0.75)

Figure 1.3: Histograms for YA, YB data with larger bin size (2.0)


as shown below. These histograms now contain fewer bins (5 for YA and 7 for YB) and hence, in general, show less of the true character of the data sets.

1.7 The values computed from the data for yA and sA imply that the interval of interest, yA ± 1.96sA, is 75.52 ± 2.81, or (72.71, 78.33). From the frequency distribution of Table 1.3 in the text, 48 of the 50 points lie in this range, the excluded points being (i) the single point in the 71.51–72.50 bin and (ii) the single point in the 78.51–79.50 bin. Thus, this interval contains 96% of the data.

1.8 For the YB data, the interval of interest, yB ± 1.96sB, is 72.47 ± 5.41, or (67.06, 77.88). From Table 1.4 in the text, we see that approximately 48 of the 50 points lie in this range (excluding the 2 points in the 77.51–78.50 bin). Thus, this interval also contains approximately 96% of the data.
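The interval arithmetic in these two exercises is easily reproduced; a minimal check using the summary values quoted above (with sB rounded to 2.76, as the text's numbers imply):

```python
import math

# ybar +/- 1.96 s for the YA data, from sA^2 = 2.05
yA_bar, s2_A = 75.52, 2.05
half = 1.96 * math.sqrt(s2_A)
lo_A, hi_A = yA_bar - half, yA_bar + half
assert round(lo_A, 2) == 72.71 and round(hi_A, 2) == 78.33

# likewise for the YB data, using sB = 2.76
yB_bar, s_B = 72.47, 2.76
lo_B, hi_B = yB_bar - 1.96 * s_B, yB_bar + 1.96 * s_B
assert round(lo_B, 2) == 67.06 and round(hi_B, 2) == 77.88
```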

1.9 From Table 1.5 in the text, we observe that the relative frequency associated with x = 4 is 0.033; that associated with x = 5 is 0.017; and 0 thereafter. The implication is that the relative frequency associated with x > 3 is 0.050. Hence, the value of x such that only 5% of the data exceeds this value is x = 3.

1.10 Using µ = 75.52 and σ = 1.43, the theoretical values computed for the function in Eq 1.3 in the text (for y = 72, 73, . . . , 79) are shown in the table below, along with the corresponding relative frequency values from Table 1.3 in the text.

YA Group      y    Theoretical f(y)   Relative Frequency
71.51-72.50   72   0.014              0.02
72.51-73.50   73   0.059              0.04
73.51-74.50   74   0.159              0.18
74.51-75.50   75   0.261              0.34
75.51-76.50   76   0.264              0.14
76.51-77.50   77   0.163              0.16
77.51-78.50   78   0.062              0.10
78.51-79.50   79   0.014              0.02

TOTAL (N = 50)     0.996              1.00

The agreement between the theoretical values and the relative frequency is reasonable but not perfect.
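The theoretical column can be regenerated in a few lines; it is assumed here that Eq 1.3 in the text is the Gaussian density function (the equation itself is not reproduced in this manual):

```python
import math

mu, sigma = 75.52, 1.43  # sample mean and standard deviation of the YA data

def f(y):
    # Gaussian density, assumed to be the form of Eq 1.3 in the text
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# e.g., the central entries of the table above
assert round(f(74), 3) == 0.159
assert round(f(75), 3) == 0.261
assert round(f(76), 3) == 0.264
```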

1.11 This time, with µ = 72.47 and σ = 2.76, and for y = 67, 68, 69, . . . , 79, we obtain the table shown below for the YB data (along with the corresponding relative frequency values from Table 1.4 in the text).


YB Group      y    Theoretical f(y)   Relative Frequency
66.51-67.50   67   0.020              0.02
67.51-68.50   68   0.039              0.06
68.51-69.50   69   0.066              0.08
69.51-70.50   70   0.097              0.16
70.51-71.50   71   0.125              0.04
71.51-72.50   72   0.142              0.14
72.51-73.50   73   0.142              0.08
73.51-74.50   74   0.124              0.12
74.51-75.50   75   0.095              0.10
75.51-76.50   76   0.064              0.12
76.51-77.50   77   0.038              0.00
77.51-78.50   78   0.019              0.04
78.51-79.50   79   0.009              0.00

TOTAL (N = 50)     0.980              1.00

There is reasonable agreement between the theoretical values and the relative frequency.

1.12 Using λ = 1.02, the theoretical values of the function f(x|λ) of Eq 1.4 in the text at x = 0, 1, 2, . . . , 6 are shown in the table below, along with the corresponding relative frequency values from Table 1.5 in the text.

X    Theoretical f(x|λ = 1.02)   Relative Frequency
0    0.3606                      0.367
1    0.3678                      0.383
2    0.1876                      0.183
3    0.0638                      0.017
4    0.0163                      0.033
5    0.0033                      0.017
6    0.0006                      0.000

TOTAL  1.0000                    1.000

The agreement between the theoretical f(x) and the data relative frequency is reasonable. (This pdf was plotted in Fig 1.6 of the text.)
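The theoretical column is just the Poisson pmf, f(x|λ) = e^(−λ) λ^x / x! (assumed here to be the form of Eq 1.4 in the text), evaluated with λ = 1.02:

```python
import math

lam = 1.02

def f(x):
    # Poisson pmf with lambda = 1.02
    return math.exp(-lam) * lam ** x / math.factorial(x)

theoretical = [round(f(x), 4) for x in range(7)]
assert theoretical == [0.3606, 0.3678, 0.1876, 0.0638, 0.0163, 0.0033, 0.0006]
```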

Application Problems

1.13 (i) The following is one way to generate a frequency distribution for this data:


X            Frequency   Relative Frequency
1.00-3.00    4           0.047
3.01-5.00    9           0.106
5.01-7.00    11          0.129
7.01-9.00    20          0.235
9.01-11.00   10          0.118
11.01-13.00  9           0.106
13.01-15.00  3           0.035
15.01-17.00  6           0.070
17.01-19.00  6           0.070
19.01-21.00  5           0.059
21.01-23.00  1           0.012
23.01-25.00  1           0.012

TOTAL        85          0.999

The histogram resulting from this frequency distribution is shown below, where we observe that it is skewed to the right. Superimposed on the histogram is a theoretical gamma distribution, which fits the data quite well. The variable in question, time-to-publication, is (a) non-negative, (b) continuous, and (c) has the potential to be a large number (if a paper goes through several revisions before it is finally accepted, or if the reviewers are tardy in completing their reviews in the first place). It is therefore not surprising that the histogram is skewed to the right as shown.

Figure 1.4: Histogram for time-to-publication data, with superimposed gamma distribution (shape 3.577, scale 2.830; N = 85)

(ii) From this frequency distribution and the histogram, we see that the "most popular" time-to-publication is in the range from 7–9 months (centered at 8 months); from the relative frequency values, we note that 41/85 or 0.482 is the


fraction of the papers that took longer than this to publish.
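The 41/85 figure can be confirmed directly from the frequency column of the table above:

```python
# Frequency counts from the table above (2-month bins, from 1.00-3.00 upward)
freqs = [4, 9, 11, 20, 10, 9, 3, 6, 6, 5, 1, 1]
n = sum(freqs)
assert n == 85

# papers taking longer than the modal 7.01-9.00 bin: every bin after the fourth
longer = sum(freqs[4:])
assert longer == 41
assert round(longer / n, 3) == 0.482
```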

1.14 (i) A plot of the histogram for the 20-sample averages, yi, generated as prescribed is shown in the top panel of the figure below. We note the narrower range occupied by this data set as well as its more symmetric nature. (Superimposed on this histogram is a theoretical normal distribution.)
(ii) A histogram of the average of averages, zi, is shown in the bottom panel of the figure. The "averaging" significantly narrows the range of the data and also makes the data set somewhat more symmetric.

Figure 1.5: Histograms of the 20-sample averages, y (top panel: Mean 10.12, StDev 0.8088, N 85), and of the averages of averages, z (bottom panel: Mean 10.38, StDev 0.7148, N 85), each with a superimposed normal distribution

1.15 (i) Average number of safety incidents per month, x = 0.500; the associatedvariance, s2 = 0.511. The frequency table is shown below:


X    Frequency   Relative Frequency
0    30          0.625
1    12          0.250
2    6           0.125
3    0           0.000

TOTAL  48        1.000
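The quoted mean and variance follow directly from this frequency table; a quick check:

```python
# incidents per month -> number of months observed
counts = {0: 30, 1: 12, 2: 6, 3: 0}
n = sum(counts.values())

mean = sum(x * c for x, c in counts.items()) / n
var = sum(c * (x - mean) ** 2 for x, c in counts.items()) / (n - 1)

assert n == 48
assert mean == 0.5
assert round(var, 3) == 0.511
```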

The resulting histogram is shown below.

Figure 1.6: Histogram for safety incidents data

(ii) It is reasonable to consider the relative frequency of occurrence of the safety incidents as an acceptable measure of the "chances" of obtaining each indicated number of occurrences: since fr(0) = 0.625, fr(1) = 0.250, fr(2) = 0.125, and fr(3) = fr(4) = fr(5) = 0.000, these may then be considered as reasonable estimates of the chances of observing the indicated occurrences.

(iii) From the postulated model:

f(x) = e^(−0.5) (0.5)^x / x!

we obtain the following table, which shows the theoretical probability of occurrence side-by-side with the relative frequency data; it indicates that the model actually fits the data quite well.


X    Theoretical Probability, f(x)   Relative Frequency
0    0.607                           0.625
1    0.303                           0.250
2    0.076                           0.125
3    0.012                           0.000
4    0.002                           0.000
5    0.000                           0.000

TOTAL  1.000                         1.000

(iv) Assuming that this is a reasonable model, we may use it to compute the "probability" of observing 1, 3, 2, 3 safety incidents (by pure chance alone), respectively, over a period of 4 consecutive months. From the theoretical results in (iii) above, we note that the probability of observing 1 incident (by pure chance alone) is a reasonable 0.303; for 2 incidents, the probability is 0.076; the probability of observing 3 incidents by pure chance alone, however, is low: 0.012, or 1.2%. Observing another set of 3 incidents just two months after observing the first set of 3 incidents seems to suggest that something more systematic than pure chance alone might be responsible. However, these statements are not meant to be definitive or conclusive; they merely illustrate how one may use this model to answer the posed question.

1.16 (i) The histograms for XB and XA are shown below, plotted side-by-side and on the same x-axis scale. The histograms cover the same range (from about 200 to about 360), and the frequencies are similar. Strictly on the basis of a visual inspection, therefore, it is difficult to say anything concrete about the effectiveness of the weight-loss program. It is difficult to spot any real difference between the two histograms.

Figure 1.7: Histograms for XB and XA


(ii) The histogram of the difference variable, D = XB − XA, shown below, reveals that this variable is not only positive, it actually ranges from about 2 to about 14 lbs. Thus, strictly from a visual inspection of this histogram, it seems obvious that the weight-loss program is effective. The implication of this histogram of the "before"-minus-"after" weight difference is that the "after" weight is consistently lower than the "before" weight (hence the difference variable that is consistently positive). However, this is not obvious from the raw data sets.

Figure 1.8: Histogram for the difference variable D = XB − XA

1.17 The relative frequency table is shown below (obtained by dividing the supplied absolute frequency data by 100, the total number of patients). The resulting frequency distribution plots are also shown below.

x    frO    frY
0    0.32   0.08
1    0.41   0.25
2    0.21   0.35
3    0.05   0.23
4    0.01   0.08
5    0.00   0.01

The average number of live births per delivered pregnancy is determined as follows: for the older group,

xO = (Total no. of live births)/(Total no. of patients)
   = [(0 × 32) + (1 × 41) + (2 × 21) + (3 × 5) + (4 × 1)]/100
   = 1.02

and, in a similar fashion, for the younger group,

xY = 201/100 = 2.01
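Both averages can be recovered from the relative frequency table, scaled back to patient counts out of 100:

```python
# patients per number of live births, recovered from the relative frequencies
older   = {0: 32, 1: 41, 2: 21, 3: 5, 4: 1, 5: 0}
younger = {0: 8, 1: 25, 2: 35, 3: 23, 4: 8, 5: 1}

def average(counts):
    # total live births divided by total patients
    return sum(x * c for x, c in counts.items()) / sum(counts.values())

assert average(older) == 1.02
assert average(younger) == 2.01
```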


Figure 1.9: Frequency distribution plots for IVF data (left panel: y_O, older group; right panel: y_Y, younger group)


The computed average number of live births appears to be higher for the younger group than for the older group, and the frequency distributions also appear to be different in overall shape. Observe that the distribution for the older group shows a peak at x = 1 while the peak for the younger group's frequency distribution is located at x = 2; furthermore, the distribution for the older group shows a much higher value at x = 0 than that for the younger group. These data sets therefore seem to indicate that the outcomes of the IVF treatments are different for these two groups.


Chapter 2

Exercises

Section 2.1

2.1 There are several different ways to solve the equation:

τ dC/dt = −C + C0 δ(t)    (2.1)

For example, by Laplace transforms: if the Laplace transform of the indicated function is defined as

C(s) = L{C(t)}

then, by taking Laplace transforms of each term in this linear equation, one immediately obtains:

τ s C(s) = −C(s) + C0

since L{δ(t)} = 1. This algebraic equation in the variable s is easily solved for C(s) to obtain

C(s) = C0/(τs + 1)    (2.2)

from where the inverse Laplace transform yields

C(t) = (C0/τ) e^(−t/τ)    (2.3)

as expected.
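The solution in Eq (2.3) can be spot-checked numerically: for t > 0 the impulse term vanishes, so C(t) must satisfy τ dC/dt = −C. A sketch with arbitrary illustrative values of C0 and τ:

```python
import math

C0, tau = 2.0, 3.0  # illustrative values

def C(t):
    # Eq (2.3): C(t) = (C0 / tau) * exp(-t / tau)
    return (C0 / tau) * math.exp(-t / tau)

# verify tau * dC/dt = -C at a few points, via a central difference
h = 1e-6
for t in (0.5, 1.0, 5.0):
    dCdt = (C(t + h) - C(t - h)) / (2 * h)
    assert abs(tau * dCdt + C(t)) < 1e-6
```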

2.2 In terms of the indicated scaled time variable, Eq (2.15) in the text may be written as:

F(t) = 1 − e^(−t)

and the required plot is shown below. The required percentage of dye molecules with age less than or equal to the mean residence time, τ, is obtained from Eq (2.15) when t = τ, or, in terms of the scaled time, at t = 1:

F(1) = 1 − e^(−1) = 0.632


Figure 2.1: Cumulative age distribution function F as a function of the scaled time variable t

so that 63.2% of the dye molecules have age less than or equal to τ .

2.3 Via direct integration by parts, letting u = θ and dv = e^(−θ/τ)dθ, one obtains:

(1/τ) ∫_0^∞ θ e^(−θ/τ) dθ = (1/τ) [ (−θτ e^(−θ/τ))|_0^∞ + ∫_0^∞ τ e^(−θ/τ) dθ ]
                          = (1/τ) [ 0 + τ (−τ e^(−θ/τ))|_0^∞ ]
                          = (1/τ) (τ · τ) = τ    (2.4)

as required.
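The same result can be confirmed by numerical quadrature of θf(θ), with f(θ) = (1/τ)e^(−θ/τ); the trapezoidal sum below truncates the integral at T = 60 ≫ τ, which leaves a negligible tail:

```python
import math

tau = 2.0  # illustrative value
N, T = 200_000, 60.0
h = T / N

# trapezoidal rule for the integral of theta * (1/tau) * exp(-theta/tau) over 0..T
total = 0.0
for i in range(N + 1):
    theta = i * h
    weight = 0.5 if i in (0, N) else 1.0
    total += weight * theta * (1.0 / tau) * math.exp(-theta / tau) * h

assert abs(total - tau) < 1e-3  # mean residence time equals tau
```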

Section 2.2

2.4 The plots of the two pdfs, f(x) and f(y), are shown below.

Figure 2.2: Probability distribution functions f(x) (Eq 2.24 in the text, solid line) and f(y) (Eq 2.25 in the text) for Problem 2.4


Variable x, whose pdf is shown in the solid line, has a higher degree of uncertainty associated with the determination of any particular outcome. This is because the range of possible values for x is much broader than the range of possible values for y.

2.5 From the given pdf, we obtain

f(0) = (0.5)^4 = 1/16
f(1) = 4(0.5)^4 = 1/4
f(2) = 6(0.5)^4 = 3/8
f(3) = 4(0.5)^4 = 1/4
f(4) = (0.5)^4 = 1/16

Intuitively, one would expect that if the coin is truly fair, then the most likely outcome when a coin is tossed 4 times is 2 heads and 2 tails. Any systematic deviation from this unbiased outcome will be less likely for a fair coin. The result of the computation is consistent with this intuitive notion of fairness.
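These are just the binomial probabilities C(4, x)(1/2)^4, which a one-liner reproduces:

```python
from math import comb

# f(x) = C(4, x) * (1/2)^4 for x heads in 4 tosses of a fair coin
f = [comb(4, x) * 0.5 ** 4 for x in range(5)]

assert f == [1/16, 4/16, 6/16, 4/16, 1/16]
assert max(f) == f[2]  # 2 heads and 2 tails is the most likely outcome
```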

Section 2.3

2.6 In tossing a fair coin once,

1. from the classical (a-priori) perspective, the probability of obtaining a head is specified as 1/2 because, of the two mutually exclusive outcomes, (H, T), one is favorable (H);

2. from the relative frequency (a-posteriori) perspective, if the experiment of tossing the fair coin once is repeated a total of nT times, and a head is observed nH times, then the probability of obtaining a head is specified as nH/nT;

3. and from the subjective perspective, the presumption that the coin is fair implies that there is no reason for a head to be observed preferentially over a tail; the probability of obtaining a head is therefore specified as 1/2.

Application Problems

2.7 (a) With two plug flow reactors in series, the overall residence time is obtained as a combination of the residence times in each reactor. First, by the plug flow assumption, θ1, the time for a dye molecule to traverse the entire length of the first reactor, is precisely l1A/F; similarly, θ2, the time for a dye molecule to traverse the second reactor, is precisely l2A/F. Therefore, the residence time


for the configuration involving these two reactors in series is given by:

θ = θ1 + θ2 = A(l1 + l2)/F    (2.5)

(b) With two CSTRs in series, the residence time distribution for the configuration is related to the dye concentration out of the second reactor, which is influenced directly by the dye concentration from the first reactor. Upon the usual assumption of ideal mixing, we obtain the following model equations from a material balance when a bolus of dye, with concentration C0δ(t), is introduced into the first reactor:

τ1 dC1/dt = −C1 + C0δ(t)    (2.6)
τ2 dC2/dt = −C2 + C1    (2.7)

where τi = Vi/F; i = 1, 2. This set of linear ordinary differential equations can be solved for C2(t) in many different ways (either as a consolidated second-order equation, obtained by differentiating the second equation and introducing the first for the resulting dC1/dt; or simultaneously, using matrix methods; or via Laplace transforms, etc.). By whichever method, the result is:

C2(t) = [C0/(τ1 − τ2)] (e^(−t/τ1) − e^(−t/τ2))    (2.8)

And now, as in the main text, if we define f(θ) as the instantaneous fraction of the initial number of injected dye molecules exiting the reactor at time t = θ, i.e., C2(t)/C0, we obtain:

f(θ) = (e^(−θ/τ1) − e^(−θ/τ2))/(τ1 − τ2)    (2.9)

as the required residence time distribution for this ensemble.

(c) For a configuration with the PFR first and the CSTR second, if C0δ(t) is the concentration at the inlet to the PFR, then C1(t), the concentration out of the PFR, is given by:

C1(t) = C0 δ(t − θ1)    (2.10)

where θ1 = l1A/F is the residence time in the PFR, as obtained in (a) above. When this is introduced into Eq (2.7) above (the model for the second-in-line CSTR), the result is

τ2 dC2/dt = −C2 + C0δ(t − θ1)    (2.11)

an equation best solved using Laplace transforms. Upon taking Laplace transforms and rearranging, we obtain

C2(s) = C0 e^(−θ1 s)/(τ2 s + 1)    (2.12)


from where Laplace inversion yields

C2(t) = (C0/τ2) e^(−(t − θ1)/τ2)    (2.13)

The desired residence time distribution is then obtained as:

f(θ) = (1/τ2) e^(−(θ − θ1)/τ2)    (2.14)

2.8 Let E represent the "event" that the ship in question took evasive action, and C the "event" that the ship counterattacked; let H represent the "event" that the ship was hit. From the relative frequency perspective, and assuming that the observed data set is large enough that the relative frequencies with which events have occurred can be regarded as approximately equal to the true probabilities of occurrence, we are able to compute the required probabilities as follows. The probability that any attacked warship will be hit, regardless of tactical response, is obtained as:

P(H) = (Total number of ships hit)/(Total number of ships attacked) = (60 + 62)/365 = 0.334    (2.15)

Similarly, the probability that a ship taking evasive action is hit is given by:

P(H|E) = (Total number of ships taking evasive action that were hit)/(Total number of ships taking evasive action) = 60/180 = 0.333    (2.16)

Finally, the probability that a counterattacking ship is hit is given by:

P(H|C) = (Total number of counterattacking ships that were hit)/(Total number of counterattacking ships) = 62/185 = 0.335    (2.17)

We now observe that all three probabilities are about equal, indicating that, in terms of the current classification, the tactical response of the attacked ship does not much matter in determining whether or not the ship is hit.
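The three relative frequency calculations amount to:

```python
# counts from the problem statement
hit_evasive, n_evasive = 60, 180
hit_counter, n_counter = 62, 185

P_H = (hit_evasive + hit_counter) / (n_evasive + n_counter)  # any attacked ship hit
P_H_given_E = hit_evasive / n_evasive                        # hit, given evasive action
P_H_given_C = hit_counter / n_counter                        # hit, given counterattack

assert round(P_H, 3) == 0.334
assert round(P_H_given_E, 3) == 0.333
assert round(P_H_given_C, 3) == 0.335
```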

2.9 (i) Assuming that past performance is indicative of future results, then from the relative frequency perspective, the probability of team A winning a generic game is given by

PG(A) = 9/15 = 0.6

since team A won 9 out of the 15 games played. Similarly, the probability of team B winning a generic game is given by:

PG(B) = 12/15 = 0.8


(ii) Assuming that the respective proportions of past wins are indicative of each team's capabilities and remain unchanged when the two teams meet, then

P(A)/P(B) = 0.6/0.8 = 3/4    (2.18)

Now, along with the constraint

P(A) + P(B) = 1    (2.19)

we have two equations to solve for the two unknown probabilities. First, from Eq (2.19), we have that

P(A)/P(B) + 1 = 1/P(B)

which, upon introducing Eq (2.18), yields

3/4 + 1 = 1/P(B)

from where the required probabilities are determined as:

P(B) = 4/7;  P(A) = 3/7    (2.20)
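The same two-equation system can be solved exactly with rational arithmetic:

```python
from fractions import Fraction

ratio = Fraction(3, 4)   # P(A)/P(B) = 0.6/0.8
P_B = 1 / (ratio + 1)    # from P(A)/P(B) + 1 = 1/P(B)
P_A = 1 - P_B

assert P_B == Fraction(4, 7)
assert P_A == Fraction(3, 7)
```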


Chapter 3

Exercises

Section 3.1

3.1 (i) Experiment: Toss the two dice once, at the same time.
Trial: A single toss of the two dice.
Outcomes: nB, nW: respectively, the number showing on the black die and on the white die.
Sample space: A set consisting of a total of 36 ordered pairs,

Ω = {(i, j) : i = 1, 2, . . . , 6; j = 1, 2, . . . , 6}

(ii) The simple events associated with the sum S = 7 are: E1 = (1, 6); E2 = (2, 5); E3 = (3, 4); E4 = (4, 3); E5 = (5, 2); E6 = (6, 1): a total of 6 distinct entries, because the first entry, nB, is distinguishable from the second.

3.2 (i) A = {(20, 0, 0)}; assuming that the complement of "approve" is "disapprove", then A* = {(0, 20, 0)}.
(ii) B = {n1 > n0}; B* = {n1 ≤ n0}.
(iii) C = {n2 > n1}.
(iv) D = {n2 > 10}.

Section 3.2

3.3 From the given sets, we obtain:

A ∪ B = {x : x = 0, 1, 2, 3, 4, . . .}

and

A ∩ B = Φ, the null set

3.4

B = ⋃_{i=1}^{∞} Ai = {x : 0 ≤ x ≤ 1}


3.5 Venn diagrams for the LHS and the RHS are acceptable (not shown). Alternatively, use the algebra of sets as follows:
(i) Let D = (A ∪ B)*. Then x ∈ D ⇒ x ∉ (A ∪ B) ⇒ x ∉ A and x ∉ B ⇒ x ∈ A* and x ∈ B*, so that x ∈ (A* ∩ B*), implying that D = (A* ∩ B*), as required.
(ii) Let D = (A ∩ B)*. Then x ∈ D ⇒ x ∉ (A ∩ B), which implies that either x ∉ A or x ∉ B, so that x ∈ A* or x ∈ B*, i.e., x ∈ (A* ∪ B*), implying that D = (A* ∪ B*), as required.
(iii) Similarly, let D = A ∩ (B ∪ C). Then x ∈ D ⇒ x ∈ A and (x ∈ B or x ∈ C) ⇒ (x ∈ A and x ∈ B) or (x ∈ A and x ∈ C), i.e., x ∈ (A ∩ B) ∪ (A ∩ C), implying that D = (A ∩ B) ∪ (A ∩ C), as required.
(iv) Finally, let D = A ∪ (B ∩ C). Then x ∈ D ⇒ x ∈ A or (x ∈ B and x ∈ C) ⇒ (x ∈ A or x ∈ B) and (x ∈ A or x ∈ C), i.e., x ∈ (A ∪ B) ∩ (A ∪ C), implying that D = (A ∪ B) ∩ (A ∪ C), as required.

3.6 Proceed by expressing the sets A and B in terms of disjoint sets as follows:

A = (A ∩ B) ∪ (A ∩ B*)
B = (B ∩ A) ∪ (B ∩ A*)

from which we obtain:

P(A) = P(A ∩ B) + P(A ∩ B*) ⇒ P(A ∩ B*) = P(A) − P(A ∩ B)    (3.1)
P(B) = P(B ∩ A) + P(B ∩ A*) ⇒ P(B ∩ A*) = P(B) − P(A ∩ B)    (3.2)

And now, the set A ∪ B, in terms of a union of disjoint sets, is

A ∪ B = (A ∩ B) ∪ (A ∩ B*) ∪ (B ∩ A*)

so that:

P(A ∪ B) = P(A ∩ B) + P(A ∩ B*) + P(B ∩ A*)

Now substitute Eqs (3.1) and (3.2) to obtain

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

as required.

3.7 Assuming that staff members are neither engineers nor statisticians, the total number of engineers plus statisticians is 100 − 25 = 75. This sum is made up of those who are purely engineers (E), those who are purely statisticians (S), and those who are both engineers and statisticians (B), so that the total number of engineers will be

E + B = 50


the total number of statisticians will be

S + B = 40

but

E + S + B = 75

hence we obtain

B = 15

A Venn diagram representing the supplied information is given below.

Figure 3.1: Venn diagram for Problem 3.7

From here, the required probability, P(B*), is obtained as

P(B*) = 1 − 15/100 = 0.85

Section 3.3

3.8 From the supplied information, obtain

Q(A1) = Σ_{x=0}^{3} (2/3)(1/3)^x = (2/3)(1 + 1/3 + 1/3^2 + 1/3^3) = 80/81

Similarly,

Q(A2) = Σ_{x=0}^{∞} (2/3)(1/3)^x = (2/3) Σ_{x=0}^{∞} (1/3)^x = (2/3) · 1/(1 − 1/3) = 1


3.9 From the definitions of the sets, we obtain

P(A) = ∫_4^∞ e^(−x) dx = e^(−4)
P(A*) = ∫_0^4 e^(−x) dx = 1 − e^(−4)
P(A ∪ A*) = ∫_0^∞ e^(−x) dx = 1

Alternatively, directly from P(A) = e^(−4), obtain

P(A*) = 1 − P(A) = 1 − e^(−4)

and from the fact that the two sets A and A* are mutually exclusive and complementary, obtain the final result that P(A ∪ A*) = P(A) + P(A*) = 1.

3.10 As in Problem 3.1, obtain the sample space as:

Ω = {(i, j) : i = 1, 2, . . . , 6; j = 1, 2, . . . , 6}

a set with 36 elements. By assigning equal probability to each element in the set, determine the required probabilities as follows:

(i) Since A = {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}, obtain

P(A) = 6/36 = 1/6

(ii) B = {nB < nW} is a set consisting of the 15 off-diagonal elements of the 6 × 6 array of the ordered pairs (nB, nW) for which nB < nW; hence,

P(B) = 15/36

(iii) B* = {nB ≥ nW} is the complementary set to B above, from which we immediately obtain

P(B*) = 1 − 15/36 = 21/36

Alternatively, we may note that B* = {nB ≥ nW} consists of the 15 off-diagonal elements for which nB > nW, in addition to the 6 diagonal elements for which nB = nW, yielding the same result.

(iv) C = {nB = nW} is a set consisting of the 6 diagonal elements for which nB = nW, so that

P(C) = 6/36 = 1/6

(v) D = {nB + nW = 5 or 9} may be represented as a union of two disjoint subsets, D1 ∪ D2, where D1 = {nB + nW = 5} and D2 = {nB + nW = 9}. More specifically,

D1 = {(1, 4), (2, 3), (3, 2), (4, 1)} ⇒ P(D1) = 4/36

and

D2 = {(3, 6), (4, 5), (5, 4), (6, 3)} ⇒ P(D2) = 4/36


from where we obtain:

P(D) = P(D1) + P(D2) = 2/9
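All five probabilities can be confirmed by brute-force enumeration of the 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # (nB, nW) pairs
assert len(omega) == 36

def prob(event):
    # equally likely outcomes: count of favorable cases over 36
    return Fraction(sum(1 for o in omega if event(o)), 36)

assert prob(lambda o: o[0] + o[1] == 7) == Fraction(1, 6)        # (i)
assert prob(lambda o: o[0] < o[1]) == Fraction(15, 36)           # (ii)
assert prob(lambda o: o[0] >= o[1]) == Fraction(21, 36)          # (iii)
assert prob(lambda o: o[0] == o[1]) == Fraction(1, 6)            # (iv)
assert prob(lambda o: o[0] + o[1] in (5, 9)) == Fraction(2, 9)   # (v)
```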

3.11 (i) Assuming that balls of the same color are indistinguishable, the samplespace is obtained as:

Ω = (R, G)i, i = 1, 2, . . . , 9; (R,R)i, i = 1, 2, 3; (G,G)i, i = 1, 2, 3

indicating 9 possible realizations of (R, G) outcomes; and 3 realizations each of(R,R) and (G,G) for a total of 15 elements. (Note that this is the same as

(62

),

the total number of ways of selecting two items from 6 when the order is notimportant.)(ii) Let SD be the event that the outcome consists of two balls of different colors.Upon assigning equal probability to each of the 15 elements in the sample space,we obtain the probability of drawing two balls of different colors as

P (SD) =915

= 0.6

(iii) Without loss of generality, let the balls be numbered R1, R2, R3, and G4, G5, G6;then under the indicated conditions, the sample space will consist of the follow-ing sets: SR the set of all red outcomes; SG the set of all greens, and SD, theset of different colored outcomes, i.e.,

Ω = SR, SG, SD

where SR = R1R2, R1R3, R2R1, R2R3, R3R1, R3R2, with a total of 6 elements, since the numbered balls are now all distinguishable, so that the outcome RiRj (indicating that the ball labeled Ri is drawn first, and the one labeled Rj is drawn next) is different from RjRi.

Similarly, SG = G1G2, G1G3, G2G1, G2G3, G3G1, G3G2. Finally, SD contains 18 elements: 9 elements of the form RiGj, i = 1, 2, 3; j = 1, 2, 3; and another 9 of the form GiRj, i = 1, 2, 3; j = 1, 2, 3. Again, note that the total number of elements, 30, is the same as the number of distinct permutations of 2 items selected from 6 (when the order of selection is important).

Upon assigning equal probability to the 30 elements of Ω, we obtain the required probability as:

P (SD) = 18/30 = 0.6

so that there is no difference between this result and the one obtained in (ii). (There are alternative means of obtaining this same result.)
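The enumeration in (iii) lends itself to a direct computational check; the following sketch (illustrative, not part of the original solution) lists all 30 ordered draws of two distinguishable balls and recovers the same probability:

```python
from itertools import permutations

# Label the six balls: three red (R) and three green (G).
balls = ["R1", "R2", "R3", "G1", "G2", "G3"]

# All 30 ordered draws of two distinct balls (order matters).
draws = list(permutations(balls, 2))

# Count draws whose two balls differ in color (first character of the label).
different = [d for d in draws if d[0][0] != d[1][0]]
p_diff = len(different) / len(draws)
print(len(draws), len(different), p_diff)  # 30 18 0.6
```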

3.12 (i) The random variable space is:

V = x : x = 0, 1, 2, 3, 4


(ii) The induced probability set function, PX(A) = P (ΓA), is, in this case:

PX(X = 0) = 9/13,

PX(X = 1) = 1/13,

PX(X = 2) = 1/13,

PX(X = 3) = 1/13,

PX(X = 4) = 1/13

(and similarly for the rest of the subsets.)

3.13 The sample space Ω is given by:

Ω = HHHH, HHHT, HHTH, HTHH, HHTT, HTHT, HTTH, HTTT, THHH, THHT, THTH, TTHH, THTT, TTHT, TTTH, TTTT

with a total of 16 elements. By defining the random variable X as the number of heads, upon assigning equal probabilities to each outcome, we obtain

VX = 0, 1, 2, 3, 4

from which the following probabilities are easily obtained:

PX(0) = 1/16
PX(1) = 4/16
PX(2) = 6/16
PX(3) = 4/16
PX(4) = 1/16

From the provided distribution function, we obtain, for p = 1/2,

f(x) = [4!/(x!(4 − x)!)] × (1/16)

so that f(0) = 1/16; f(1) = 4/16; f(2) = 6/16; f(3) = 4/16 and f(4) = 1/16, precisely as obtained earlier.

3.14 Determine first the complementary probability p∗ that all k birthdays are distinct so that no two are the same. The total number of possible combinations of birthdays is (365)^k, since any of the 365 days in the “normal” (as opposed to “leap”) year, 1989, can be the birthday for any of the k students in class. Of these, the number of “favorable” cases involves selecting exactly k distinct numbers from 365 without repetition.

Since the number of distinct permutations of length k < n from n items is n!/(n − k)!, we have that

p∗ = (1 − p) = 365!/[(365 − k)! × 365^k]
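The expression for p∗ can be evaluated for any class size k; the sketch below (an illustrative aside, computing the product iteratively rather than through the huge factorials) shows the well-known consequence that k = 23 already gives better-than-even odds of a shared birthday:

```python
# p = 1 - p* = 1 - 365!/[(365 - k)! * 365^k], computed as a running product.
def birthday_match_prob(k: int) -> float:
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (365 - i) / 365
    return 1.0 - p_distinct

print(birthday_match_prob(23))  # about 0.507: just past even odds
```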


Sections 3.4 and 3.5

3.15 (i) P (A) = P (E1) + P (E2) = 0.11 + 0.20 = 0.31
P (B) = P (E2) + P (E3) + P (E4) = 0.54
P (C) = P (E5) + P (E6) = 0.35
P (D) = P (E1) + P (E2) + P (E5) = 0.51

(ii) (A ∪B) = E1, E2, E3, E4 so that:

P (A ∪B) = P (E1) + P (E2) + P (E3) + P (E4) = 0.65

Similarly, P (A ∩ B) = P (E2) = 0.2
P (A ∪ D) = P (D) = 0.51
P (A ∩ D) = P (A) = 0.31
P (B ∪ C) = P (B) + P (C) = 0.89
P (B ∩ C) = P (Φ) = 0.0

(iii) P (B|A) = P (B ∩ A)/P (A) = 0.2/0.31 = 0.645
P (A|B) = P (A ∩ B)/P (B) = 0.2/0.54 = 0.370
P (B|C) = P (B ∩ C)/P (C) = 0
P (D|C) = P (D ∩ C)/P (C) = 0.2/0.35 = 0.571

B and C are mutually exclusive because (B ∩ C) = Φ and P (B|C) = 0.

3.16 Let Bi indicate the event that the ith child is a boy; then, by independence and equiprobability of these events, the probability of interest is obtained as:

P (B1, B2, B3) = P (B1)P (B2)P (B3) = (0.5)3 = 0.125

Now, under the stated conjecture, the required probability is P (B3 ∩ B1B2), given the following information:

P (B3|B1B2) = 0.8

By definition of conditional probabilities, we know that:

P (B3 ∩B1B2) = P (B3|B1B2)P (B1B2)

And now, since by independence and equiprobability, P (B1B2) = 0.5² = 0.25, we obtain:

P (B3 ∩B1B2) = 0.8× 0.25 = 0.2

3.17 (i) If B attracts A, then

P (B|A) > P (B) (3.3)

By definition, P (B|A) = P (B ∩ A)/P (A) which, when substituted into the LHS in Eq (3.3), yields

P (B ∩ A) > P (A)P (B)


And now, since P (B) > 0 and P (B ∩A) = P (A ∩B), we obtain

P (A ∩ B)/P (B) > P (A) ⇒ P (A|B) > P (A)

as required.

(ii) It is required to show that P (B∗|A) < P (B∗) follows from Eq (3.3) above. First, Eq (3.3) implies:

1− P (B|A) < P (B∗)

and, again, since P (B|A) = P (A ∩ B)/P (A), we have that

[P (A) − P (A ∩ B)]/P (A) < P (B∗) (3.4)

Now, as a union of two disjoint sets,

A = (B∗ ∩A) ∪ (A ∩B)

so that,

P (A) = P (B∗ ∩ A) + P (A ∩ B)

As a result, Eq (3.4) becomes

P (B∗ ∩ A)/P (A) > P (B∗) ⇒ P (B∗|A) < P (B∗)

as required.

3.18 That A and B are independent implies

P (A|B) = P (A); P (B|A) = P (B)

from where we immediately obtain:

1 − P (A|B) = P (A∗); or 1 − P (A ∩ B)/P (B) = P (A∗) (3.5)

And now, because

B = (A∗ ∩B) ∪ (A ∩B) ⇒ P (B) = P (A∗ ∩B) + P (A ∩B)

then Eq (3.5) becomes

P (A∗ ∩ B)/P (B) = P (A∗); or P (A∗ ∩ B) = P (A∗)P (B) (3.6)

(which implies, in its own right, that A∗ is independent of B).

Now, because (A∗ ∩ B∗) = (A ∪ B)∗, so that

P (A∗ ∩B∗) = 1− P (A ∪B)


and, in terms of a union of disjoint sets,

(A ∪B) = (A∗ ∩B) ∪A

so that

P (A ∪ B) = P (A∗ ∩ B) + P (A)

it follows therefore that

P (A∗ ∩B∗) = 1− P (A∗ ∩B)− P (A) = P (A∗)− P (A∗ ∩B)

Upon introducing Eq (3.6), we obtain,

P (A∗ ∩B∗) = P (A∗)− P (A∗)P (B) = P (A∗)[1− P (B)] = P (A∗)P (B∗)

as required.

3.19 By definition of conditional probabilities,

P (A ∩ B|A ∪ B) = P [(A ∩ B) ∩ (A ∪ B)]/P (A ∪ B) (3.7)

The following identities will be useful:

(A ∩ B) ∩ (A ∪ B) = (A ∩ B)
(A ∩ B) ∩ A = (A ∩ B)
(A ∩ B) ∩ B = (A ∩ B)

From here, first, Eq (3.7) becomes:

P (A ∩ B|A ∪ B) = P (A ∩ B)/P (A ∪ B) (3.8)

Now, because (A ∪B) = A ∪ (B ∩A∗), so that

P (A ∪B) = P (A) + P (B ∩A∗)

it follows that P (A ∪B) ≥ P (A), so that Eq (3.8) becomes

P (A ∩ B|A ∪ B) ≤ P (A ∩ B)/P (A)

and from the identities shown above,

P (A ∩ B|A ∪ B) ≤ P (A ∩ B ∩ A)/P (A) = P (A ∩ B|A) (3.9)

Equality holds when P (B ∩A∗) = 0, which will occur when B ⊂ A.


3.20 From the problem definition, we can deduce that:
(i) The family has two children, each equally likely to be male (M) or female (F );
(ii) At least one child is male;
(iii) Required: the probability that the other child is female.

From (i), obtain the sample space as

Ω = MM, MF,FM,FF

a set consisting of 4 equally likely outcomes, where MF is distinct from FM . From (iii) we know that the event of interest is EI = MF, FM, that the unknown sibling is female, either younger or older; and from (ii), we deduce that the conditioning event is EC = MM, MF, FM. From here, therefore,

P (EI |EC) = P (EI ∩ EC)/P (EC) = P (EI)/P (EC) = 2/3

This result appears counterintuitive at first because one would think (erroneously) that since we already know that one child is male, then there are only two equally likely options left: the unknown sibling is either male or female. This would then lead to the erroneous conclusion that the required probability is 1/2. The error arises because the outcomes MF (the older sibling is male) and FM (the older sibling is female) are separate and distinct, and equally likely; there are therefore in fact three possible outcomes that are consistent with fact (ii) above (at least one male child in the family), not two; and of these possible outcomes, two are “favorable.”
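The argument can also be checked by brute-force enumeration of the four equally likely families; the sketch below is illustrative only:

```python
from itertools import product

# Equally likely two-child families, ordered by birth: MM, MF, FM, FF.
families = list(product("MF", repeat=2))

# Conditioning event: at least one child is male.
at_least_one_boy = [f for f in families if "M" in f]

# Event of interest within the conditioning event: the other child is female.
one_boy_one_girl = [f for f in at_least_one_boy if "F" in f]

print(len(one_boy_one_girl), "/", len(at_least_one_boy))  # 2 / 3
```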

3.21 By independence and by the definition of the conditions required for the series-configured system to function,

P (SS) = P (A)P (B) = 0.99× 0.9 = 0.891

is the probability that the system functions.

With the parallel configuration, and by the definition of the conditions required for the parallel-configured system to fail,

P (FP ) = P (FA)P (FB) = [1− P (A)][1− P (B)]

is the probability that the parallel-configured system fails. The required probability that the system functions is therefore given by:

P (SP ) = 1− P (FP ) = 1− (0.01× 0.1) = 1− 0.001 = 0.999

Clearly the probability that the parallel-configured system functions is higher. This is reasonable because with such a configuration, one component is redundant, acting as a back-up for the other component. Because only one component is required to function for the entire system to function, even if one fails, the system continues to function if the redundant component still functions. With


the series configuration, the opposite is the case: if one component fails, the entire system fails.
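The two reliability calculations can be reproduced directly; in this illustrative sketch, p_a and p_b denote the component success probabilities:

```python
# Component success probabilities from the problem statement.
p_a, p_b = 0.99, 0.9

# Series: both components must function.
p_series = p_a * p_b

# Parallel: the system fails only if both components fail.
p_parallel = 1 - (1 - p_a) * (1 - p_b)

print(round(p_series, 3), round(p_parallel, 3))  # 0.891 0.999
```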

3.22 From the theorem of total probability,

P (S) = P (S|Ck)P (Ck) + P (S|C∗k)P (C∗k)

and given P (Ck) = 0.9 (so that P (C∗k) = 0.1), in conjunction with the given conditional probabilities, obtain

P (S) = (0.9× 0.9) + (0.8× 0.1) = 0.89

APPLICATION PROBLEMS

3.23 (i) The required Lithium toxicity probabilities are obtained as follows:

1. From the table, P (L+) = 51/150 = 0.340;

2. P (L+|A+) = 30/47 = 0.638, a moderate value indicating that there is a reasonable chance that the assay will correctly identify high lithium concentrations.

3. P (L+|A−) = 21/103 = 0.204, a fairly high (about 20%) chance of missed diagnoses.

(ii) The required blood lithium assay probabilities are obtained as follows:

1. P (A+) = 47/150 = 0.313;

2. P (A+|L+) = 30/51 = 0.588. This quantity shows the percentage of patients known to have high lithium concentrations that are identified as such by the assay. Now, given a generic function, y = f(x), ∆y, the “response” in y as a result of a change, ∆x, in x, is given by

∆y = S∆x (3.10)

where S = ∂y/∂x, the “local” sensitivity function, indicates how sensitive y is to unit changes in x. In this particular application, by definition of conditional probabilities,

P (A+ ∩ L+) = P (A+|L+)P (L+)

Here, P (A+ ∩ L+) is representative of the theoretical proportion of the entire population with high lithium toxicity that are correctly identified by the assay as such, while P (L+) is the proportion of the population with high lithium toxicity. By analogy with Eq (3.10), observe that P (A+|L+) plays the role of the sensitivity function.

The computed value of 0.588 (not quite 0.6) indicates that the assay is not overly sensitive.


3. From Bayes’ rule,

P (L+|A+) = P (A+|L+)P (L+)/P (A+) = (30/51 × 51/150)/(47/150) = 30/47 = 0.638

as has already been computed directly in (i) above.
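The counts quoted in this solution (150 patients in total, 51 with high lithium, 47 positive assays, 30 in both categories) are enough to reproduce the Bayes' rule computation numerically; the sketch below is illustrative:

```python
# Counts recovered from the solution text.
n_total, n_Lpos, n_Apos, n_both = 150, 51, 47, 30

p_L = n_Lpos / n_total         # P(L+)
p_A = n_Apos / n_total         # P(A+)
p_A_given_L = n_both / n_Lpos  # P(A+|L+), the "sensitivity"

# Bayes' rule recovers P(L+|A+) from the three quantities above.
p_L_given_A = p_A_given_L * p_L / p_A
print(round(p_A_given_L, 3), round(p_L_given_A, 3))  # 0.588 0.638
```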

3.24 The given information translates as follows:

• The sample space is: Ω = T1, T2, T3, T4, T5, where Ti is the outcome that the polymorph produced at any particular time is of Type i, i = 1, 2, . . . , 5;

• The “probability distribution” is P (T1) = 0.3; P (T2) = P (T3) = P (T4) =0.2 and P (T5) = 0.1.

• The “events” are the applications: A = T1, T2, T3; and B = T2, T3, T4.

(i) The “event” of interest in this case is A = T1, T2, T3, so that

P (A) = 0.3 + 0.2 + 0.2 = 0.7

(ii) The required probability is P (T2|B), which, by definition, is:

P (T2|B) = P (T2 ∩ B)/P (B) = 0.2/(0.2 + 0.2 + 0.2) = 1/3

(iii) The required probability P (A|B) is obtained in the usual manner, by definition of conditional probability, i.e.,

P (A|B) = P (A ∩ B)/P (B) = [P (T2) + P (T3)]/0.6 = 2/3

(iv) The converse probability, P (B|A), may be obtained by Bayes’ rule: i.e.,

P (B|A) = P (A|B)P (B)/P (A) = (2/3 × 0.6)/0.7 = 4/7.

3.25 Consider a person selected randomly from the population who undergoes this test; define the following events:
D: the event that disease (abnormal) cells are present;
S: the event that the sample misses the disease (abnormal) cells;
W : the event that the test result is wrong;
C: the event that the test result is correct.

(i) First, observe that the test result can be wrong (a) when disease (abnormal) cells are present but the test fails to identify them; or (b) when there are no abnormal cells present but the test misclassifies normal cells as abnormal.

By the theorem of total probability, obtain:

P (W ) = P (W |D)P (D) + P (W |D∗)P (D∗) (3.11)


From the given information, P (D) = θD, so that P (D∗) = 1 − θD; also, P (W |D∗) = θm. The only term yet to be determined in this equation is P (W |D); it is associated with the event that the test is wrong given that disease cells are present, an event consisting of two mutually exclusive parts: (a) when the disease cells are present but the sample misses them, or (b) when the sample actually contains the disease cells but the test fails to identify them. Thus, the required probability is obtained as:

P (W |D) = P (W ∩ S|D) + P (W ∩ S∗|D) = θs(1 − θm) + (1 − θs)θf (3.12)

Upon introducing Eq (3.12) into Eq (3.11), we obtain

P (W ) = [θs(1 − θm) + (1 − θs)θf ]θD + θm(1 − θD)

and the probability that the test is correct, P (C), is obtained as 1 − P (W ).

(ii) Let A be the event that an abnormality has been reported. The required probability is P (D∗|A), which, according to Bayes’ Theorem, is obtained as:

P (D∗|A) = P (A|D∗)P (D∗)/P (A) = P (A|D∗)P (D∗)/[P (A|D)P (D) + P (A|D∗)P (D∗)] (3.13)

All the terms in this expression are known except P (A|D), which, as in (i) above, is obtained as follows:

P (A|D) = P (A ∩ S|D) + P (A ∩ S∗|D) = θmθf + (1 − θs)(1 − θm) (3.14)

so that Eq (3.13) becomes:

P (D∗|A) = θm(1 − θD)/{[θmθf + (1 − θs)(1 − θm)]θD + θm(1 − θD)}

3.26 With the given information (and θD = 0.02), obtain

P (W ) = (0.1 × 0.9 + 0.9 × 0.05) × 0.02 + 0.1 × 0.98 = 0.1007 ⇒ P (C) = 0.8993

The second probability is:

P (D∗|A) = 0.098/(0.0163 + 0.098) = 0.857

a very high probability that the test will return an abnormality result when there is in fact no abnormality present.

A major contributor to this problem is the rather high probability of misclassifying normal cells as abnormal, θm = 0.1. When θm is reduced to a more manageable 0.01, the results are:

P (W ) = (0.1× 0.99 + 0.9× 0.05)× 0.02 + 0.01× 0.98 = 0.0127


so that P (C) = 0.987, along with

P (D∗|A) = 0.0098/(0.0178 + 0.0098) = 0.355

Thus, the probability that the test result is correct increases from almost 0.9 to 0.987, while the probability of reporting an abnormality in the absence of a real abnormality drops significantly from 0.857 to 0.355.
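The P (W ) computations for the two values of θm can be reproduced with a small helper function (an illustrative sketch; the name p_wrong is not from the text):

```python
# P(W) = [theta_s(1-theta_m) + (1-theta_s)theta_f] theta_D + theta_m(1-theta_D)
def p_wrong(theta_D, theta_s, theta_f, theta_m):
    p_w_given_D = theta_s * (1 - theta_m) + (1 - theta_s) * theta_f
    return p_w_given_D * theta_D + theta_m * (1 - theta_D)

# Parameters implied by Exercise 3.26: theta_D = 0.02, theta_s = 0.1, theta_f = 0.05.
for theta_m in (0.1, 0.01):
    pw = p_wrong(0.02, 0.1, 0.05, theta_m)
    print(round(pw, 4), round(1 - pw, 4))
```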

3.27 (i) P (Q1) = 336/425 = 0.791; also,
P (Q∗3) = 1 − P (Q3) = 1 − 18/425 = 0.958
(ii) P (Q1|M1) = 110/150 = 0.733; also,
P (Q1|M2 ∪ M3) = (150 + 76)/(180 + 90) = 226/275 = 0.822
(iii) P (M3|Q3) = 1/18 = 0.056; also, P (M2|Q2) = 33/71 = 0.465

3.28 Let B1 be the event that a tax filer selected at random belongs to the first income bracket (below $10,000); B2, if in the second bracket ($10,000–$24,999); B3, if in the third bracket ($25,000–$49,999); and B4, if in the fourth bracket ($50,000 and above). Furthermore, let A be the event that the tax filer is audited. Then the given information may be translated as follows:

• The percent audited column corresponds to P (A|Bi);

• P (Bi) is the entry in the “Number of filers” column divided by 89.8 (the total number of filers).

(i) By the theorem of total probability,

P (A) = ∑_{i=1}^{4} P (A|Bi)P (Bi)
= [(31.4 × 0.0034) + (30.4 × 0.0092) + (22.2 × 0.0205) + (5.5 × 0.04)]/89.8
= 0.0065

(The same result is obtained by expressing P (A) as the total number audited divided by the total number in the population.)

(ii) The required probability, P (B3 ∩ A), is obtained as:

P (B3 ∩ A) = P (A|B3)P (B3) = (22.2/89.8) × 0.205 = 0.051

(iii) P (B4|A) is obtained from P (B4 ∩ A)/P (A) or, equivalently, by expanding the indicated ratio of probabilities into Bayes’ rule, i.e.,

P (B4|A) = P (B4 ∩ A)/P (A) = P (A|B4)P (B4)/∑_{i=1}^{4} P (A|Bi)P (Bi) = 0.376


Chapter 4

Exercises

Section 4.1

4.1 Under the specified conditions (no twins), the only two possible outcomes after each single delivery are B, for “Boy,” and G for “Girl”; the desired sample space is therefore:

Ω = BBB, BBG, BGB, GBB, BGG, GBG, GGB, GGG

If X is the random variable representing the total number of girls born to the family, then the random variable space is clearly:

VX = 0, 1, 2, 3

Now, given that P (B) = 0.75, implying that P (G) = 0.25, we are able to obtain first, that:

P (BBB) = (0.75)^3 = 0.422

Thus, in terms of the random variable, X, this event corresponds to X = 0 (no girls); i.e., PX(0) = 0.422.

Next, the remaining probabilities for X = 1, 2 and 3 are obtained as follows:

PX(1) = P (BBG, BGB, GBB) = P (BBG) + P (BGB) + P (GBB)

by virtue of each of the three outcomes being mutually exclusive. And now, since P (BBG) = (0.75)^2 × 0.25 = 0.1406 and P (BBG) = P (BGB) = P (GBB), we obtain finally that

PX(1) = 0.422

Via similar arguments, we obtain:

PX(2) = P (BGG) + P (GBG) + P (GGB) = 3× 0.047 = 0.141

and

PX(3) = P (GGG) = 0.016

The complete probability distribution for all the possible combinations of children that can be born to this family is represented in the table below (with X as the total number of girls).



X        f(x)
0        0.422
1        0.422
2        0.141
3        0.015
TOTAL    1.000
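The table is a binomial distribution with n = 3 trials and success probability p = 0.25, and can be regenerated directly (an illustrative sketch):

```python
from math import comb

# X = number of girls in three births, with P(girl) = 0.25.
p_girl = 0.25
dist = {x: comb(3, x) * p_girl**x * (1 - p_girl)**(3 - x) for x in range(4)}

for x, fx in dist.items():
    print(x, round(fx, 3))  # 0.422, 0.422, 0.141, 0.016
```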

4.2 In this case, the sample space, Ω, is given by:

Ω = HHHH, HHHT, HHTH, HTHH, HHTT, HTHT, HTTH, HTTT, THHH, THHT, THTH, TTHH, THTT, TTHT, TTTH, TTTT

with a total of 16 elements, ωi; i = 1, 2, . . . , 16, in the order presented above, with ω1 = HHHH, ω2 = HHHT, . . . , ω16 = TTTT .

If X is the total number of tails, then the random variable space is:

V = 0, 1, 2, 3, 4

The set A corresponding to the event that X = 2 therefore consists of 6 mutually exclusive outcomes, ω5, ω6, ω7, ω10, ω11, ω12, as defined above. Assuming equiprobable outcomes, as usual, we obtain:

P (X = 2) = 6/16 = 3/8

4.3 (i) From the spaces given in Eqs (4.10) and (4.11) in the text, we obtain that the event A, that X = 7, is a set consisting of a total of 6 elementary events: E1 = (1, 6); E2 = (2, 5); E3 = (3, 4); E4 = (4, 3); E5 = (5, 2); E6 = (6, 1). With equiprobable outcomes, we obtain therefore that:

P (A) = P (X = 7) = 6/36 = 1/6

(ii) The set B, representing the event that X = 6, is:

B = (1, 5), (2, 4), (3, 3), (4, 2), (5, 1)

consisting of 5 elements, so that

P (B) = 5/36

The set C can be represented as a union of two disjoint sets, C1 and C2, where C1, representing the event that X = 10, is:

C1 = (4, 6), (5, 5), (6, 4)

while C2, representing the event that X = 11, is:

C2 = (5, 6), (6, 5)


And now, either by forming C explicitly as the union of these two sets, or by virtue of these sets being disjoint, so that

P (C) = P (C1) + P (C2)

we obtain, upon assuming equiprobable outcomes,

P (C) = 3/36 + 2/36 = 5/36

Section 4.2

4.4 From Eqn (4.11) in the text, which gives the random variable space as

V = 2, 3, 4, . . . , 12

the desired complete pdf is obtained as follows. Let A2 represent the event that X = 2; then:
A2 = (1, 1), so that P (X = 2) = 1/36.
Now, if An represents the event that X = n; n = 2, 3, . . . , 12, then similarly,
A3 = (1, 2), (2, 1), so that P (X = 3) = 2/36.
A4 = (1, 3), (2, 2), (3, 1), so that P (X = 4) = 3/36.
A5 = (1, 4), (2, 3), (3, 2), (4, 1), so that P (X = 5) = 4/36.
A6 = (1, 5), (2, 4), (3, 3), (4, 2), (5, 1), so that P (X = 6) = 5/36.
A7 = (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1), so that P (X = 7) = 6/36.
A8 = (2, 6), (3, 5), (4, 4), (5, 3), (6, 2), so that P (X = 8) = 5/36.
A9 = (3, 6), (4, 5), (5, 4), (6, 3), so that P (X = 9) = 4/36.
A10 = (4, 6), (5, 5), (6, 4), so that P (X = 10) = 3/36.
A11 = (5, 6), (6, 5), so that P (X = 11) = 2/36. Finally,
A12 = (6, 6), so that P (X = 12) = 1/36.

The resulting pdf, f(x), and the cdf, F (x) (obtained cumulatively from the values shown above), are presented in the table below. A plot of the pdf and cdf is shown in Fig 4.1.

X     f(x)    F (x)
0     0       0
1     0       0
2     1/36    1/36
3     2/36    3/36
4     3/36    6/36
5     4/36    10/36
6     5/36    15/36
7     6/36    21/36
8     5/36    26/36
9     4/36    30/36
10    3/36    33/36
11    2/36    35/36
12    1/36    36/36
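The table can be regenerated exactly by enumerating the 36 equiprobable rolls; this sketch uses Python's Fraction type to keep the probabilities exact (note that Fraction reduces 21/36 to 7/12 when printed):

```python
from itertools import product
from fractions import Fraction

# pdf of the sum X of two fair dice, as exact fractions.
rolls = list(product(range(1, 7), repeat=2))
f = {s: Fraction(sum(1 for a, b in rolls if a + b == s), 36) for s in range(2, 13)}

# cdf accumulated over the support {2, ..., 12}.
F, total = {}, Fraction(0)
for s in range(2, 13):
    total += f[s]
    F[s] = total

print(f[7], F[7])  # 1/6 7/12  (i.e., 21/36)
```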


[Plots of f(x) and F (x) omitted.]

Figure 4.1: The pdf and cdf for the double dice experiment.


4.5 (i) From the given table, the required cdf, F (x), is obtained as shown in the table below:

x        1      2      3      4      5
F (x)    0.10   0.35   0.65   0.90   1.00

(ii) From the given table, obtain:
P (X ≤ 3) = 0.65;
P (X < 3) = F (2) = 0.35;
P (X > 3) = 1 − F (3) = 0.35;
P (2 ≤ X ≤ 4) = f(2) + f(3) + f(4) = 0.80.

4.6 By definition, for the discrete random variable,

F (x) = ∑_{i=1}^{x} f(i)

so that

F (x − 1) = ∑_{i=1}^{x−1} f(i)

from where it is clear that

f(x) = F (x) − F (x − 1)

For the particular cdf given here, the required pdf is obtained as:

f(x) = (x/n)^k − [(x − 1)/n]^k ; x = 1, 2, . . . , n

Specifically for k = 2 and n = 8, the cdf and pdf are given respectively as:

F (x) = (x/8)² ; x = 1, 2, . . . , 8

and

f(x) = (x/8)² − [(x − 1)/8]² = (2x − 1)/64 ; x = 1, 2, . . . , 8

A plot of the pdf and the cdf is shown in Fig 4.2.

4.7 (i) To be a legitimate pdf, the given function, f(x), must satisfy the following condition:

∫_0^1 cx dx = 1


[Plots of f(x) and F (x) omitted.]

Figure 4.2: The pdf and cdf for Exercise 4.6.


Upon carrying out the indicated integration, we obtain:

(cx²/2)|_0^1 = 1

from which we determine easily that c = 2, so that the complete pdf is now given by:

f(x) = 2x, 0 < x < 1; 0, otherwise

From here, we obtain the required cdf as:

F (x) = ∫_0^x 2u du = x², 0 < x < 1

(with F (x) = 0 for x ≤ 0 and F (x) = 1 for x ≥ 1).

(ii) P (X ≤ 1/2) = F (1/2) = 1/4; and P (X ≥ 1/2) = 1 − F (1/2) = 3/4.
(iii) The required value of xm such that P (X ≤ xm) = P (X ≥ xm) is obtained from

F (xm) = 1 − F (xm) ⇒ F (xm) = 1/2

which implies that

F (xm) = xm² = 1/2 ⇒ xm = 1/√2 = 0.707
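The values of c, F (1/2), and the median xm can be checked numerically; this sketch (illustrative only) uses a simple midpoint-rule integrator, which is exact for the linear f(x) = 2x:

```python
from math import sqrt

# Midpoint-rule integration of f over [a, b].
def integral(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x
print(round(integral(f, 0, 1), 6))            # total probability: 1.0
print(round(integral(f, 0, 0.5), 6))          # P(X <= 1/2) = 0.25
print(round(integral(f, 0, 1 / sqrt(2)), 6))  # P(X <= x_m) = 0.5
```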

4.8 The pdf in question, from Eq (4.41) in the text, is:

f(x) = (1/τ)e^{−x/τ} ; 0 < x < ∞

from where the cdf, F (x), is obtained as:

F (x) = ∫_0^x (1/τ)e^{−t/τ} dt = 1 − e^{−x/τ}

Specifically for τ = 30, the required probabilities are now obtained from here as follows.
(i) P (X < 30) = F (30) = 1 − e^{−30/30} = 0.632
(ii) P (X > 30) = 1 − F (30) = 0.368
(iii) P (X < 30 ln 2) = F (30 ln 2) = 1 − e^{−ln 2} = 1 − 1/2 = 1/2
(iv) P (X > 30 ln 2) = 1 − F (30 ln 2) = 1/2
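These probabilities follow directly from the cdf; a short illustrative sketch with τ = 30:

```python
from math import exp, log

tau = 30.0
F = lambda x: 1 - exp(-x / tau)  # cdf of the residence time

print(round(F(30), 3))           # P(X < 30) = 0.632
print(round(1 - F(30), 3))       # P(X > 30) = 0.368
print(round(F(30 * log(2)), 3))  # P(X < 30 ln 2) = 0.5
```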

Section 4.3

4.9 E(X) for the discrete random variable, X, of Exercise 4.5 is given by:

E(X) = ∑_i xi f(xi) = (1 × 0.1) + (2 × 0.25) + (3 × 0.3) + (4 × 0.25) + (5 × 0.1) = 3.0


On the other hand, for the continuous random variable in Exercise 4.7,

E(X) = ∫ x f(x) dx = ∫_0^1 x(2x) dx = (2x³/3)|_0^1 = 2/3

Finally, for the residence time distribution in Eq (4.41), the expected value is given by:

E(X) = (1/τ) ∫_0^∞ x e^{−x/τ} dx

Now, via direct integration by parts, letting u = x and dv = e^{−x/τ} dx, we obtain:

E(X) = (1/τ)[(−xτe^{−x/τ})|_0^∞ + ∫_0^∞ τe^{−x/τ} dx]
= (1/τ)(0 + τ²)
= τ

as required. Thus, τ is the expected (or mean) value of the residence time for the single CSTR whose residence time, x, follows the distribution given in Eq (4.41).

4.10 For the pdf given in Eq (4.140), the absolute convergence condition for the existence of the expected value requires that:

∑_{x=1}^∞ |x|f(x) = 4 ∑_{x=1}^∞ 1/[(x + 1)(x + 2)] < ∞

By partial fraction expansion, we obtain, for the right hand side sum:

4 ∑_{x=1}^∞ 1/[(x + 1)(x + 2)] = 4 ∑_{x=1}^∞ [1/(x + 1) − 1/(x + 2)]
= 4 [(1/2 − 1/3) + (1/3 − 1/4) + (1/4 − 1/5) + · · ·]
= lim_{n→∞} [2 − 4/(n + 2)] = 2

Hence the expected value exists.

On the other hand, for the pdf given in Eq (4.141), the absolute convergence condition for the existence of the expected value requires that:

∑_{x=1}^∞ |x|f(x) = ∑_{x=1}^∞ 1/(x + 1) < ∞

but this sum is not finite; the expected value therefore does not exist for this pdf.

4.11 The expression:

∑_{x=1}^∞ p(1 − p)^{x−1} = 1


arises from the property of the discrete random variable pdf that ∑ f(x) = 1, since in this case

f(x) = p(1 − p)^{x−1} ; x = 1, 2, 3, . . .

By differentiating with respect to p on both sides of the summation above, we obtain:

∑_{x=1}^∞ (1 − p)^{x−1} − ∑_{x=1}^∞ p(x − 1)(1 − p)^{x−2} = 0

Upon factoring out (1 − p)^{−1} in the second term, we obtain, after expanding the (x − 1) term, that:

∑_{x=1}^∞ (1 − p)^{x−1} − [1/(1 − p)] ∑_{x=1}^∞ p x(1 − p)^{x−1} + [p/(1 − p)] ∑_{x=1}^∞ (1 − p)^{x−1} = 0

which is easily consolidated by combining the first and third terms to obtain:

[1/(1 − p)] ∑_{x=1}^∞ (1 − p)^{x−1} − [1/(1 − p)] ∑_{x=1}^∞ p x(1 − p)^{x−1} = 0

which, for p ≠ 1, simplifies to:

∑_{x=1}^∞ p x(1 − p)^{x−1} = ∑_{x=1}^∞ (1 − p)^{x−1}

The LHS is the definition of the expected value of the random variable, X, whose pdf is as given in Eq (4.142); the RHS is the infinite series 1 + q + q² + · · ·, with q = 1 − p, a series that converges to 1/(1 − q), or 1/p; hence,

∑_{x=1}^∞ p x(1 − p)^{x−1} = 1/p

establishing the required result.
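The result E(X) = 1/p can be checked numerically by truncating the infinite sum; an illustrative sketch with p = 0.3:

```python
# Partial sum of E(X) = sum over x of x * p * (1-p)^(x-1); the tail beyond
# x = 1000 is negligible, so the truncated sum is effectively 1/p.
p = 0.3
expected = sum(x * p * (1 - p) ** (x - 1) for x in range(1, 1000))
print(round(expected, 6))  # 3.333333, i.e., 1/p
```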

4.12 From the definition of the mathematical expectation function, E(·), for the discrete random variable,

E[k1g1(X) + k2g2(X)] = ∑_i [k1g1(xi) + k2g2(xi)] f(xi)
= ∑_i k1g1(xi)f(xi) + ∑_i k2g2(xi)f(xi)
= k1 ∑_i g1(xi)f(xi) + k2 ∑_i g2(xi)f(xi)
= k1E[g1(X)] + k2E[g2(X)]


Similarly, for the continuous random variable,

E[k1g1(X) + k2g2(X)] = ∫_{−∞}^{∞} [k1g1(x) + k2g2(x)] f(x) dx
= ∫_{−∞}^{∞} k1g1(x)f(x) dx + ∫_{−∞}^{∞} k2g2(x)f(x) dx
= k1 ∫_{−∞}^{∞} g1(x)f(x) dx + k2 ∫_{−∞}^{∞} g2(x)f(x) dx
= k1E[g1(X)] + k2E[g2(X)]

as required.

Now, given E(X) = µ, then

E[(X − µ)³] = E[X³ − 3X²µ + 3Xµ² − µ³]

and, by virtue of E[.] being a linear operator, we obtain from here:

E[(X − µ)³] = E(X³) − 3µE(X²) + 3µ²E(X) − µ³
= E(X³) − 3µE(X²) + 2µ³ (4.1)

since µ is a constant, and E(X) = µ. Now, by definition,

σ² = V ar(X) = E[(X − µ)²]

which expands out and simplifies to give:

E[(X − µ)²] = E(X²) − 2E(X)µ + µ² = E(X²) − µ² (4.2)

From here, by substituting Eq (4.2) into Eq (4.1), i.e., E(X²) = σ² + µ², we obtain:

E[(X − µ)³] = E(X³) − 3µ(σ² + µ²) + 2µ³
= E(X³) − 3µσ² − µ³

as required.

Section 4.4

4.13 Given two random variables, X and Y , and a third random variable defined as

Z = X − Y

let f1(x) and f2(y) be the individual pdfs for the respective random variables, X and Y ; further, let f(x, y) be the joint distribution of how x and y vary jointly (see Chapter 5). By definition (again, see Chapter 5),

f1(x) = ∑_y f(x, y) (discrete case); f1(x) = ∫_y f(x, y) dy (continuous case) (4.3)


Similarly,

f2(y) = ∑_x f(x, y) (discrete case); f2(y) = ∫_x f(x, y) dx (continuous case) (4.4)

By definition, therefore, the required expectations are obtained as:

E(Z) = E(X − Y ) = ∑_x ∑_y (x − y)f(x, y) (discrete case); E(Z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − y)f(x, y) dx dy (continuous case) (4.5)

First, for the discrete case, we obtain, from Eq (4.5), that:

E(Z) = ∑_x ∑_y x f(x, y) − ∑_x ∑_y y f(x, y)
= ∑_x x (∑_y f(x, y)) − ∑_y y (∑_x f(x, y))

and now from Eqs (4.3) and (4.4), we obtain:

E(Z) = ∑_x x f1(x) − ∑_y y f2(y) = E(X) − E(Y )

as required.

Similarly for the continuous case, from Eq (4.5), we obtain:

E(Z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f(x, y) dx dy − ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dx dy
= ∫_{−∞}^{∞} x [∫_{−∞}^{∞} f(x, y) dy] dx − ∫_{−∞}^{∞} y [∫_{−∞}^{∞} f(x, y) dx] dy

so that, once again, from Eqs (4.3) and (4.4), we obtain:

E(Z) = ∫_{−∞}^{∞} x f1(x) dx − ∫_{−∞}^{∞} y f2(y) dy = E(X) − E(Y )

as required.

Now, if E(Z) = µZ , then, by definition,

V ar(Z) = E[(Z − µZ)²] = E(Z²) − µZ²

Upon substituting Eqs (4.146) and (4.147) of the text respectively for Z and µZ , we obtain:

V ar(Z) = E[(X − Y )²] − (µX − µY )²
= E(X² − 2XY + Y ²) − (µX² − 2µX µY + µY ²)


which, by the linearity of the expectation operator, is consolidated to yield:

V ar(Z) = [E(X²) − µX²] + [E(Y ²) − µY ²] − 2[E(XY ) − µX µY ] (4.6)

And now, the condition that E[(X − µX)(Y − µY )] = 0, upon expansion and application of the linearity of the expectation operator, translates to:

E(XY ) = µXµY

with the immediate implication that Eq (4.6) reduces to:

V ar(Z) = [E(X²) − µX²] + [E(Y ²) − µY ²] = V ar(X) + V ar(Y ) (4.7)

as required.

4.14 (i) From the given pdf, ∑_{x=0}^∞ f(x) is given by:

∑_{x=0}^∞ f(x) = ∑_{x=0}^∞ λ^x e^{−λ}/x! = e^{−λ} ∑_{x=0}^∞ λ^x/x! = e^{−λ}e^{λ} = 1

where we have used the result that ∑_{x=0}^∞ λ^x/x! is the infinite series expansion of e^λ.

(ii) E(X) is given by:

E(X) = ∑_{x=0}^∞ x f(x) = ∑_{x=0}^∞ x λ^x e^{−λ}/x! = e^{−λ} ∑_{x=0}^∞ x λ^x/x!

Now, because the RHS vanishes for x = 0, this expression may be rearranged to yield:

E(X) = λe^{−λ} ∑_{x=1}^∞ λ^{x−1}/(x − 1)! = λe^{−λ} ∑_{y=0}^∞ λ^y/y! = λe^{−λ}e^{λ} = λ

as required. Finally,
(iii) Because V ar(X) = E(X²) − µ², we may invoke the result in (ii), that µ, the mean, is λ, to obtain:

V ar(X) = E(X²) − λ² (4.8)


Now, E(X²) is given, by definition, as:

E(X²) = ∑_{x=0}^∞ x² f(x) = e^{−λ} ∑_{x=0}^∞ x² λ^x/x! = e^{−λ} ∑_{x=0}^∞ x λ^x/(x − 1)!

Once again, since the term corresponding to x = 0 vanishes, this expression may be rearranged as:

E(X²) = λe^{−λ} ∑_{x=1}^∞ x λ^{x−1}/(x − 1)!

from where the change of variable, y = x − 1, yields,

E(X²) = λe^{−λ} ∑_{y=0}^∞ (y + 1)λ^y/y! = λe^{−λ} (∑_{y=0}^∞ y λ^y/y! + ∑_{y=0}^∞ λ^y/y!)

This expression can be simplified using two earlier results,

∑_{y=0}^∞ e^{−λ}λ^y/y! = 1 and ∑_{y=0}^∞ y e^{−λ}λ^y/y! = λ

to obtain:

E(X²) = λ(λ + 1)

which, when introduced into Eq (4.8), yields:

V ar(X) = λ² + λ − λ² = λ

as required.

4.15 First, we obtain from the pdf in Exercise 4.5 that µ, the expected (mean) value, is:

µ = ∑_i xi f(xi) = 0.1 + 0.5 + 0.9 + 1.0 + 0.5 = 3.0

Hence, by definition of variance, with µ = 3, we obtain:

σ² = ∑_i (xi − µ)² f(xi) = 0.40 + 0.25 + 0.00 + 0.25 + 0.40 = 1.30


And for the skewness,

µ3 = ∑_i (xi − µ)³ f(xi) = −0.80 − 0.25 + 0.00 + 0.25 + 0.80 = 0.00

For the continuous random variable in Exercise 4.7, since the pdf is given by

f(x) = 2x, 0 < x < 1; 0, otherwise

the expected value, µ, is obtained as:

µ = ∫_{−∞}^{∞} x f(x) dx = ∫_0^1 2x² dx = 2/3

Thus, in this case, the variance is obtained as:

σ² = ∫_0^1 (x − µ)² 2x dx = ∫_0^1 [2x³ − (8x²/3) + (8x/9)] dx = 1/18

and the skewness,

µ3 = ∫_0^1 (x − µ)³ 2x dx = ∫_0^1 (2x⁴ − 6x³µ + 6x²µ² − 2xµ³) dx = −1/135

a small, but negative, number.

Thus, the discrete random variable, with skewness = 0, is symmetric, while the continuous random variable, whose skewness is −1/135, is negatively skewed, even if only slightly. These facts are also obvious by inspection of the pdfs.
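The moments computed above can be checked numerically; the continuous integrals are approximated here with a midpoint rule (an illustrative sketch only):

```python
# Discrete pdf from Exercise 4.5.
xs = [1, 2, 3, 4, 5]
fs = [0.10, 0.25, 0.30, 0.25, 0.10]

mu = sum(x * f for x, f in zip(xs, fs))
var = sum((x - mu) ** 2 * f for x, f in zip(xs, fs))
skew = sum((x - mu) ** 3 * f for x, f in zip(xs, fs))
print(round(mu, 6), round(var, 6))  # mean 3.0, variance 1.3; skewness is 0

# Continuous pdf f(x) = 2x on (0, 1), via midpoint-rule integration.
n = 200000
h = 1.0 / n
mids = [(i + 0.5) * h for i in range(n)]
mu_c = sum(x * 2 * x for x in mids) * h
skew_c = sum((x - mu_c) ** 3 * 2 * x for x in mids) * h
print(round(mu_c, 4), round(skew_c, 6))  # 0.6667 -0.007407 (= -1/135)
```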

4.16 Given the linear transformation in Eq (4.94), i.e.,

Y = aX + b

by definition of the MGF, we have that:

M_Y(t) = E(e^{tY}) = E[e^{t(aX+b)}] = E(e^{atX} e^{bt})

and since a and b are constants, this reduces to:

M_Y(t) = e^{bt} E(e^{atX}) = e^{bt} M_X(at)

as required. On the other hand, for the independent sum, Z = X + Y, again by definition,

M_Z(t) = E(e^{tZ}) = E[e^{t(X+Y)}] = E(e^{tX} e^{tY})

and now, by independence (see Chapter 5), the expectation of the product on the RHS is the product of expectations (by virtue of the relationship between joint pdfs and the contributing marginal pdfs for independent random variables; see Chapter 5). Consequently,

M_Z(t) = E(e^{tX}) E(e^{tY}) = M_X(t) M_Y(t)

as required.

4.17 (i) From the given pdf and the definition of the MGF, we have that the required MGF is:

M_X(t) = \int_0^{\infty} e^{tx} \frac{1}{\tau^2} x e^{-x/\tau}\,dx = \frac{1}{\tau^2} \int_0^{\infty} x e^{-(1-\tau t)x/\tau}\,dx

From here, via integration by parts, and upon simplification, we obtain:

M_X(t) = \frac{1}{\tau(1-\tau t)} \int_0^{\infty} e^{-(1-\tau t)x/\tau}\,dx = \frac{1}{\tau(1-\tau t)} \left( \frac{\tau}{1-\tau t} \right) = \frac{1}{(1-\tau t)^2}    (4.9)

The expression for the single CSTR’s MGF, given in Eq (4.99) in the text, is:

M_X(t) = \frac{1}{(1-\tau t)}

As such, the expression in Eq (4.9) above (for the MGF of the residence time distribution of two identical CSTRs in series) is seen to be the square of the corresponding MGF for the single CSTR. From here, one can conjecture that the MGF for the distribution of residence times for n identical CSTRs in series will be

M_X(t) = \frac{1}{(1-\tau t)^n}

which is, in fact, shown to be correct in Chapter 9 (see Eqs (9.8) and (9.33)).
(ii) By definition of the characteristic function,

\varphi_X(t) = E(e^{jtX})

using precisely the same procedure as in Example 4.7 in the text, and in (i) above, it is straightforward to obtain the required characteristic functions as follows:

\varphi_{X_1}(t) = \frac{1}{(1-j\tau t)}; \qquad \varphi_{X_2}(t) = \frac{1}{(1-j\tau t)^2}


where the subscripts have been introduced to indicate how many CSTRs are in the ensemble. The conjectured characteristic function for n identical CSTRs in series should therefore be:

\varphi_{X_n}(t) = \frac{1}{(1-j\tau t)^n}

which is confirmed in Chapter 9, Eq (9.34).
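Eq (4.9) can also be checked by numerically integrating E[e^{tX}] for the two-CSTR pdf; this sketch uses arbitrary values τ = 2 and t = 0.1 (any t < 1/τ works) and is not part of the original solution.

```python
import math

# Midpoint-rule evaluation of the MGF integral for f(x) = (1/tau^2) x e^{-x/tau},
# compared against the closed form 1/(1 - tau*t)^2 of Eq (4.9).
tau, t = 2.0, 0.1
n, upper = 200_000, 200.0   # grid size and truncation point of the integral
h = upper / n

def integrand(x):
    return math.exp(t * x) * (x / tau**2) * math.exp(-x / tau)

mgf = h * sum(integrand((i + 0.5) * h) for i in range(n))
assert abs(mgf - 1.0 / (1.0 - tau * t) ** 2) < 1e-4
```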

4.18 (i) From the given definition of the “psi-function” as:

ψ(t) = ln M(t)

differentiating once with respect to t gives:

ψ'(t) = \frac{M'(t)}{M(t)}

so that in particular, for t = 0, we have:

ψ'(0) = \frac{M'(0)}{M(0)}

but by definition M(0) = 1 and M ′(0) = µ, hence:

ψ′(0) = µ

By differentiating once more with respect to t, we obtain:

ψ''(t) = \frac{M(t)M''(t) − [M'(t)]^2}{[M(t)]^2}

and for t = 0, with M(0) = 1, we obtain:

ψ''(0) = M''(0) − [M'(0)]^2

and since, by definition, M'(0) = µ and M''(0) = E[X^2], we immediately obtain:

ψ''(0) = E[X^2] − µ^2 = σ^2

as required.
(ii) By definition of M(t) as E[e^{tX}], we have, for the given pdf, that:

M(t) = \sum_{x=0}^{\infty} e^{tx} \frac{\lambda^x e^{-\lambda}}{x!} = \sum_{x=0}^{\infty} \frac{(\lambda e^t)^x e^{-\lambda}}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \frac{(\lambda e^t)^x}{x!}


and now, since:

\sum_{x=0}^{\infty} \frac{\lambda^x}{x!} = e^{\lambda}

then:

\sum_{x=0}^{\infty} \frac{(\lambda e^t)^x}{x!} = e^{\lambda e^t}

so that:

M(t) = e^{-\lambda} e^{\lambda e^t} = e^{\lambda(e^t - 1)}

finally giving:

ψ(t) = λ(e^t − 1)

From here, we obtain:

ψ'(t) = λe^t ⇒ ψ'(0) = µ = λ

and also that:

ψ''(t) = λe^t ⇒ ψ''(0) = σ^2 = λ

as required. As is shown in Chapter 8, the given pdf is that of a Poisson random variable; its mean and variance are identical, as shown here.
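A finite-difference check of ψ'(0) = µ and ψ''(0) = σ² for the Poisson case; λ = 2.5 and the step size d are arbitrary choices in this sketch, which is not part of the original solution.

```python
import math

# psi(t) = ln M(t) with M(t) = exp(lam*(e^t - 1)); central differences at
# t = 0 should recover psi'(0) = mu = lam and psi''(0) = sigma^2 = lam.
lam, d = 2.5, 1e-4

def psi(t):
    return math.log(math.exp(lam * (math.exp(t) - 1.0)))

d1 = (psi(d) - psi(-d)) / (2 * d)                # ~ psi'(0)
d2 = (psi(d) - 2 * psi(0.0) + psi(-d)) / d**2    # ~ psi''(0)

assert abs(d1 - lam) < 1e-6
assert abs(d2 - lam) < 1e-3
```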

4.19 By definition, the mode is the value of y, say y*, where the continuous pdf, f(y), is maximized. In this specific case, we employ the usual calculus procedure to determine where the pdf is maximized, as follows. Upon differentiating the given pdf once with respect to y, we obtain:

f'(y) = −\frac{C}{σ^2} (y − µ) e^{-\frac{(y-µ)^2}{2σ^2}}

with C as the normalizing constant, \frac{1}{σ\sqrt{2π}}. Equating to zero and solving for y yields:

y* = µ

and a second derivative confirms that this is indeed a maximum, establishing that µ is the mode.

Next, as stated in the problem, the function is clearly symmetric around y = µ, implying that:

\int_{-\infty}^{µ} f(y)\,dy = \int_{µ}^{\infty} f(y)\,dy

but by definition of the median, y_m, it is also true that

\int_{-\infty}^{y_m} f(y)\,dy = \int_{y_m}^{\infty} f(y)\,dy

from where we establish that y_m = µ. Thus, for this pdf, the mean, mode, and median coincide. (See Chapter 9.)


4.20 The mode of the given pdf is obtained via calculus as follows. Upon differentiating the pdf and setting the result to zero, we obtain:

f'(x) = \frac{1}{π} \cdot \frac{-2x}{(1 + x^2)^2} = 0

which, for (1 + x^2) ≠ 0, is solved for x to yield

x∗ = 0

as the mode. Next, observe that the given pdf is symmetric about x = 0, since:

f(x) = f(−x)

By this symmetry, the median is immediately obtained as x_m = 0. Thus, the mode and median are seen to coincide.

For extra credit: To establish that µ = E(X) does not exist, we investigate the absolute integrability condition; i.e., we check whether

\int_{-\infty}^{\infty} |x| f(x)\,dx = \frac{1}{π} \int_{-\infty}^{\infty} \frac{|x|}{1 + x^2}\,dx

is finite. Because of symmetry about x = 0, we may rewrite this as follows:

\frac{1}{π} \int_{-\infty}^{\infty} \frac{|x|}{1 + x^2}\,dx = \frac{1}{π} \int_0^{\infty} \frac{2x}{1 + x^2}\,dx

and the change of variables, u = 1 + x^2,

allows us to simplify the integral to yield:

\frac{1}{π} \int_0^{\infty} \frac{2x}{1 + x^2}\,dx = \frac{1}{π} \int_1^{\infty} \frac{1}{u}\,du = \frac{1}{π} \left( \ln u \big|_1^{\infty} \right) = \frac{1}{π} \lim_{u \to \infty} (\ln u)

which is not finite. Hence, the expectation does not exist for this pdf.
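The divergence is easy to see numerically: the partial integral up to a finite upper limit b equals ln(1 + b²)/π, which keeps growing (roughly as 2 ln b/π) as b increases. A small sketch, not part of the original solution:

```python
import math

# Partial integrals (1/pi) * int_0^b 2x/(1+x^2) dx = ln(1 + b^2)/pi for
# increasing b: strictly increasing, with no sign of convergence.
partials = [math.log(1 + b * b) / math.pi for b in (10.0, 1e3, 1e6, 1e12)]

assert all(a < b for a, b in zip(partials, partials[1:]))
assert partials[-1] > 17.0   # already ~ 2*ln(1e12)/pi = 17.6
```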

4.21 The median, x_m, is obtained from

\int_0^{x_m} x\,dx = \frac{1}{2}

or,

\frac{x^2}{2} \Big|_0^{x_m} = \frac{1}{2} ⇒ x_m = 1


Similarly, the first quartile, x_{q_1}, is obtained from:

\int_0^{x_{q_1}} x\,dx = \frac{1}{4}

which simplifies to give:

x_{q_1} = \frac{\sqrt{2}}{2} = 0.707

and the third quartile, x_{q_3}, is obtained from:

\int_0^{x_{q_3}} x\,dx = \frac{3}{4}

which simplifies to give:

x_{q_3} = \sqrt{\frac{3}{2}} = 1.225
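Since F(x) = x²/2 here (the pdf f(x) = x on 0 < x < √2 is implied by the integrals above), the p-th quantile is simply √(2p); a one-line check, not part of the original solution:

```python
import math

# Quantile function for F(x) = x^2/2 on 0 < x < sqrt(2): x_p = sqrt(2p).
def quantile(p):
    return math.sqrt(2.0 * p)

assert abs(quantile(0.50) - 1.0) < 1e-12      # median
assert abs(quantile(0.25) - 0.7071) < 1e-4    # first quartile, sqrt(2)/2
assert abs(quantile(0.75) - 1.2247) < 1e-4    # third quartile, sqrt(3/2)
```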

4.22 By definition, the entropy for this random variable is

H(X) = −(1 − p) \log_2(1 − p) − p \log_2 p    (4.10)

The maximum of this expression as a function of p may be determined via the usual calculus route, by differentiating with respect to p and equating to zero, i.e.:

\frac{dH(X)}{dp} = \log_2(1 − p) + c − \log_2 p − c = 0

where c = 1/\ln 2 (since \log_2 y = \ln y / \ln 2). This expression simplifies to yield:

\log_2 \left( \frac{1 − p}{p} \right) = 0 ⇒ \frac{1 − p}{p} = 1

which, when solved for p, gives the required result:

p = 0.5

(A second derivative establishes that this is indeed a maximum.) Introducing this into Eq (4.10) above yields:

H*(X) = −0.5 \log_2 0.5 − 0.5 \log_2 0.5 = −\log_2 0.5 = 1

Thus, the entropy of this binary random variable is maximized when p = 0.5 (i.e., for equiprobable outcomes); the maximum entropy attained at this value of p is 1.
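A grid search over p confirms the calculus result; this numerical sketch is not part of the original solution.

```python
import math

# Binary entropy H(p) = -(1-p)log2(1-p) - p*log2(p): maximized at p = 0.5,
# where H = 1 bit.
def H(p):
    return -(1 - p) * math.log2(1 - p) - p * math.log2(p)

grid = [i / 1000 for i in range(1, 1000)]
p_star = max(grid, key=H)

assert abs(p_star - 0.5) < 1e-9
assert abs(H(0.5) - 1.0) < 1e-12
```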

Section 4.5
4.23 From Eq (4.128) in the text, we know that S(x), the survival function for the random variable, X, the residence time in a CSTR, is:

S(x) = e^{-x/τ}, or S(x) = e^{-ηx}


where η = 1/τ. And now, from Eq (4.136), relating the cumulative hazard function, H(x), to S(x), i.e.,

H(x) = −\log[S(x)]

we obtain immediately that for this specific random variable,

H(x) = ηx

as required.
Thus, for a related random variable, Y, whose cumulative hazard function is given by

H(y) = (ηy)^ζ

where ζ is a constant parameter, using Eq (4.136) again, we obtain the corresponding survival function immediately as

S(y) = e^{-(ηy)^ζ}

and now, since S(y) = 1 − F(y), we obtain the cdf, F(y), as

F(y) = 1 − e^{-(ηy)^ζ}

From here, by differentiating once with respect to y, we obtain the required pdf, f(y), as:

f(y) = ηζ(ηy)^{ζ−1} e^{-(ηy)^ζ}

4.24 From the given pdf, we obtain the cdf, F (x), as:

F(x) = \int_0^x \frac{1}{\tau^2} u e^{-u/\tau}\,du

where the usual technique of integration by parts yields, upon some rearrangement,

F(x) = 1 − e^{-x/τ} − \frac{x}{τ} e^{-x/τ}

from where we immediately obtain

S(x) = 1 − F(x) = e^{-x/τ} + \frac{x}{τ} e^{-x/τ} = e^{-x/τ} \left( 1 + \frac{x}{τ} \right) = e^{-x/τ} \left( \frac{τ + x}{τ} \right)

This is to be compared with the corresponding expression for the single CSTR, i.e.,

S_1(x) = e^{-x/τ}

with the difference being the additional multiplicative term, (τ + x)/τ, in the expression for two CSTRs.


The hazard function is obtained from:

h(x) = \frac{f(x)}{S(x)} = \frac{\frac{1}{τ^2} x e^{-x/τ}}{e^{-x/τ} + \frac{x}{τ} e^{-x/τ}}

which simplifies to give:

h(x) = \frac{1}{τ} \left( \frac{x}{τ + x} \right)

This is to be compared with the corresponding expression for the single CSTR, i.e.,

h_1(x) = \frac{1}{τ}

again, with the difference being the additional multiplicative term, x/(τ + x), in the expression for two CSTRs.
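A quick numerical check (τ = 2 is an arbitrary choice, and this sketch is not part of the original solution) that f(x)/S(x) reduces to x/(τ(τ + x)), and that the two-CSTR hazard rate always stays below the single-CSTR rate 1/τ:

```python
import math

# Two-CSTR case: f(x) = (x/tau^2) e^{-x/tau}, S(x) = e^{-x/tau}(tau + x)/tau.
tau = 2.0

def f(x):
    return (x / tau**2) * math.exp(-x / tau)

def S(x):
    return math.exp(-x / tau) * (tau + x) / tau

for x in (0.1, 1.0, 5.0, 20.0):
    h = f(x) / S(x)
    assert abs(h - x / (tau * (tau + x))) < 1e-12  # h(x) = x/(tau*(tau + x))
    assert h < 1.0 / tau                           # below the single-CSTR rate
```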

Application Problems

4.25 (i) Let A represent the outcome that a batch of resins is acceptable, and U, that it is unacceptable. Since a total of 4 independently manufactured batches are sent by the supplier weekly, the sample space, Ω, is clearly given by:

Ω = {AAAA, AAAU, AAUA, AUAA, AAUU, AUAU, AUUA, AUUU, UAAA, UAAU, UAUA, UUAA, UAUU, UUAU, UUUA, UUUU}

The set has a total of 16 elements, ω_i; i = 1, 2, . . . , 16, in the order presented above, with ω_1 = AAAA, ω_2 = AAAU, . . . , ω_16 = UUUU.

By defining X as the total number of acceptable batches per week, we obtain the random variable space as:

V = {0, 1, 2, 3, 4}

(ii) Upon assuming equal probability of acceptance and rejection, i.e., P(A) = P(U) = 1/2, the following pdf, f(x), is obtained quite straightforwardly from the pre-images of the indicated values of X in Ω:

X    f(x)
0    1/16
1    4/16
2    6/16
3    4/16
4    1/16

From the conditions given for the supplier's profitability, the required probability (that the supplier will remain profitable) is obtained as:

P (X ≥ 3) = f(3) + f(4) = 5/16 = 0.3125


which is quite low, just barely above 0.3.
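Parts (i) and (ii) can be reproduced by brute-force enumeration of the 16 outcomes; a sketch, not part of the original solution:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 4-batch shipments; X = number of acceptable ('A') batches.
outcomes = list(product("AU", repeat=4))
assert len(outcomes) == 16

pdf = {x: Fraction(sum(1 for w in outcomes if w.count("A") == x), 16)
       for x in range(5)}
assert pdf == {0: Fraction(1, 16), 1: Fraction(4, 16), 2: Fraction(6, 16),
               3: Fraction(4, 16), 4: Fraction(1, 16)}

# Probability the supplier remains profitable: P(X >= 3) = 5/16.
assert pdf[3] + pdf[4] == Fraction(5, 16)
```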

4.26 With P(A) = 0.8 (so that P(U) = 0.2), we find that
P(X = 0) = P(UUUU) = 0.2^4 = 0.0016. Similarly,
P(X = 1) = 4 × 0.2^3 × 0.8 = 0.0256
P(X = 2) = 6 × 0.2^2 × 0.8^2 = 0.1536
P(X = 3) = 4 × 0.2 × 0.8^3 = 0.4096
P(X = 4) = 0.8^4 = 0.4096
Thus, the new pdf is:

X    f(x)
0    0.0016
1    0.0256
2    0.1536
3    0.4096
4    0.4096
TOTAL    1.0000

Now, let R_W be the weekly revenue realized by the supplier. From the given information, we observe that this random variable is related to the random variable X according to:

R_W = 20,000X − 8,000(4 − X) = 28,000X − 32,000

This is because, with X as the number of acceptable batches (for which the revenue is $20,000 per batch), the total number of unacceptable batches in each weekly shipment will be (4 − X), for which the supplier incurs a loss of $8,000 per batch. As a result, the expected revenue is obtained as:

E(R_W) = 28,000 E(X) − 32,000

And now, E(X) is obtained directly from the new pdf as:

E(X) = \sum_x x f(x) = 0.0256 + 2 × 0.1536 + 3 × 0.4096 + 4 × 0.4096 = 3.2

As a result,

E(R_W) = 28,000 × 3.2 − 32,000 = $57,600
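The binomial pdf with P(A) = 0.8 and the resulting expected revenue can be checked directly; a sketch, not part of the original solution:

```python
from math import comb

# pdf of X ~ Binomial(4, 0.8) and the expected weekly revenue
# E(R_W) = 28,000 E(X) - 32,000.
p = 0.8
pdf = {x: comb(4, x) * p**x * (1 - p)**(4 - x) for x in range(5)}
assert abs(pdf[3] - 0.4096) < 1e-12 and abs(pdf[4] - 0.4096) < 1e-12

ex = sum(x * fx for x, fx in pdf.items())
revenue = 28_000 * ex - 32_000
assert abs(ex - 3.2) < 1e-12
assert abs(revenue - 57_600) < 1e-6
```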

4.27 (i) From the given pdf, we see that

f(∞) = \left( 1 − \frac{η}{ζ} \right) \left( \frac{η}{ζ} \right)^{\infty}

Now, so long as η < ζ, it will be true that 0 < η/ζ < 1; and since, for all r such that 0 < r < 1, it is also true that

\lim_{n \to \infty} r^n = 0


then we immediately obtain that for this given pdf, under the stipulated condition, η < ζ,

\lim_{x \to \infty} f(x) = 0

establishing the required result: the probability that the line at the gas station will be infinitely long is zero.
(ii) Let q = η/ζ; then the pdf is given by:

f(x) = (1 − q)q^x

and the expected value is given by:

E(X) = (1 − q) \sum_{x=0}^{\infty} x q^x = \frac{q}{1 − q} = \frac{η}{ζ − η}

Given the value η = 3, and for E(X) = 2, we are able to solve this equation for ζ to obtain:

ζ = 4.5 cars/hour

(iii) The required probability, P (X > 2), is obtained from:

P(X > 2) = 1 − P(X ≤ 2) = 1 − [f(0) + f(1) + f(2)]

In this case, with η/ζ = 3/4.5 = 2/3, we obtain:

f(x) = \left( \frac{1}{3} \right) \left( \frac{2}{3} \right)^x

so that f(0) = 1/3; f(1) = 2/9; f(2) = 4/27; hence, the required probability, that there are more than two cars at the station, is obtained as

P (X > 2) = 8/27

The probability that there are no cars, f(0), is 1/3.
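Parts (ii) and (iii) are straightforward to verify with exact rational arithmetic for q = η/ζ = 2/3; a sketch, not part of the original solution:

```python
from fractions import Fraction

# Geometric-type pdf f(x) = (1 - q) q^x with q = 2/3.
q = Fraction(2, 3)
f = lambda x: (1 - q) * q**x

assert f(0) + f(1) + f(2) == Fraction(19, 27)
assert 1 - (f(0) + f(1) + f(2)) == Fraction(8, 27)   # P(X > 2)

# E(X) = q/(1 - q) = 2, via a long partial sum.
ex = float(sum(x * f(x) for x in range(200)))
assert abs(ex - 2.0) < 1e-9
```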

4.28 (i) The histogram is plotted in Fig 4.3; it indicates a distribution that is skewed to the right, as is typical of income distributions.
(ii) From the data, the mean is obtained as:

\bar{x} = \sum_i x_i f(x_i) = (2.5 × 0.04) + (7.5 × 0.13) + · · · + (57.5 × 0.01) = $20.7 (×10^3)

Next, we observe from the data table that, cumulatively, 34% of the population have incomes up to $15,000, and up to $20,000 for 54% of the population. The 50% mark therefore lies somewhere in the [15–20] income group. Since the


Figure 4.3: Histogram for US family income in 1979.

midpoint of this group is 17.5, we take this value as the median, since no other additional information is available. Thus, the median, x_m, is given by:

xm = 17.5

From the histogram, or directly from the data table, we see that the [15–20] group (with a mid-point of 17.5) is the “most popular” group, with 20% of the population; the mode, x*, is therefore determined as:

x∗ = 17.5

Next, the variance and skewness are obtained as follows.

σ^2 = \sum_i (x_i − \bar{x})^2 f(x_i) = 128.76 ⇒ σ = 11.35

and,

µ_3 = E(X − µ)^3 = \sum_i (x_i − \bar{x})^3 f(x_i) = 1218.64

so that the coefficient of skewness, γ3, is obtained as:

γ_3 = \frac{µ_3}{σ^3} = 0.8341

implying that the distribution is positively skewed (as is also evident from the shape of the histogram).
(iii) Let L, M, and U represent the outcome that a single individual selected from the population is in the “Lower Class,” the “Middle Class,” and the “Upper Class,” respectively. Then, the following probabilities are easily determined


directly from the frequency table and the given income ranges that constitute each group classification:

P(L) = 0.34
P(M) = 0.64
P(U) = 0.02

From here, we obtain that:
(a) P(L, L) = P(L)P(L) = 0.1156
(b) P(M, M) = P(M)P(M) = 0.4096
(c) P(M, U) = P(M)P(U) = 0.0128
(d) P(U, U) = P(U)P(U) = 0.0004

(iv) If an individual selected at random is an engineer, the immediate implication is that the individual's income falls into the [20–55] bracket. From the table, the total percentage of the population in this bracket is 45, composed of 1% in the upper income bracket (those in the bracket from [50–55]), and the remaining 44% in the middle income bracket. The proportion of engineers in these two groups is 0.01 (since engineers make up 1% of the entire population). Thus, the probability that a person selected at random is in the middle class, conditioned upon the fact that this individual is an engineer, is:

P(M|E) = \frac{0.44 × 0.01}{0.45 × 0.01} = \frac{44}{45} = 0.978

To compute the converse probability, P(E|M), one may invoke Bayes' Theorem, i.e.,

P(E|M) = \frac{P(M|E)P(E)}{P(M)}

and since P(E) = 0.01, and P(M) = 0.64, we immediately obtain the required probability as:

P(E|M) = \frac{0.978 × 0.01}{0.64} = 0.0153

Observe that these two probabilities, P(M|E) and P(E|M), are drastically different, but the computed values make sense. First, the exceptionally high value determined for P(M|E) makes sense because the engineers in the population, by virtue of their salaries, are virtually all in the middle class, the only exceptions being a small fraction in the upper class. As a result, if it is given that an individual is an engineer, then it is nearly certain that the individual in question will be in the middle class. The value of P(M|E) reflects this perfectly.

On the other hand, P(E|M) is extremely low because the conditioning “set” is the income bracket: in this case, the defining characteristic is the fact that there are many more individuals in the middle-class income bracket who are not engineers (recall that engineers make up only 1% of the total population). Thus, if it is given that an individual is in the middle class, the chances that such an individual will be an engineer are quite small. However, because the middle class


is “over-represented” within the group of engineers, it is not surprising that 0.0153, the value determined for P(E|M), even though comparatively small, is still some 53% higher than the value of 0.01 obtained for the unconditional P(E) in the entire population.

4.29 (i) From the problem statement, observe that x_w is to be determined such that no more than 15% of the chips have lifetimes lower than x_w; i.e., the upper limit of x_w (a number to be determined as a whole integer) is obtained from

P (X ≤ xw) = 0.15

From the given pdf, if we let η = 1/β, we then have:

0.15 = \int_0^{x_w} η e^{-ηx}\,dx = 1 − e^{-η x_w}

which is easily solved for x_w, given η = 1/6.25 = 0.16, to obtain:

xw = 1.016 (4.11)

as the upper limit. Thus, in whole integers, the warranty should be set at x_w = 1 year.
(ii) It is possible to use the survival function, S(x), directly for this part of the problem, since what is required is P(X > 3) = 0.85, or S(x) = 0.85 (for x = 3). Either from a direct integration of the given pdf, or from recalling the exact form of the survival function for the random variable whose pdf is given in Eq (4.163) (see Example 4.8 in the text), we obtain:

S(x) = e−x/β (4.12)

so that for x = 3, and S(x) = 0.85, we solve the equation

0.85 = e^{-3/β_2^*}    (4.13)

for β_2^* to obtain:

β_2^* = 1/0.054    (4.14)

The implication is that the target mean life-span should be 1/0.054 = 18.52 years for the next-generation chip. From an initial mean life-span of 6.25 years, the implied “fold increase” in mean life-span is 2.96, or about 3-fold.
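Both parts can be checked directly; note that carrying full precision gives β* = −3/ln 0.85 ≈ 18.46 years, slightly below the 18.52 quoted above, which reflects the intermediate rounding of 0.054. A sketch, not part of the original solution:

```python
import math

# Part (i): warranty limit solving 1 - exp(-eta*x_w) = 0.15, with eta = 0.16.
eta = 1 / 6.25
xw = -math.log(0.85) / eta
assert abs(xw - 1.016) < 1e-3

# Part (ii): target mean life solving exp(-3/beta) = 0.85.
beta_star = -3 / math.log(0.85)
assert abs(beta_star - 18.46) < 0.01
assert abs(beta_star / 6.25 - 2.95) < 0.01   # about a 3-fold increase
```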

4.30 (i) The required probability is P(X ≥ 4). For the younger patient, with E(X) = 2.5 (and Var(X) = 1.25), Markov's inequality states that:

P(X ≥ 4) ≤ \frac{2.5}{4} = 0.625

For the older patient, with E(X) = 1 (and Var(X) = 0.8), Markov's inequality states that:

P(X ≥ 4) ≤ \frac{1}{4} = 0.25


so that with n = 5, the upper bound on the probability of obtaining a set of quadruplets or quintuplets is significantly higher for the younger patient (with p = 0.5) than for the older patient (with p = 0.2).

To determine Chebyshev's inequality for each patient, we know that, for this application:

µ + kσ = 4 ⇒ k = \frac{4 − µ}{σ}

so that for the younger patient, k is given by:

k = \frac{1.5}{\sqrt{1.25}}

and therefore,

\frac{1}{k^2} = 0.556

Thus, in this case, Chebyshev’s inequality states that:

P (X ≥ 4) ≤ 0.556

which is a bit tighter than the bound provided by Markov's inequality. Similarly, for the older patient,

k = \frac{3}{\sqrt{0.8}}

so that

\frac{1}{k^2} = 0.089

Therefore, Chebyshev’s inequality states in this case that:

P (X ≥ 4) ≤ 0.089

which is also much tighter than the bound provided by Markov's inequality.
(ii) From the given pdf, we obtain, first for the younger patient:

P (X ≥ 4) = f(4) + f(5) = 0.1875

This shows that while both inequalities are “validated” to be true, in the sense that the actual probability lies within the prescribed bounds, the actual value of 0.1875 is quite far from the upper bounds of 0.625 and 0.556.

For the older patient, the actual probability, P(X ≥ 4), is obtained as:

P (X ≥ 4) = f(4) + f(5) = 0.0067

Again, this shows that both inequalities are also “validated” as true for this patient; however, the actual value of 0.0067 is also quite far from the upper bounds of 0.25 and 0.089.

In both cases Chebyshev’s inequality is sharper than Markov’s.
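The bounds and the exact binomial tails can be compared side by side; this sketch (not part of the original solution) takes the pdf to be Binomial(5, p), as given in the problem.

```python
from math import comb, sqrt

# Exact tail P(X >= k) for X ~ Binomial(n, p).
def tail(n, p, k):
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

for p, mu, var, markov, cheby, exact in [
        (0.5, 2.5, 1.25, 0.625, 0.556, 0.1875),   # younger patient
        (0.2, 1.0, 0.80, 0.250, 0.089, 0.0067)]:  # older patient
    k = (4 - mu) / sqrt(var)
    assert abs(mu / 4 - markov) < 1e-3      # Markov bound E(X)/4
    assert abs(1 / k**2 - cheby) < 1e-3     # Chebyshev bound 1/k^2
    actual = tail(5, p, 4)
    assert abs(actual - exact) < 1e-3
    assert actual <= cheby <= markov        # bounds hold; Chebyshev tighter
```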


4.31 Let A be the event that an individual who is currently y years of age survives to 65 and beyond; and let B be the complementary event that this individual does not survive until age 65 (by dying at some age y < 65).
(i) For a policy based on a fixed premium, α, paid annually beginning at age y, over the entire life span of an individual who survives beyond 65, X_A, the total amount collected in premiums from such an individual, will be:

XA = α(65− y)

The corresponding probability that this amount will be collected over this lifetime is P_S(y), the probability of survival to age 65 at age y, indicated in the supplied table.

On the other hand, in the event that an individual does not survive to age 65, the payout, X_B = π(y), is age-dependent, with associated probability (1 − P_S(y)). Thus, the expected revenue per individual, over the individual's lifetime, is given by:

α(65 − y)P_S(y) − π(y)(1 − P_S(y)) = R_E(65 − y)    (4.15)

where R_E is the expected revenue per year, per participant, over the duration of his/her participation. When Eq (4.15) is solved for π(y), the result is:

π(y) = \frac{(65 − y)(αP_S(y) − R_E)}{1 − P_S(y)}    (4.16)

And now, specifically for a fixed annual premium, α = 90.00, and for a target expected (per capita) revenue, R_E = 30, the computed values for the age-dependent payout are shown in the table below.

y    P_S(y)    $π(y)
0    0.72    8078.57
10    0.74    7742.31
20    0.74    6334.62
30    0.75    5250.00
35    0.76    4800.00
40    0.77    4271.74
45    0.79    3914.29
50    0.81    3386.84
55    0.85    3100.00
60    0.90    2550.00

(ii) For a policy based instead on a fixed payout, π, the corresponding age-dependent annual premium, α(y), is obtained by solving Eq (4.15) for α to yield:

α(y) = \frac{R_E}{P_S(y)} + \frac{π}{(65 − y)} \left( \frac{1}{P_S(y)} − 1 \right)    (4.17)

Thus, for the indicated specific value π = 8000, with R_E = 30 as before, the computed values for the age-dependent annual premiums are shown in the table below.


y    P_S(y)    $α(y)
0    0.72    89.53
10    0.74    91.65
20    0.74    103.00
30    0.75    116.19
35    0.76    123.68
40    0.77    134.55
45    0.79    144.30
50    0.81    162.14
55    0.85    176.47
60    0.90    211.11
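Eqs (4.16) and (4.17) are easy to spot-check against the tables; the helper names in this sketch (`payout`, `premium`) are for illustration only and are not part of the original solution.

```python
# Spot check of Eq (4.16) (payout, fixed premium) and Eq (4.17) (premium,
# fixed payout) for the y = 30 row, with PS(30) = 0.75.
def payout(y, ps, alpha=90.0, re=30.0):
    return (65 - y) * (alpha * ps - re) / (1 - ps)

def premium(y, ps, pi=8000.0, re=30.0):
    return re / ps + (pi / (65 - y)) * (1 / ps - 1)

assert abs(payout(30, 0.75) - 5250.00) < 0.01
assert abs(premium(30, 0.75) - 116.19) < 0.01

# Part (iii): raising the per-capita revenue target to RE = 45.
assert abs(payout(30, 0.75, re=45.0) - 3150.00) < 0.01
assert abs(premium(30, 0.75, re=45.0) - 136.19) < 0.01
```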

Note that with the fixed, ninety-dollar annual premium policy of part (i), and its resultant declining payout, only individuals for which y = 0 will receive a payout higher than $8000.00. With a fixed payout of $8000.00 for all participants, we observe from the table above that, in this case, individuals for which y = 0 will pay an annual premium that is lower than $90.00; the premiums to be paid by all others are higher than $90.00 and increase with age.
(iii) If the target expected revenue is to increase by 50% (from $30 per year, per participant, to $45), using Eqs (4.16) and (4.17) given above, respectively, for the age-dependent payout (fixed premium) and the age-dependent premium (fixed payout), we obtain the results shown in the following table for both the age-dependent payout (fixed premium, α = 90) and the age-dependent premium (fixed payout, π = 8000).

y    P_S(y)    $π(y) (α = 90)    $α(y) (π = 8000)
0    0.72    4596.43    110.36
10    0.74    4569.23    111.92
20    0.74    3738.46    123.27
30    0.75    3150.00    136.19
35    0.76    2925.00    143.42
40    0.77    2641.30    154.03
45    0.79    2485.71    163.29
50    0.81    2202.63    180.66
55    0.85    2100.00    194.12
60    0.90    1800.00    227.78

As expected, when compared with the results in (i) and (ii), we observe that the payouts are now uniformly lower (for the same fixed premium, α = 90), and the annual premiums are uniformly higher (for the same fixed payout, π = 8000).
(iv) We return to the problem formulation in (i) and (ii), this time with each probability of survival increased by 0.05 (retaining the same R_E). Once again, using Eqs (4.16) and (4.17), this time with the new values P_S^+(y) for the survival probabilities, the results are shown in the table below.


y    P_S^+(y)    $π(y) (α = 90)    $α(y) (π = 8000)
0    0.77    11,106.50    75.72
10    0.79    10,764.30    76.64
20    0.79    8807.10    85.23
30    0.80    7350.00    94.64
35    0.81    6773.70    99.59
40    0.82    6083.33    106.83
45    0.84    5700.00    111.91
50    0.86    5078.60    121.71
55    0.90    5100.00    122.22
60    0.95    5550.00    115.79

These results warrant a closer look.

Figure 4.4: Age-dependent payout for fixed annual premium of $90: standard case, dark circles, solid line; increased revenue case, squares, long dashed line; increased survival probabilities, diamonds, short dashed line.

Because the probabilities of survival have increased, cumulatively more money will be paid into the pool by the participants in the long run, since each one will, on average, live longer. It is therefore consistent that the payouts should be higher (for the same fixed premium, α = 90, and the same expected per capita revenue). Similarly, for the same fixed payout, π = 8000, it is consistent that the premiums should be lower. However, something interesting happens at age 55: the payouts that had been decreasing monotonically (for a fixed annual premium) now begin to increase with age; similarly, the annual premiums that had been increasing monotonically (for a fixed payout) now begin to decrease. This is seen clearly in Figs 4.4 and 4.5, which show all the cases investigated thus far in this problem: the standard case results of parts (i) and (ii) (circles, solid


Figure 4.5: Age-dependent annual premiums for fixed payout of $8000: standard case, dark circles, solid line; increased revenue case, squares, long dashed line; increased survival probabilities, diamonds, short dashed line.

line); the increased per capita revenue case of part (iii) (squares, long dashed line); and the increased survival probabilities case (diamonds, short dashed line).

The results obtained when the probabilities of survival have increased make no sense financially: older participants enrolling at age 55 (and later) should not pay lower premiums, or receive higher payouts upon death, than those enrolling at 50. The reason for this anomaly is that with longer life expectancies (indicated by the increased probabilities of survival beyond 65), the computational horizon should also be increased commensurately. The entire problem should be reformulated for a survival threshold higher than 65 (beyond which there is no payout).


Chapter 5

Exercises

Sections 5.1 and 5.2
5.1 The sample space, Ω, given in Example 5.1 in the text, is:

Ω = {HH, HT, TH, TT}

consisting of all 4 possible outcomes; or, when these outcomes are represented, respectively, as ω_i; i = 1, 2, 3, 4, it may be represented as:

Ω = {ω_1, ω_2, ω_3, ω_4}

Now, upon defining the two-dimensional random variable X = (X_1, X_2), where X_1 is the total number of heads, and X_2, the total number of tails, we obtain the following mappings as a result:

X(ω1) = (2, 0); X(ω2) = (1, 1); X(ω3) = (1, 1); X(ω4) = (0, 2)

The corresponding random variable space, V , is therefore obtained as:

V = {(2, 0), (1, 1), (0, 2)}

By assuming equiprobable outcomes, we obtain the following probabilities:

P_X(0, 2) = P(ω_4) = 1/4
P_X(1, 1) = P(ω_2) + P(ω_3) = 1/2
P_X(2, 0) = P(ω_1) = 1/4

The full pdf is now given, for x1 = 0, 1, 2, and x2 = 0, 1, 2, as follows:

f(0, 0) = 0; f(1, 0) = 0; f(2, 0) = 1/4
f(0, 1) = 0; f(1, 1) = 1/2; f(2, 1) = 0
f(0, 2) = 1/4; f(1, 2) = 0; f(2, 2) = 0


Note that for this problem, X1 and X2 must satisfy the constraint

X1 + X2 = 2

so that events that do not meet this constraint are impossible; the probability of occurrence of such events is therefore zero. The same complete joint pdf, f(x_1, x_2), may therefore be represented in tabular form as follows:

X1 →    0    1    2
X2 ↓
0    0    0    1/4
1    0    1/2    0
2    1/4    0    0
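The joint pdf above follows from a short enumeration of the sample space; a sketch, not part of the original solution:

```python
from fractions import Fraction
from itertools import product

# Two coin tosses; X1 = number of heads, X2 = number of tails; each of the
# 4 equiprobable outcomes contributes 1/4 to its (x1, x2) cell.
pdf = {}
for w in product("HT", repeat=2):
    key = (w.count("H"), w.count("T"))
    pdf[key] = pdf.get(key, Fraction(0)) + Fraction(1, 4)

assert pdf == {(2, 0): Fraction(1, 4), (1, 1): Fraction(1, 2),
               (0, 2): Fraction(1, 4)}
```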

5.2 (i) From the given pdf, we are able to determine the required probabilities as follows:
(a) P(X1 ≥ X2) = f(1, 1) + f(2, 1) + f(2, 2) = 3/4
(b) P(X1 + X2 = 4) = f(1, 3) + f(2, 2) = 3/16
(c) P(|X1 − X2| = 1) = f(1, 2) + f(2, 1) + f(2, 3) = 9/16
(d) P(X1 + X2 is even) = f(1, 1) + f(1, 3) + f(2, 2) = 7/16

(ii) The joint cdf, F (x1, x2), by definition, is:

F(x_1, x_2) = \sum_{\xi_1=1}^{x_1} \sum_{\xi_2=1}^{x_2} f(\xi_1, \xi_2)

In this specific case, upon introducing the values for the joint pdf, the result is as follows:

F(1, 1) = f(1, 1) = 1/4
F(1, 2) = f(1, 1) + f(1, 2) = 3/8
F(1, 3) = f(1, 1) + f(1, 2) + f(1, 3) = 7/16
F(2, 1) = f(1, 1) + f(2, 1) = 5/8
F(2, 2) = f(1, 1) + f(1, 2) + f(2, 1) + f(2, 2) = 7/8
F(2, 3) = f(1, 1) + f(1, 2) + f(1, 3) + f(2, 1) + f(2, 2) + f(2, 3) = 16/16 = 1

A plot of this discrete cdf is shown in Fig 5.1.

5.3 (i) The required sample space is:

Ω = {(W, W), (W, L), (W, D), (L, W), (L, L), (L, D), (D, W), (D, L), (D, D)}

a space with a total of nine elements, ω_1, ω_2, . . . , ω_9, ordered from left to right and from top to bottom in the array. The first entry in each ordered pair, the



Figure 5.1: Cumulative distribution function, F (x1, x2), for Problem 5.2.

outcome of the first game, is distinct from the second, the outcome of the second game. Thus, for example, ω_2, representing the outcome that the player wins the first game but loses the second, is distinguishable from ω_4, where the reverse takes place, with the player losing the first game and winning the second.
(ii) Defining the two-dimensional random variable X = (X_1, X_2), where X_1 is the total number of wins, and X_2 is the total number of draws, produces the following mapping:

X(ω_1) = (2, 0); X(ω_2) = (1, 0); X(ω_3) = (1, 1);
X(ω_4) = (1, 0); X(ω_5) = (0, 0); X(ω_6) = (0, 1);
X(ω_7) = (1, 1); X(ω_8) = (0, 1); X(ω_9) = (0, 2)

The corresponding random variable space is therefore:

V = {(2, 0), (1, 0), (1, 1), (0, 0), (0, 1), (0, 2)}

a set consisting of 6 elements.

By assuming equiprobable outcomes, we obtain the following probabilities:

P_X(2, 0) = P(ω_1) = 1/9
P_X(1, 0) = P(ω_2) + P(ω_4) = 2/9
P_X(1, 1) = P(ω_3) + P(ω_7) = 2/9
P_X(0, 0) = P(ω_5) = 1/9
P_X(0, 1) = P(ω_6) + P(ω_8) = 2/9
P_X(0, 2) = P(ω_9) = 1/9

The full joint pdf, f(x1, x2), for x1 = 0, 1, 2, and x2 = 0, 1, 2, is:


f(0, 0) = 1/9; f(1, 0) = 2/9; f(2, 0) = 1/9
f(0, 1) = 2/9; f(1, 1) = 2/9; f(2, 1) = 0
f(0, 2) = 1/9; f(1, 2) = 0; f(2, 2) = 0

where, because of the constraint inherent to this problem, i.e.,

X1 + X2 ≤ 2

events that do not satisfy this constraint, being impossible, are assigned the commensurate probability of zero. The joint pdf, f(x_1, x_2), may therefore be represented in tabular form as follows:

X1 →    0    1    2
X2 ↓
0    1/9    2/9    1/9
1    2/9    2/9    0
2    1/9    0    0

(iii) If 3 points are awarded for a win, and 1 point for a draw, let Y be the total number of points awarded to a player. It is true, then, that:

Y = 3X1 + X2

which leads to the following mapping:

Y(ω_1) = 6; Y(ω_2) = 3; Y(ω_3) = 4;
Y(ω_4) = 3; Y(ω_5) = 0; Y(ω_6) = 1;
Y(ω_7) = 4; Y(ω_8) = 1; Y(ω_9) = 2

with a corresponding random variable space:

V_Y = {0, 1, 2, 3, 4, 6}

The resulting pdf, f_Y(y), is obtained immediately as:

f_Y(0) = 1/9; f_Y(1) = 2/9; f_Y(2) = 1/9;
f_Y(3) = 2/9; f_Y(4) = 2/9; f_Y(5) = 0; f_Y(6) = 1/9

From here, the required probability, P (Y ≥ 4), is obtained as:

P(Y ≥ 4) = f_Y(4) + f_Y(5) + f_Y(6) = 3/9, or 1/3

Thus, for a player for whom all possible two-game combinations in Ω are equally likely, the probability of qualifying for the tournament is 1/3, which is low.

5.4 (i) For Suzie the superior player, with p_W = 0.75, p_D = 0.2, and p_L = 0.05, we obtain the following joint probability distribution.


PX(2, 0) = P(ω1) = (0.75)^2 = 0.5625
PX(1, 0) = P(ω2) + P(ω4) = 2(0.75 × 0.05) = 0.075
PX(1, 1) = P(ω3) + P(ω7) = 2(0.75 × 0.2) = 0.3
PX(0, 0) = P(ω5) = (0.05)^2 = 0.0025
PX(0, 1) = P(ω6) + P(ω8) = 2(0.2 × 0.05) = 0.02
PX(0, 2) = P(ω9) = (0.2)^2 = 0.04

The complete joint pdf, fS(x1, x2), for x1 = 0, 1, 2, and x2 = 0, 1, 2, is:

fS(0, 0) = 0.0025; fS(1, 0) = 0.075; fS(2, 0) = 0.5625
fS(0, 1) = 0.02; fS(1, 1) = 0.30; fS(2, 1) = 0
fS(0, 2) = 0.04; fS(1, 2) = 0; fS(2, 2) = 0

which may be represented in tabular form as:

fS(x1, x2)
X2 ↓  X1 →     0        1        2
0            0.0025   0.075    0.5625
1            0.0200   0.300      0
2            0.0400     0        0

(ii) Similarly, for Meredith the mediocre player, with pW = 0.5, pD = 0.3, and pL = 0.2, the joint probability distribution is as follows.

PX(2, 0) = P(ω1) = (0.5)^2 = 0.25
PX(1, 0) = P(ω2) + P(ω4) = 2(0.5 × 0.2) = 0.20
PX(1, 1) = P(ω3) + P(ω7) = 2(0.5 × 0.3) = 0.30
PX(0, 0) = P(ω5) = (0.2)^2 = 0.04
PX(0, 1) = P(ω6) + P(ω8) = 2(0.2 × 0.3) = 0.12
PX(0, 2) = P(ω9) = (0.3)^2 = 0.09

The complete joint pdf, fM(x1, x2), is:

fM(0, 0) = 0.04; fM(1, 0) = 0.20; fM(2, 0) = 0.25
fM(0, 1) = 0.12; fM(1, 1) = 0.30; fM(2, 1) = 0
fM(0, 2) = 0.09; fM(1, 2) = 0; fM(2, 2) = 0

In tabular form, fM(x1, x2) is:

fM(x1, x2)
X2 ↓  X1 →    0      1      2
0            0.04   0.20   0.25
1            0.12   0.30    0
2            0.09    0      0


(iii) Finally, for Paula the poor player, with pW = 0.2, pD = 0.3, and pL = 0.5, the following is the joint probability distribution.

PX(2, 0) = P(ω1) = (0.2)^2 = 0.04
PX(1, 0) = P(ω2) + P(ω4) = 2(0.2 × 0.5) = 0.20
PX(1, 1) = P(ω3) + P(ω7) = 2(0.2 × 0.3) = 0.12
PX(0, 0) = P(ω5) = (0.5)^2 = 0.25
PX(0, 1) = P(ω6) + P(ω8) = 2(0.5 × 0.3) = 0.30
PX(0, 2) = P(ω9) = (0.3)^2 = 0.09

Once more, the complete joint pdf, fP(x1, x2), is:

fP(0, 0) = 0.25; fP(1, 0) = 0.20; fP(2, 0) = 0.04
fP(0, 1) = 0.30; fP(1, 1) = 0.12; fP(2, 1) = 0
fP(0, 2) = 0.09; fP(1, 2) = 0; fP(2, 2) = 0

or, in tabular form:

fP(x1, x2)
X2 ↓  X1 →    0      1      2
0            0.25   0.20   0.04
1            0.30   0.12    0
2            0.09    0      0

Now, by defining the random variable, Y = 3X1 + X2, representing the total number of points awarded to each player, then in all cases, the set Q defined as:

Q = {y : y ≥ 4}

represents the event that a player qualifies for the tournament (having received at least 4 points). In this case, in terms of the original sample space,

Q = {ω1, ω3, ω7}

so that

PY(Q) = P(ω1) + P(ω3) + P(ω7) = PX(2, 0) + PX(1, 1)

Therefore, for Suzie,

PY(Y ≥ 4) = 0.5625 + 0.30 = 0.8625

for Meredith,

PY(Y ≥ 4) = 0.25 + 0.30 = 0.55

and, for Paula,

PY(Y ≥ 4) = 0.04 + 0.12 = 0.16


Thus, the probability that the superior player qualifies for the tournament is a reasonably high 0.8625; the probability that the mediocre player qualifies is a moderate 0.55; and for the poor player, the probability of qualifying is a very low 0.16.
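These qualification probabilities can be cross-checked by enumerating the nine independent two-game records directly from (pW, pD, pL); a minimal sketch (the helper name is hypothetical):

```python
from itertools import product

def qualify_prob(pW, pD, pL):
    """P(3*X1 + X2 >= 4) for two independent games with the given per-game probabilities."""
    outcomes = [("W", pW), ("D", pD), ("L", pL)]
    total = 0.0
    for (r1, p1), (r2, p2) in product(outcomes, repeat=2):
        x1 = (r1 == "W") + (r2 == "W")   # number of wins
        x2 = (r1 == "D") + (r2 == "D")   # number of draws
        if 3 * x1 + x2 >= 4:
            total += p1 * p2
    return total
```

For example, `qualify_prob(0.75, 0.2, 0.05)` reproduces Suzie's 0.8625.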

5.5 (i) The condition to be satisfied is:

∫_0^1 ∫_0^2 c x1 x2 (1 − x2) dx1 dx2 = 1

We may now carry out the indicated integrations, first with respect to x1, and then x2, i.e.,

∫_0^1 ∫_0^2 c x1 x2 (1 − x2) dx1 dx2 = c ∫_0^1 x2 (1 − x2) (x1²/2 |_0^2) dx2
                                     = 2c ∫_0^1 (x2 − x2²) dx2
                                     = 2c (x2²/2 − x2³/3) |_0^1
                                     = c/3 = 1

which is solved for c to yield the required result:

c = 3

The complete joint pdf is therefore given by:

f(x1, x2) = { 3 x1 x2 (1 − x2);  0 < x1 < 2; 0 < x2 < 1
            { 0;                 elsewhere                  (5.1)

(ii) The required probabilities are obtained from Eq (5.1) above as follows:

P(1 < X1 < 2; 0.5 < X2 < 1) = 3 ∫_0.5^1 ∫_1^2 x1 x2 (1 − x2) dx1 dx2
                            = 3 ∫_0.5^1 x2 (1 − x2) (∫_1^2 x1 dx1) dx2
                            = (9/2) ∫_0.5^1 (x2 − x2²) dx2 = 3/8

Similarly,

P(X1 > 1; X2 < 0.5) = 3 ∫_0^0.5 ∫_1^2 x1 x2 (1 − x2) dx1 dx2
                    = 3 ∫_0^0.5 x2 (1 − x2) (∫_1^2 x1 dx1) dx2
                    = (9/2) ∫_0^0.5 (x2 − x2²) dx2 = 3/8
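Both the normalization constant and the two probabilities can be verified numerically; a midpoint-rule sketch (the grid size is an arbitrary choice):

```python
def dbl_midpoint(f, a, b, c, d, n=400):
    """Midpoint-rule approximation of the integral of f over [a, b] x [c, d]."""
    hx, hy = (b - a) / n, (d - c) / n
    return sum(f(a + (i + 0.5) * hx, c + (j + 0.5) * hy)
               for i in range(n) for j in range(n)) * hx * hy

pdf = lambda x1, x2: 3 * x1 * x2 * (1 - x2)

total = dbl_midpoint(pdf, 0, 2, 0, 1)      # normalization: should be ~1
p_a = dbl_midpoint(pdf, 1, 2, 0.5, 1)      # P(1 < X1 < 2, 0.5 < X2 < 1) ~ 3/8
p_b = dbl_midpoint(pdf, 1, 2, 0, 0.5)      # P(X1 > 1, X2 < 0.5) ~ 3/8
```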



Figure 5.2: Cumulative distribution function, F (x1, x2) for Problem 5.5.

(iii) The cumulative distribution function is obtained from:

F(x1, x2) = ∫_0^{x2} ∫_0^{x1} f(ξ1, ξ2) dξ1 dξ2

which, in this specific case, is:

F(x1, x2) = 3 ∫_0^{x2} ∫_0^{x1} ξ1 ξ2 (1 − ξ2) dξ1 dξ2
          = 3 ∫_0^{x2} ξ2 (1 − ξ2) (∫_0^{x1} ξ1 dξ1) dξ2
          = 3 ∫_0^{x2} ξ2 (1 − ξ2) (x1²/2) dξ2
          = (3x1²/2) (x2²/2 − x2³/3)

A plot of this cdf is shown in Fig 5.2.

5.6 (i) From the joint pdf given in Exercise 5.5, the marginal pdfs are obtained as follows:

f1(x1) = 3 ∫_0^1 x1 x2 (1 − x2) dx2 = 3x1 ∫_0^1 x2 (1 − x2) dx2 = x1/2

Thus,

f1(x1) = { x1/2;  0 < x1 < 2
         { 0;     elsewhere


Similarly,

f2(x2) = 3 x2 (1 − x2) ∫_0^2 x1 dx1 = 6 x2 (1 − x2)

Thus,

f2(x2) = { 6 x2 (1 − x2);  0 < x2 < 1
         { 0;              elsewhere

From here, it is straightforward to see that:

f(x1, x2) = f1(x1)f2(x2)

so that X1 and X2 are independent.

From the marginal pdfs obtained above, the required marginal means, µX1 and µX2, are determined as follows:

µX1 = ∫_0^2 x1 f1(x1) dx1 = (1/2) ∫_0^2 x1² dx1 = 4/3

and,

µX2 = ∫_0^1 x2 f2(x2) dx2 = 6 ∫_0^1 x2² (1 − x2) dx2 = 1/2

(ii) The conditional pdfs are obtained as follows.

f(x1|x2) = f(x1, x2)/f2(x2) = [3 x1 x2 (1 − x2)] / [6 x2 (1 − x2)] = x1/2

also:

f(x2|x1) = f(x1, x2)/f1(x1) = [3 x1 x2 (1 − x2)] / (x1/2) = 6 x2 (1 − x2)
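The factorization f(x1, x2) = f1(x1) f2(x2) can be spot-checked at a few interior points; a minimal sketch:

```python
# Joint pdf and the two marginals derived above.
f  = lambda x1, x2: 3 * x1 * x2 * (1 - x2)   # on 0 < x1 < 2, 0 < x2 < 1
f1 = lambda x1: x1 / 2
f2 = lambda x2: 6 * x2 * (1 - x2)

points = [(0.3, 0.2), (1.0, 0.5), (1.7, 0.9)]
factorizes = all(abs(f(a, b) - f1(a) * f2(b)) < 1e-12 for a, b in points)
```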

5.7 (i) From the given pdf, the marginal pdfs are obtained by summing out x2 (to obtain f1(x1)) and summing out x1 (to obtain f2(x2)). The results are shown in the following table.

f(x1, x2)
X2 ↓  X1 →    0     1     2   | f2(x2)
0             0     0    1/4  |  1/4
1             0    1/2    0   |  1/2
2            1/4    0     0   |  1/4
f1(x1)       1/4   1/2   1/4  |   1

From these marginal pdfs, the product, f1(x1)f2(x2), is obtained as in the following table:


f1(x1)f2(x2)
X2 ↓  X1 →    0      1      2
0           1/16    1/8   1/16
1            1/8    1/4    1/8
2           1/16    1/8   1/16

which is not the same as the joint pdf, f(x1, x2); hence, X1 and X2 are not independent.

(ii) The conditional pdfs are obtained as follows:

f(x1|x2) = f(x1, x2)/f2(x2)

and, from the indicated pdfs, the result is as shown in the following table:

f(x1|x2)
X2 ↓  X1 →   0   1   2
0            0   0   1
1            0   1   0
2            1   0   0

Similarly,

f(x2|x1) = f(x1, x2)/f1(x1)

and, again, from the indicated pdfs, the result is as shown in the following table:

f(x2|x1)
X2 ↓  X1 →   0   1   2
0            0   0   1
1            0   1   0
2            1   0   0

Observe that:

P(X2 = 2|X1 = 0) = 1 = P(X1 = 2|X2 = 0)
P(X2 = 1|X1 = 1) = 1 = P(X1 = 1|X2 = 1)
P(X2 = 0|X1 = 2) = 1 = P(X1 = 0|X2 = 2)

With all other probabilities being zero, it appears as if the experiments are such that, with absolute certainty (and consistently),

X1 + X2 = 2

(iii) With the random variables X1 and X2 defined respectively as the total number of heads, and the total number of tails obtained when a coin is tossed twice, we note the following:


(a) each random variable space is {0, 1, 2}; and,
(b) there are exactly two tosses of the coin involved in each experiment.

Therefore, the resulting outcomes must satisfy the constraint X1 + X2 = 2 always: at the end of each experiment, even though the actual outcomes are uncertain, the total number of heads obtained plus the total number of tails obtained must add up to 2, hence the constraint. Thus, the foregoing results are consistent with the stated conjecture.

5.8 From the given pdf, the condition that must be satisfied is:

c ∫_0^2 ∫_0^1 e^{−(x1+x2)} dx1 dx2 = 1

The indicated integrals may then be carried out to give the following result:

c ∫_0^2 ∫_0^1 e^{−(x1+x2)} dx1 dx2 = c ∫_0^2 e^{−x2} (−e^{−x1} |_0^1) dx2
                                  = c ∫_0^2 e^{−x2} (1 − e^{−1}) dx2
                                  = c (1 − e^{−1})(1 − e^{−2}) = 1

which, when solved for c, yields the desired result:

c = 1 / [(1 − e^{−1})(1 − e^{−2})]

The marginal pdfs are obtained as follows.

f1(x1) = c ∫_0^2 e^{−(x1+x2)} dx2

which, upon using the result obtained above for c, simplifies to give:

f1(x1) = { e^{−x1} / (1 − e^{−1});  0 < x1 < 1
         { 0;                       elsewhere

Similarly,

f2(x2) = c ∫_0^1 e^{−(x1+x2)} dx1

which simplifies to yield:

f2(x2) = { e^{−x2} / (1 − e^{−2});  0 < x2 < 2
         { 0;                       elsewhere

We may now observe from here that:

f1(x1)f2(x2) = f(x1, x2)


indicating that X1 and X2 are independent.

5.9 When the range of validity is changed as indicated, the constant, c, is determined to satisfy the condition:

c ∫_0^∞ ∫_0^∞ e^{−(x1+x2)} dx1 dx2 = 1

and, upon evaluating the indicated integrals, we obtain

c = 1

The marginal pdfs are obtained in the usual fashion:

f1(x1) = ∫_0^∞ e^{−(x1+x2)} dx2 = e^{−x1}

and

f2(x2) = ∫_0^∞ e^{−(x1+x2)} dx1 = e^{−x2}

From here, it is clear that, as in Problem 5.8,

f1(x1) f2(x2) = f(x1, x2)

indicating that X1 and X2 are independent under these conditions, also. The joint pdf in this case is:

f(x1, x2) = { e^{−(x1+x2)};  0 < x1 < ∞; 0 < x2 < ∞
            { 0;             elsewhere                  (5.2)

Section 5.3

5.10 (i) The random variable, U(X1, X2) = X1 + X2, represents the total number of wins and draws. From the pdf obtained in Exercise 5.3, its expected value is obtained as follows:

E(X1 + X2) = Σ_{x2=0}^{2} Σ_{x1=0}^{2} (x1 + x2) f(x1, x2)
           = (0 + 0) f(0, 0) + (1 + 0) f(1, 0) + · · · + (2 + 2) f(2, 2)
           = 4/9 + 6/9 + 2/9 = 4/3

(ii) The random variable, U(X1, X2) = 3X1 + X2, represents the total number of points awarded to the player, with a minimum of 4 points required for qualification; its expected value is obtained as follows:

E(3X1 + X2) = Σ_{x2=0}^{2} Σ_{x1=0}^{2} (3x1 + x2) f(x1, x2)
            = 0 f(0, 0) + 3 f(1, 0) + · · · + 8 f(2, 2)
            = 12/9 + 10/9 + 2/9 = 8/3


Thus, the expected total number of points awarded is 8/3 ≈ 2.67, which is less than the required minimum of 4; this player is therefore not expected to qualify.

5.11 (i) The required marginal pdfs are obtained from:

f1(x1) = Σ_{x2=0}^{2} f(x1, x2);  f2(x2) = Σ_{x1=0}^{2} f(x1, x2)

The specific marginal pdfs for each of the players are shown in the tables below.

For Suzie:
X2 ↓  X1 →     0        1        2      | f2(x2)
0            0.0025   0.075    0.5625   |  0.64
1            0.0200   0.300      0      |  0.32
2            0.0400     0        0      |  0.04
f1(x1)       0.0625   0.375    0.5625   |   1

For Meredith:
X2 ↓  X1 →    0      1      2    | f2(x2)
0            0.04   0.20   0.25  |  0.49
1            0.12   0.30    0    |  0.42
2            0.09    0      0    |  0.09
f1(x1)       0.25   0.50   0.25  |   1

For Paula:
X2 ↓  X1 →    0      1      2    | f2(x2)
0            0.25   0.20   0.04  |  0.49
1            0.30   0.12    0    |  0.42
2            0.09    0      0    |  0.09
f1(x1)       0.64   0.32   0.04  |   1

The marginal means are obtained from these marginal pdfs as follows:

µX1 = Σ_{x1=0}^{2} x1 f1(x1);  µX2 = Σ_{x2=0}^{2} x2 f2(x2)

For Suzie,

µX1 = (0 × 0.0625 + 1 × 0.375 + 2 × 0.5625) = 1.5
µX2 = (0 × 0.64 + 1 × 0.32 + 2 × 0.04) = 0.4

indicating that, in 2 games, Suzie is expected to win 1.5 games on average, while her expected number of draws is 0.4.


For Meredith,

µX1 = (0 × 0.25 + 1 × 0.50 + 2 × 0.25) = 1.0
µX2 = (0 × 0.49 + 1 × 0.42 + 2 × 0.09) = 0.6

indicating that, on average, in 2 games, Meredith will win 1.0 game, and draw in 0.6. And for Paula,

µX1 = (0 × 0.64 + 1 × 0.32 + 2 × 0.04) = 0.4
µX2 = (0 × 0.49 + 1 × 0.42 + 2 × 0.09) = 0.6

indicating that, in 2 games, Paula is only expected (on average) to win 0.4 games, and draw in 0.6 games.

It is interesting to note that these results could also have been obtained directly from the supplied individual probabilities. Recall that for Suzie, the probability of winning a single game, pW, is 0.75, so that in 2 games, the expected number of wins will be 2 × 0.75 = 1.5; the probability of a draw, pD, is 0.2, and therefore the expected number of draws in 2 games is 2 × 0.2 = 0.4, as obtained above. The same is true for Meredith (pW = 0.5; pD = 0.3) and Paula (pW = 0.2; pD = 0.3).

(ii) The expectation, E[U(X1, X2)] = E(3X1 + X2), is the expected total number of points awarded to each player; and from the original problem definition, a minimum of 4 is required for qualification. This expectation is obtained from the joint pdf, f(x1, x2), as follows:

E(3X1 + X2) = Σ_{x2=0}^{2} Σ_{x1=0}^{2} (3x1 + x2) f(x1, x2)

Upon employing the appropriate pdf for each player, the results are shown below. First, for Suzie,

E(3X1 + X2) = (0 × 0.0025 + 3 × 0.075 + 6 × 0.5625)
            + (1 × 0.02 + 4 × 0.30 + 7 × 0)
            + (2 × 0.04 + 5 × 0 + 8 × 0)
            = 4.9

In similar fashion, we obtain,

E(3X1 + X2) = { 3.6;  for Meredith
              { 1.8;  for Paula

Consequently, only Suzie is expected to qualify. It appears, therefore, that this is a tournament meant only for superior players, with the stringent pre-qualifying conditions designed specifically to weed out all but the truly superior players.
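Because expectation is linear, each player's expected score can also be computed directly from the per-game probabilities, E(3X1 + X2) = 3(2pW) + 2pD; a sketch (the function name is hypothetical):

```python
def expected_points(pW, pD):
    # E(3*X1 + X2) = 3*E(X1) + E(X2) = 3*(2*pW) + 2*pD over two games
    return 6 * pW + 2 * pD

scores = {name: expected_points(pW, pD)
          for name, (pW, pD) in {"Suzie": (0.75, 0.2),
                                 "Meredith": (0.5, 0.3),
                                 "Paula": (0.2, 0.3)}.items()}
```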


5.12 From the marginal pdfs obtained in Exercise 5.7, we obtain

µX1 = Σ x1 f1(x1) = (0 × 1/4) + (1 × 1/2) + (2 × 1/4) = 1
µX2 = Σ x2 f2(x2) = (0 × 1/4) + (1 × 1/2) + (2 × 1/4) = 1

The covariance is obtained from:

σ12 = E(X1X2) − µX1 µX2

and since

E(X1X2) = Σ_{x2} Σ_{x1} x1 x2 f(x1, x2)

we obtain, from the given joint pdf, f(x1, x2), that:

E(X1X2) = 1/2

so that the covariance is:

σ12 = 1/2 − 1 = −1/2

Next, the variances are obtained from:

σ1² = E(X1 − µX1)² = Σ (x1 − 1)² f1(x1) = (1 × 1/4) + (0 × 1/2) + (1 × 1/4) = 1/2

and similarly,

σ2² = E(X2 − µX2)² = Σ (x2 − 1)² f2(x2) = 1/2

Hence, the correlation coefficient, ρ, is obtained as:

ρ = (−1/2) / (√(1/2) √(1/2)) = −1

with the implication that the two random variables in question, X1 and X2, are perfectly negatively correlated.
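The perfect negative correlation can be verified exactly with rational arithmetic; a sketch using the pmf of Exercise 5.7:

```python
from fractions import Fraction as F

# All mass lies on the line x1 + x2 = 2.
pmf = {(2, 0): F(1, 4), (1, 1): F(1, 2), (0, 2): F(1, 4)}

mu1 = sum(x1 * p for (x1, _), p in pmf.items())
mu2 = sum(x2 * p for (_, x2), p in pmf.items())
cov = sum(x1 * x2 * p for (x1, x2), p in pmf.items()) - mu1 * mu2
v1 = sum((x1 - mu1) ** 2 * p for (x1, _), p in pmf.items())
v2 = sum((x2 - mu2) ** 2 * p for (_, x2), p in pmf.items())
rho = cov / (v1 * v2) ** 0.5      # ** 0.5 promotes the result to float
```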

5.13 From the given joint pdfs for each player, we are able to obtain the marginal pdfs from

f1(x1) = Σ_{x2=0}^{2} f(x1, x2);  f2(x2) = Σ_{x1=0}^{2} f(x1, x2)

as follows (also see Exercise 5.11).


For Suzie:
X2 ↓  X1 →     0        1        2      | f2(x2)
0            0.0025   0.075    0.5625   |  0.64
1            0.0200   0.300      0      |  0.32
2            0.0400     0        0      |  0.04
f1(x1)       0.0625   0.375    0.5625   |   1

For Meredith:
X2 ↓  X1 →    0      1      2    | f2(x2)
0            0.04   0.20   0.25  |  0.49
1            0.12   0.30    0    |  0.42
2            0.09    0      0    |  0.09
f1(x1)       0.25   0.50   0.25  |   1

For Paula:
X2 ↓  X1 →    0      1      2    | f2(x2)
0            0.25   0.20   0.04  |  0.49
1            0.30   0.12    0    |  0.42
2            0.09    0      0    |  0.09
f1(x1)       0.64   0.32   0.04  |   1

The marginal means are obtained from these marginal pdfs as follows:

µX1 = Σ_{x1=0}^{2} x1 f1(x1);  µX2 = Σ_{x2=0}^{2} x2 f2(x2)

to yield:

µX1 = { 1.5;  for Suzie
      { 1.0;  for Meredith
      { 0.4;  for Paula

and,

µX2 = { 0.4;  for Suzie
      { 0.6;  for Meredith
      { 0.6;  for Paula

The variances, σ1² and σ2², are obtained for each player from:

σi² = E(Xi − µXi)² = Σ_{xi} (xi − µXi)² fi(xi);  i = 1, 2

Thus, for Suzie, with µX1 = 1.5 and µX2 = 0.4,

σ1² = (1.5² × 0.0625) + (0.5² × 0.375) + (0.5² × 0.5625) = 0.375


and

σ2² = (0.4² × 0.64) + (0.6² × 0.32) + (1.6² × 0.04) = 0.32

Similarly, for Meredith, with µX1 = 1.0 and µX2 = 0.6,

σ1² = (1² × 0.25) + (0 × 0.50) + (1² × 0.25) = 0.50

and

σ2² = (0.6² × 0.49) + (0.4² × 0.42) + (1.4² × 0.09) = 0.42

and for Paula, with µX1 = 0.4 and µX2 = 0.6,

σ1² = (0.4² × 0.64) + (0.6² × 0.32) + (1.6² × 0.04) = 0.32

and

σ2² = (0.6² × 0.49) + (0.4² × 0.42) + (1.4² × 0.09) = 0.42

From here, we are now able to calculate the covariances, σ12, from:

σ12 = E(X1X2) − µX1 µX2

to yield, for the various players:

σ12 = { −0.30;  for Suzie
      { −0.30;  for Meredith
      { −0.12;  for Paula

and the correlation coefficients, ρ, from

ρ = σ12 / (σ1 σ2)

to obtain:

ρ = { −0.866;  for Suzie
    { −0.655;  for Meredith
    { −0.327;  for Paula

The uniformly negative values obtained for the covariances and correlation coefficients indicate, first, that for all 3 players, the total number of wins and the total number of draws are negatively correlated: i.e., a higher number of wins tends to occur together with a lower number of draws, and vice versa. Keep in mind that there is a third possible outcome (L, a "loss"), so that if a player does not win a game, the other option is not limited to a "draw." Hence the two variables, X1 and X2, are not (and cannot be) perfectly correlated, unless the probability of a loss is always zero, which is not the case here.

Second, the negative correlation between wins and draws is strongest for Suzie, the superior player; it is moderately strong for Meredith, and much less so for Paula. These values reflect the influence exerted by the third possible outcome (a loss), via the probability of its occurrence. The strong correlation between wins and draws for Suzie indicates that for her, these two outcomes are the most dominant of the three: i.e., she is far more likely to win or draw


(almost exclusively) than lose. If the probability of losing were exactly zero, the correlation coefficient between X1 and X2 would be exactly −1. The moderately strong correlation coefficient for Meredith shows that while the more likely outcomes are a win or a draw, the possibility of losing is just high enough to diffuse the correlation between wins and draws. For Paula, the possibility of losing is sufficiently high to the point of lowering the correlation between wins and draws substantially.
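All three covariance/correlation pairs can be regenerated from (pW, pD, pL) alone; a sketch (the helper name is hypothetical):

```python
def win_draw_stats(pW, pD, pL):
    """Covariance and correlation of (wins, draws) over two independent games."""
    outcomes = [("W", pW), ("D", pD), ("L", pL)]
    pmf = {}
    for (r1, p1), (r2, p2) in [(a, b) for a in outcomes for b in outcomes]:
        key = ((r1 == "W") + (r2 == "W"), (r1 == "D") + (r2 == "D"))
        pmf[key] = pmf.get(key, 0.0) + p1 * p2
    mu1 = sum(x1 * p for (x1, _), p in pmf.items())
    mu2 = sum(x2 * p for (_, x2), p in pmf.items())
    cov = sum(x1 * x2 * p for (x1, x2), p in pmf.items()) - mu1 * mu2
    v1 = sum((x1 - mu1) ** 2 * p for (x1, _), p in pmf.items())
    v2 = sum((x2 - mu2) ** 2 * p for (_, x2), p in pmf.items())
    return cov, cov / (v1 * v2) ** 0.5
```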

5.14 (i) From the joint pdf, we obtain the required marginal pdfs as follows:

f1(x) = ∫_0^1 (x + y) dy = (xy + y²/2) |_0^1

so that:

f1(x) = x + 1/2

Similarly, we obtain:

f2(y) = ∫_0^1 (x + y) dx = y + 1/2

The conditional pdfs are obtained as follows:

f(x|y) = f(x, y)/f2(y) = 2(x + y)/(2y + 1)

and similarly,

f(y|x) = 2(x + y)/(2x + 1)

We may now observe that

f(x|y) ≠ f1(x);  f(y|x) ≠ f2(y);  and f(x, y) ≠ f1(x) f2(y)

hence, X and Y are not independent.

(ii) The marginal means and marginal variances required for computing the covariances and the correlation coefficients are obtained as follows:

µX = ∫_0^1 x f1(x) dx = ∫_0^1 x (x + 1/2) dx = 7/12

and by symmetry,

µY = ∫_0^1 y f2(y) dy = 7/12

The variances are obtained as follows:

σX² = ∫_0^1 (x − µX)² f1(x) dx = ∫_0^1 (x − 7/12)² (x + 1/2) dx


and after a bit of fairly basic calculus and algebra, this simplifies to yield:

σX² = 11/144

and similarly for Y,

σY² = 11/144

The covariance is obtained as:

σXY = E(XY) − µX µY

and since

E(XY) = ∫_0^1 ∫_0^1 xy (x + y) dx dy = 1/3

we therefore obtain:

σXY = 1/3 − 49/144 = −1/144

from where the correlation coefficient is obtained as:

ρ = (−1/144) / (√(11/144) √(11/144)) = −1/11

indicating slightly negatively correlated random variables.
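The value ρ = −1/11 can be confirmed numerically; a midpoint-rule sketch over the unit square (the grid size is arbitrary):

```python
def unit_square_integral(g, n=300):
    """Midpoint-rule integral of g over [0, 1] x [0, 1]."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

f = lambda x, y: x + y                                   # joint pdf
mu_x = unit_square_integral(lambda x, y: x * f(x, y))
e_xy = unit_square_integral(lambda x, y: x * y * f(x, y))
var_x = unit_square_integral(lambda x, y: x * x * f(x, y)) - mu_x ** 2
rho = (e_xy - mu_x * mu_x) / var_x    # by symmetry, mu_y = mu_x and var_y = var_x
```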

Application Problems

5.15 (i) First, upon representing the assay results as Y, with two possible outcomes, y1 = A+ and y2 = A−, and the true lithium status as X, also with two possible outcomes, x1 = L+ and x2 = L−; and subsequently, upon considering the relative frequencies as representative of the probabilities, the resulting joint pdf, f(x, y), is shown in the table below. The marginal distributions, f1(x) and f2(y), are obtained by summing over y and x, respectively, i.e.,

f1(x) = Σ_y f(x, y);  f2(y) = Σ_x f(x, y)

X, Toxicity Status
Y, Assay Result ↓     x1      x2    | f2(y)
y1                   0.200   0.113  | 0.313
y2                   0.140   0.547  | 0.687
f1(x)                0.340   0.660  | 1.000

The event "test result is correct" consists of two mutually exclusive events:

(a) the test method correctly registers a high lithium concentration (Y = y1) for a patient with confirmed lithium toxicity (X = x1), or,
(b) the test method correctly registers a low lithium concentration (Y = y2) for a patient with no lithium toxicity (X = x2).

Thus, the required probability, P(R), that the test method produces the right result, is:

P(R) = P(X = x1, Y = y1) + P(X = x2, Y = y2) = 0.200 + 0.547 = 0.747

(ii) From the joint pdf and the marginal pdfs given in the table above, we obtain the required conditional pdfs as follows.

f(y2|x2) = f(x2, y2)/f1(x2) = 0.547/0.660 = 0.829

f(y1|x2) = f(x2, y1)/f1(x2) = 0.113/0.660 = 0.171

f(y2|x1) = f(x1, y2)/f1(x1) = 0.140/0.340 = 0.412

In words,

• f(y2|x2) is the probability that the test method correctly indicates low lithium concentrations when used on patients confirmed with no lithium toxicity;

• f(y1|x2) is the probability that the test method incorrectly indicates high lithium concentrations when used on patients confirmed with no lithium toxicity; and,

• f(y2|x1) is the probability that the test method incorrectly indicates low lithium concentrations when used on patients with confirmed lithium toxicity.
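These table lookups can be scripted directly; a minimal sketch of the joint pmf as a dictionary (the key labels are illustrative):

```python
# Joint pmf: (true status, assay result) -> probability.
joint = {("L+", "A+"): 0.200, ("L-", "A+"): 0.113,
         ("L+", "A-"): 0.140, ("L-", "A-"): 0.547}

p_correct = joint[("L+", "A+")] + joint[("L-", "A-")]      # P(R)
f1 = {s: sum(p for (ss, _), p in joint.items() if ss == s) for s in ("L+", "L-")}
p_y2_given_x2 = joint[("L-", "A-")] / f1["L-"]             # f(y2 | x2)
p_y1_given_x2 = joint[("L-", "A+")] / f1["L-"]             # f(y1 | x2)
p_y2_given_x1 = joint[("L+", "A-")] / f1["L+"]             # f(y2 | x1)
```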

5.16 (i) The required probability is P(X2 > X1); it is determined from the given joint pdf as follows.

P(X2 > X1) = ∫_0^∞ ∫_0^{x2} (1/50) e^{−(0.2x1 + 0.1x2)} dx1 dx2
           = (1/50) ∫_0^∞ e^{−0.1x2} (∫_0^{x2} e^{−0.2x1} dx1) dx2
           = (1/10) ∫_0^∞ e^{−0.1x2} (1 − e^{−0.2x2}) dx2
           = 2/3


(ii) The converse probability, P(X1 > X2), is obtained from:

P(X1 > X2) = ∫_0^∞ ∫_{x2}^∞ (1/50) e^{−(0.2x1 + 0.1x2)} dx1 dx2
           = (1/50) ∫_0^∞ e^{−0.1x2} (∫_{x2}^∞ e^{−0.2x1} dx1) dx2
           = (1/10) ∫_0^∞ e^{−0.1x2} e^{−0.2x2} dx2
           = 1/3

Of course, this could also have been obtained from the fact that:

P (X1 > X2) = 1− P (X2 > X1)
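For independent exponential lifetimes this result has a well-known closed form, P(X1 < X2) = a/(a + b) for rates a and b, which a quick Monte Carlo run can confirm; a sketch (sample size and seed are arbitrary choices):

```python
import random

a, b = 0.2, 0.1                      # failure rates of X1 and X2
p_closed = a / (a + b)               # P(X1 < X2) = 2/3

random.seed(0)
n = 200_000
hits = sum(random.expovariate(a) < random.expovariate(b) for _ in range(n))
p_mc = hits / n                      # Monte Carlo estimate of the same probability
```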

(iii) The expected (mean) lifetimes for the various components are obtained as follows.

µX1 = ∫_0^∞ x1 f1(x1) dx1;  µX2 = ∫_0^∞ x2 f2(x2) dx2

with fi(xi) as the respective marginal pdfs, i = 1, 2.

From Example 5.3 in the text, where these marginal pdfs were derived explicitly, we obtain,

E(X1) = µX1 = (1/5) ∫_0^∞ x1 e^{−0.2x1} dx1 = 5

E(X2) = µX2 = (1/10) ∫_0^∞ x2 e^{−0.1x2} dx2 = 10

Thus, the expected lifetime of the control valve, µX2, is 10 years, while that of the controller hardware electronics, µX1, is 5 years. The implications are therefore that one should expect to replace the control valve every 10 years, and the controller hardware electronics every 5 years.

(iv) From the result in (iii) above, we observe that over the next 20 years, one should expect to replace the control valve twice (at a cost of $10,000 each time), and to replace the control hardware electronics 4 times (at a cost of $20,000 each time). The total cost will then be:

C = (2× 10, 000) + (4× 20, 000) = 100, 000

Thus, $100,000 should be budgeted over the next 20 years for the purpose of keeping the control system functioning by replacing a malfunctioning component every time it fails.

5.17 (i) From the supplied information, and under the stipulated conditions, the joint pdf, f(x, y), is shown in the table below. The marginal pdfs, f1(x) and f2(y), are obtained by summing over y, and summing over x, respectively.


X ↓  Y →    1      2      3      4    | f1(x)
0          0.06   0.20   0.13   0.10  | 0.49
1          0.17   0.14   0.12   0.08  | 0.51
f2(y)      0.23   0.34   0.25   0.18  | 1.00

(ii) The required probability is the conditional probability, f(y = 3 or 4 | x = 0); it is obtained as follows:

f(y = 3 or 4 | x = 0) = f(y = 3 | x = 0) + f(y = 4 | x = 0)
                      = f(x = 0, y = 3)/f1(x = 0) + f(x = 0, y = 4)/f1(x = 0)
                      = 0.13/0.49 + 0.10/0.49 = 0.469

(iii) The required expected value is obtained as:

E(C) = 1500 + E(500X − 100Y) = 1500 + Σ_{x=0}^{1} Σ_{y=1}^{4} (500x − 100y) f(x, y)

which, upon introducing the joint pdf and appropriate values for x and y, simplifies to yield (since E(500X − 100Y) = 500 E(X) − 100 E(Y) = 500(0.51) − 100(2.38) = 255 − 238 = 17):

E(C) = 1500 + 17 = 1517

Thus, the company should expect to spend, on average, $1517 per worker every year.

5.18 (i) It is not necessary to consider X3 in this joint pdf because of the constraint:

X1 + X2 + X3 = 5

so that once X1 and X2 are given, X3 follows automatically.

(ii) From the given pdf,

f(x1, x2) = [120 / (x1! x2! (5 − x1 − x2)!)] 0.85^{x1} 0.05^{x2} 0.1^{5 − x1 − x2}

for x1 = 0, 1, 2, . . . , 5, and x2 = 0, 1, 2, . . . , 5, we obtain the following table for the joint pdf as well as the marginal pdfs, the latter having been obtained by summing over the appropriate variable. (Note that events for which X1 + X2 > 5 are entirely impossible; accordingly, the probabilities associated with them are zero.)


X2 ↓  X1 →    0        1        2        3        4        5      | f2(x2)
0           0.0000   0.0004   0.0072   0.0614   0.2610   0.4437   | 0.774
1           0.0000   0.0009   0.0108   0.0614   0.1305   0        | 0.204
2           0.0000   0.0006   0.0054   0.0154   0        0        | 0.021
3           0.0000   0.0002   0.0009   0        0        0        | 0.001
4           0.0000   0.0000   0        0        0        0        | 0.000
5           0.0000   0        0        0        0        0        | 0.000
f1(x1)      0.000    0.002    0.024    0.138    0.392    0.444    | 1.000

This joint pdf is plotted in Fig 5.3, where it is seen that most of the "activity" is localized to the region where X1 ≥ 3 and X2 ≤ 2.


Figure 5.3: Joint probability distribution function, f(x1, x2), for Problem 5.18.

It is possible to generate a 6 × 6 table of the product, f1(x1)f2(x2), and compare this term-by-term to the table shown above for the joint pdf, f(x1, x2), in order to determine whether or not the two pdfs are the same. However, it is sufficient to note, for example, that while f1(x1 = 5) = 0.444 and f2(x2 = 1) = 0.204, so that the product f1(x1 = 5)f2(x2 = 1) = 0.091, the joint probability f(x1 = 5, x2 = 1) is exactly equal to zero, because the outcome X1 = 5 jointly with X2 = 1 cannot occur. And if f(x1, x2) ≠ f1(x1)f2(x2) at even a single point, then the two pdfs cannot be equal. Hence, the random variables X1 and X2 are not independent.

(iii) The required expected values are obtained from the marginal pdfs as follows.

E(X1) = Σ_{x1=0}^{5} x1 f1(x1) = (0.002 + 0.048 + 0.414 + 1.568 + 2.220) = 4.252


Similarly,

E(X2) = Σ_{x2=0}^{5} x2 f2(x2) = (0.204 + 0.042 + 0.003) = 0.249

Thus, the expected number of correct results, regardless of the other results, is 4.25; the expected value of false positives (again, regardless of other results) is 0.25.

Note that these results could also have been obtained directly from the given individual probabilities, 0.85 for correct results, and 0.05 for false positives. In five repetitions, we would "expect" 5 × 0.85 = 4.25 correct results, and 5 × 0.05 = 0.25 false positives.

(iv) The required expected value, E(X1 + X2), is obtained as follows.

E(X1 + X2) = Σ_{x2=0}^{5} Σ_{x1=0}^{5} (x1 + x2) f(x1, x2)
           = 3.4615 + 0.9323 + 0.1166 + 0.0053
           = 4.5157

which is slightly larger than E(X1) + E(X2) = 4.501 as computed from the rounded tabulated values. The small discrepancy is purely a rounding artifact: by linearity of expectation, E(X1 + X2) = E(X1) + E(X2) holds exactly, whether or not the two random variables are independent.
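The marginal expectations can be checked against the exact trinomial probabilities, avoiding the rounding in the table; a sketch:

```python
from math import comb

def trinomial(x1, x2, n=5, p1=0.85, p2=0.05):
    """P(X1 = x1 correct results, X2 = x2 false positives) out of n assays."""
    x3 = n - x1 - x2
    if x3 < 0:
        return 0.0          # impossible: x1 + x2 > n
    return comb(n, x1) * comb(n - x1, x2) * p1**x1 * p2**x2 * (1 - p1 - p2)**x3

grid = [(x1, x2) for x1 in range(6) for x2 in range(6)]
e_x1 = sum(x1 * trinomial(x1, x2) for x1, x2 in grid)   # expect 5 * 0.85 = 4.25
e_x2 = sum(x2 * trinomial(x1, x2) for x1, x2 in grid)   # expect 5 * 0.05 = 0.25
```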


Chapter 6

Exercises

6.1 (i) From the given transformation,

Y = 1/X

we obtain the inverse transformation:

X = 1/Y

so that from the pdf, f(x), we obtain the required pdf as:

fY(y) = p (1 − p)^{1/y − 1};  y = 1, 1/2, 1/3, . . .

(ii) By definition,

E(Y) = Σ_y y fY(y) = Σ_y y p (1 − p)^{1/y − 1}

or, with a more convenient change of variables back to the original x,

E(Y) = p Σ_{x=1}^{∞} (1/x) (1 − p)^{x − 1}

If we now let q = (1 − p), we obtain:

E(Y) = (p/q) Σ_{x=1}^{∞} q^x / x   (6.1)

Now, by defining the infinite sum indicated above as S, i.e.,

S = Σ_{x=1}^{∞} q^x / x   (6.2)


we may then observe that upon differentiating once with respect to q, the result is:

dS/dq = Σ_{x=1}^{∞} q^{x−1} = 1/(1 − q)

Thus, the infinite sum S in Eq (6.2) satisfies the differential equation:

dS/dq = 1/(1 − q)

which is easily solved to yield:

S = −ln(1 − q)

When this is substituted into Eq (6.1), the result is:

E(Y) = (p/q) ln[1/(1 − q)] = [p/(1 − p)] ln(1/p)

Thus, while E(X) = 1/p, E(Y) = E(1/X) is not p, but a more complicated function. Therefore, as is true with many nonlinear transformations,

E(1/X) ≠ 1/E(X)
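The closed form for E(1/X) can be checked against a truncated version of the series in Eq (6.1); a sketch for one arbitrary value of p:

```python
import math

p = 0.3
q = 1 - p
# Truncated series: E(1/X) = p * sum over x >= 1 of q^(x-1) / x
series = p * sum(q ** (x - 1) / x for x in range(1, 200))
closed = (p / q) * math.log(1 / p)
```

Both evaluate to roughly 0.516, visibly different from p = 0.3, illustrating that E(1/X) ≠ 1/E(X).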

6.2 To establish that, for the random variable, Y, whose pdf is given by

fY(y) = e^{−2^θ} y^θ / (log2 y)!;  y = 1, 2, 4, 8, . . .

the expected value, E(Y), is given by:

E(Y) = e^λ

we begin from the definition of the expectation:

E(Y) = Σ_y y fY(y)

and from the given pdf, obtain:

Σ_y y fY(y) = e^{−2^θ} Σ_y y · y^θ / (log2 y)!
            = e^{−2^θ} [1 + 2^1 2^θ / 1 + 2^2 2^{2θ} / 2! + 2^3 2^{3θ} / 3! + · · ·]


And since, from Eq (6.17) in the main text,

θ = log2 λ

so that λ = 2^θ, we have,

Σ_y y fY(y) = e^{−λ} [1 + 2^1 λ / 1 + 2^2 λ² / 2! + 2^3 λ³ / 3! + · · ·]   (6.3)
            = e^{−λ} [1 + (2λ) / 1 + (2λ)² / 2! + (2λ)³ / 3! + · · ·]
            = e^{−λ} e^{2λ} = e^λ

as required. Alternatively, one can go directly from

Y = 2^X

so that

E(Y) = E(2^X) = Σ_x 2^x λ^x e^{−λ} / x!

or, upon consolidation:

E(Y) = e^{−λ} Σ_x (2λ)^x / x!

as above in Eq (6.3), with the rest of the result therefore following immediately.
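Either route can be verified numerically by truncating the Poisson sum; a sketch for an arbitrary λ:

```python
import math

lam = 1.7
# E(2^X) for X ~ Poisson(lam), truncated after 60 terms (tail is negligible)
e_2x = sum(2 ** x * math.exp(-lam) * lam ** x / math.factorial(x)
           for x in range(60))
```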

6.3 From the given transformation,

Y = (1/β) e^{−X/β}

we obtain the inverse transformation:

x = ψ(y) = −β ln(βy);  0 < y < 1/β   (6.4)

so that the Jacobian of the transformation is obtained as:

J = ψ′(y) = −β/y

From here, the pdf, fY(y), for the transformed variable is obtained as:

fY(y) = fX(ψ(y)) |J| = (1/β) e^{ln(βy)} (β/y)

which simplifies to:

fY(y) = β;  0 < y < 1/β


This is clearly the reverse of the transformation in Example 6.2, up to the multiplicative constant, β. In Example 6.2, X was a uniform random variable, and the logarithmic transformation, Y = −ln X, along with the exponential inverse transformation given in Eq (6.31) in the text, produced the exponential random variable, Y. Here, the reverse is the case: X is the exponential random variable, and the exponential transformation of Eq (6.122), along with the logarithmic inverse transformation in Eq (6.4) above (reminiscent of Eq (6.29) in the main text), produced the uniform random variable, Y.
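That Y = (1/β)e^{−X/β} is uniform on (0, 1/β) can be checked by simulation; a sketch (β, seed, and sample size are arbitrary choices):

```python
import math
import random

beta = 2.0
random.seed(42)
xs = [random.expovariate(1 / beta) for _ in range(100_000)]   # X ~ exponential, mean beta
ys = [math.exp(-x / beta) / beta for x in xs]                  # transformed samples

mean_y = sum(ys) / len(ys)   # Uniform(0, 1/beta) has mean 1/(2*beta)
```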

6.4 (i) The transformation,

Y = X²

maps the space VX = {x : −1 < x < 1} onto VY = {y : 0 < y < 1}; it is not one-to-one because for all y > 0, there are two values of x corresponding to each value of y. The inverse transformation:

x = ±√y

has 2 roots for x:

x1 = ψ1(y) = √y
x2 = ψ2(y) = −√y

so that the corresponding Jacobians are

J1 = y^{−1/2}/2  and  J2 = −y^{−1/2}/2

The pdf for the transformed variable is therefore obtained as:

fY(y) = fX(√y) (y^{−1/2}/2) + fX(−√y) (y^{−1/2}/2)
      = (1/2)(√y + 1) · 1/(2√y) + (1/2)(−√y + 1) · 1/(2√y)
      = 1/(2√y)

Thus, the required pdf is obtained as:

fY(y) = { 1/(2√y);  0 < y < 1
        { 0;        elsewhere      (6.5)

As a check, observe that

∫_0^1 1/(2√y) dy = y^{1/2} |_0^1 = 1

(ii) The required expectations are obtained as:

E(X) = (1/2) ∫_{−1}^{1} x(x + 1) dx = (1/2) (x³/3 + x²/2) |_{−1}^{1} = 1/3


and

E(Y) = (1/2) ∫_0^1 (y/√y) dy = (1/2) (2y^{3/2}/3) |_0^1 = 1/3

so that in this particular case, E(X) = E(Y) = 1/3.
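The coincidence E(X) = E(Y) = 1/3 can be confirmed numerically from the two pdfs; a midpoint-rule sketch:

```python
def midpoint(f, a, b, n=200_000):
    """One-dimensional midpoint-rule integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

fX = lambda x: (x + 1) / 2            # pdf of X on (-1, 1)
fY = lambda y: 1 / (2 * y ** 0.5)     # pdf of Y = X^2 on (0, 1)

ex = midpoint(lambda x: x * fX(x), -1, 1)
ey = midpoint(lambda y: y * fY(y), 0, 1)
```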

6.5 (i) From the result given in Eq (6.54) in the main text, we know that the characteristic function of the random variable sum, Y = X1 + X2, given those of the contributing random variables, is:

φY(t) = φX1(t) φX2(t)

In this specific case, with the contributing φXi(t) as given in Eq (6.116), the result is:

φY(t) = e^{λ1(e^{jt}−1)} e^{λ2(e^{jt}−1)} = e^{(λ1+λ2)(e^{jt}−1)}

which is similar to the cf in Eq (6.116), except that λi has been replaced by (λ1 + λ2). Thus, by analogy (upon comparing this expression to the pdf-cf pair in Eqs (6.115) and (6.116)), we deduce that the pdf, fY(y), which corresponds to the cf, φY(t), obtained here, is:

fY(y) = e^{−(λ1+λ2)} (λ1 + λ2)^y / y!;  y = 0, 1, 2, . . .

(ii) Define λ∗n as the sum:

λ∗n =n∑

i=1

λi

then, ϕY (t), the cf of the random variable sum:

Y = X1 + X2 + · · ·+ Xn

where the cf of each contributing random variable, Xi, is as given in Eq (6.116),is obtained as:

ϕY (t) =n∏

i=1

ϕXi(t) = eλ∗n[(ejt−1)]

The corresponding pdf is therefore given by:

fY (y) =e−λ∗n(λ∗n)y

y!; y = 0, 1, 2, . . . (6.6)

By observing now that the pdf of this random variable sum is exactly of thesame form as that of the individual random variables contributing to the sum,we conclude that the random variable, X, possesses the “reproductive property”illustrated in Example 6.6 in the text.(iii) Let us represent the new random variable, Z, as:

Z =1n

Y ; z = 0, 1/n, 2/n, . . .

Page 95: Solutions Manual Random Phenomena Fundamentals and Engineering Applications of Probabiltiy and Statistics

6 CHAPTER 6.

where Y is the random variable sum in part (ii) above, whose pdf was obtainedin Eq (6.6) above. The objective now is to obtain fZ(z) from here.

First, we observe that the inverse transformation is:

Y = nZ

so that, from Eq (6.6), we obtain:

fZ(z) =e−λ∗n(λ∗n)nz

(nz)!; z = 0, 1/n, 2/n, . . .
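The reproductive property in part (i) can also be checked numerically (a sketch, not part of the original solution): convolving two Poisson pmfs directly should reproduce the Poisson pmf with parameter $\lambda_1 + \lambda_2$. The rate values below are illustrative, not taken from the exercise.

```python
import math

def poisson_pmf(lam, y):
    # f(y) = e^{-lam} lam^y / y!
    return math.exp(-lam) * lam ** y / math.factorial(y)

lam1, lam2 = 1.3, 2.2  # illustrative rates (assumed, not from the exercise)

def conv_pmf(y):
    # pmf of Y = X1 + X2 by direct convolution of the two Poisson pmfs
    return sum(poisson_pmf(lam1, k) * poisson_pmf(lam2, y - k) for k in range(y + 1))

# The convolution should match the Poisson(lam1 + lam2) pmf term by term.
max_err = max(abs(conv_pmf(y) - poisson_pmf(lam1 + lam2, y)) for y in range(20))
```

The maximum discrepancy is at the level of floating-point round-off, as the analogy argument predicts.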

6.6 The given random variable sum can equally well be written as:
\[ Z = (Y_1 + Y_2 + \cdots + Y_r) \]
where $Y_i = X_i^2$. And now, from the cf, $\phi_Y(t)$, given in Eq (6.119) in the text, we immediately obtain that:
\[ \phi_Z(t) = \prod_{i=1}^{r}\phi_{Y_i}(t) = \frac{1}{(1-j2t)^{r/2}} \]
since the random variables, $X_i$ (and hence, $Y_i$), are all mutually stochastically independent. By comparing this cf to that given in Eq (6.119) for Y, we note that the only (but very important) difference is that the exponent 1/2 in Eq (6.119) has now been replaced by r/2. By rewriting the pdf, $f_Y(y)$, as follows:
\[ f_Y(y) = \frac{1}{2^{1/2}\Gamma(1/2)}e^{-y/2}y^{(\frac{1}{2}-1)}; \quad 0 < y < \infty \]
(note in particular the subtle, but important, point of how the exponent of y, −1/2, has been rewritten as $(\frac{1}{2}-1)$), or by direct inversion of the cf, $\phi_Z(t)$, we obtain the required pdf as:
\[ f_Z(z) = \frac{1}{2^{r/2}\Gamma(r/2)}e^{-z/2}z^{(\frac{r}{2}-1)}; \quad 0 < z < \infty \]
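A quick simulation sketch (an addition for illustration, with $r = 4$ chosen arbitrarily): the pdf obtained above is the chi-square pdf with $r$ degrees of freedom, whose mean is $r$ and whose variance is $2r$, so a sum of $r$ squared standard normals should show those moments.

```python
import random

random.seed(1)
r, n = 4, 100_000  # r squared standard normals per draw (r = 4 is illustrative)

# Z = X1^2 + ... + Xr^2 should follow the pdf f_Z above,
# with mean r and variance 2r.
zs = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(r)) for _ in range(n)]
mean_z = sum(zs) / n
var_z = sum((z - mean_z) ** 2 for z in zs) / n
```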

6.7 In this case, the bivariate transformation will now be:
\begin{align*}
Y_1 &= X_1 + X_2 \\
Y_2 &= X_2
\end{align*}
with the inverse transformation easily obtained as:
\begin{align*}
x_1 &= y_1 - y_2 \\
x_2 &= y_2
\end{align*}
so that the Jacobian of the transformation is:
\[ J = \begin{vmatrix} 1 & -1 \\ 0 & 1 \end{vmatrix} \Rightarrow |J| = 1 \]
By independence, the joint pdf for $X_1$ and $X_2$ is given by:
\[ f_X(x_1,x_2) = \frac{1}{2\pi}e^{-\left(\frac{x_1^2+x_2^2}{2}\right)}; \quad -\infty < x_1 < \infty; \; -\infty < x_2 < \infty \]
and from the inverse transformation above, the joint pdf for $Y_1$ and $Y_2$ is obtained as:
\[ f_Y(y_1,y_2) = \frac{1}{2\pi}e^{-\left[\frac{(y_1-y_2)^2+y_2^2}{2}\right]}; \quad -\infty < y_1 < \infty; \; -\infty < y_2 < \infty \]
If we now integrate out $y_2$, we obtain the required marginal pdf for $y_1$ as:
\begin{align*}
f_1(y_1) &= \frac{1}{2\pi}e^{-y_1^2/2}\int_{-\infty}^{\infty}e^{(y_1y_2 - y_2^2)}\,dy_2 \\
&= \frac{1}{2\pi}e^{-y_1^2/2}\,e^{y_1^2/4}\sqrt{\pi}
\end{align*}
which yields:
\[ f_1(y_1) = \frac{1}{2\sqrt{\pi}}e^{-\frac{y_1^2}{4}}; \quad -\infty < y_1 < \infty \]
precisely as obtained in Example 6.8.
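As a numerical cross-check (an illustrative sketch; the helper names and grid parameters are choices made here, not part of the solution), the marginal can be computed by integrating the joint pdf over $y_2$ with a simple trapezoid rule and compared against the closed form just derived.

```python
import math

def f1_numeric(y1, lim=10.0, steps=20_000):
    # Integrate the joint pdf (1/2pi) exp(-[(y1-y2)^2 + y2^2]/2) over y2
    # on [-lim, lim] with the trapezoid rule; the tails beyond are negligible.
    h = 2.0 * lim / steps
    total = 0.0
    for i in range(steps + 1):
        y2 = -lim + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-((y1 - y2) ** 2 + y2 ** 2) / 2.0)
    return total * h / (2.0 * math.pi)

def f1_closed(y1):
    # the closed form obtained above: N(0, 2) density
    return math.exp(-y1 * y1 / 4.0) / (2.0 * math.sqrt(math.pi))

max_err = max(abs(f1_numeric(y) - f1_closed(y)) for y in (-2.0, 0.0, 1.5))
```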

6.8 If, instead of the squaring transformation of Eq (6.100) in the text, we use the one given in Eq (6.122), the resulting bivariate transformation will be:
\begin{align*}
Y_1 &= \frac{X_1}{X_2} \\
Y_2 &= X_1
\end{align*}
with the inverse transformation,
\begin{align*}
x_1 &= y_2 \\
x_2 &= \frac{y_2}{y_1}
\end{align*}
The Jacobian, in this case, is:
\[ J = \begin{vmatrix} 0 & 1 \\ -\dfrac{y_2}{y_1^2} & \dfrac{1}{y_1} \end{vmatrix} = \frac{y_2}{y_1^2} \tag{6.7} \]
so that:
\[ |J| = \frac{|y_2|}{y_1^2} \]
still vanishes at the single point $y_2 = 0$, and is undefined for $y_1 = 0$. Once more, by independence, the joint pdf for $X_1$ and $X_2$ is given by:
\[ f_X(x_1,x_2) = \frac{1}{2\pi}e^{-\left(\frac{x_1^2+x_2^2}{2}\right)}; \quad -\infty < x_1 < \infty; \; -\infty < x_2 < \infty \]
and from the inverse transformation above, the joint pdf for $Y_1$ and $Y_2$ is obtained as:
\[ f_Y(y_1,y_2) = \frac{1}{2\pi}\frac{|y_2|}{y_1^2}e^{-\left(\frac{y_1^2y_2^2+y_2^2}{2y_1^2}\right)}; \quad -\infty < y_1 < 0 \text{ or } 0 < y_1 < \infty; \; -\infty < y_2 < 0 \text{ or } 0 < y_2 < \infty \]
The marginal pdf for $y_1$ may now be obtained by integrating out $y_2$, paying attention to the fact that in the interval $(-\infty, 0)$, $|y_2| = -y_2$, while in the interval $(0, \infty)$, $|y_2| = y_2$; the result is:
\[ f_1(y_1) = \frac{1}{2\pi}\left[\int_{-\infty}^{0}\frac{-y_2}{y_1^2}e^{-\frac{(y_1^2+1)y_2^2}{2y_1^2}}\,dy_2 + \int_{0}^{\infty}\frac{y_2}{y_1^2}e^{-\frac{(y_1^2+1)y_2^2}{2y_1^2}}\,dy_2\right] \]
Now, let
\[ C = \left(\frac{y_1^2+1}{y_1^2}\right) \]
in which case, we obtain
\[ f_1(y_1) = \frac{1}{2\pi}\frac{1}{y_1^2}\left[\int_{-\infty}^{0}-y_2e^{-\frac{Cy_2^2}{2}}\,dy_2 + \int_{0}^{\infty}y_2e^{-\frac{Cy_2^2}{2}}\,dy_2\right] \]
which, because
\[ \int_{-\infty}^{0}-y_2e^{-\frac{Cy_2^2}{2}}\,dy_2 = \frac{1}{C} \quad \text{and} \quad \int_{0}^{\infty}y_2e^{-\frac{Cy_2^2}{2}}\,dy_2 = \frac{1}{C} \]
simplifies to:
\[ f_1(y_1) = \frac{1}{\pi}\left[\frac{1}{(1+y_1^2)}\right]; \quad -\infty < y_1 < \infty \]
as obtained in the text. The transformation in Eq (6.100) is somewhat easier to use primarily because the resulting Jacobian is less complicated.
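The Cauchy result can be illustrated by simulation (an addition, not part of the original solution; the sample size and probability checked are arbitrary choices): the ratio of two independent standard normals should satisfy $P(-1 \le Y_1 \le 1) = \frac{1}{\pi}[\arctan(1) - \arctan(-1)] = 1/2$.

```python
import random

random.seed(2)
n = 200_000
hits = 0
for _ in range(n):
    # ratio of two independent standard normals
    y = random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0)
    if -1.0 <= y <= 1.0:
        hits += 1

# For the standard Cauchy pdf 1/[pi(1 + y^2)], P(-1 <= Y <= 1) = 1/2.
frac = hits / n
```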

6.9 If $|y_2|$ is replaced by $y_2$ in Eq (6.104) in the text, the resulting integral is:
\begin{align*}
I &= \frac{1}{2\pi}\int_{-\infty}^{\infty}y_2e^{-\frac{(y_1^2+1)y_2^2}{2}}\,dy_2 \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}y_2e^{-\frac{ky_2^2}{2}}\,dy_2 \\
&= \frac{1}{2\pi}\left(-\frac{1}{k}\right)e^{-\frac{ky_2^2}{2}}\bigg|_{-\infty}^{\infty} = 0
\end{align*}
which, of course, is not the pdf obtained in Eq (6.106). The reason for this problem is that the integral over the negative half-line,
\[ \int_{-\infty}^{0}y_2e^{-\frac{(y_1^2+1)y_2^2}{2}}\,dy_2 = -\frac{1}{(1+y_1^2)} \]
is the exact negative of its symmetric counterpart,
\[ \int_{0}^{\infty}y_2e^{-\frac{(y_1^2+1)y_2^2}{2}}\,dy_2 = \frac{1}{(1+y_1^2)} \]
so that without the introduction of $-y_2$ in the former integral (as $|y_2|$ requires), the two contributing integral halves cancel out. Correctly using the absolute value of the Jacobian is therefore crucial.

Application Problems

6.10 First, define the random variable, Z, as
\[ Z = \left(\frac{X-350}{\sigma_i}\right) \]
with the inverse transformation,
\[ x = \sigma_i z + 350 \]
so that the Jacobian of the transformation will be:
\[ J = \sigma_i \]
Then, from the given $f(x)$, we obtain:
\[ f_Z(z) = \frac{1}{\sqrt{2\pi}}\exp\left(\frac{-z^2}{2}\right) \]
Now, from this pdf, and the fact that, in terms of Z, Eq (6.124) for the "relative thickness variability" is given as:
\[ Y = Z^2 \]
we may now invoke the results from Example 6.3 directly to obtain the required pdf, $f_Y(y)$, as:
\[ f_Y(y) = \frac{1}{\sqrt{2\pi}}e^{-y/2}y^{-1/2} \]


6.11 The condition to be satisfied for the pdf, $f(\theta)$, is:
\[ \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}c\,d\theta = c\,\theta\Big|_{-\frac{\pi}{2}}^{\frac{\pi}{2}} = \pi c = 1 \]
which is solved to yield:
\[ c = \frac{1}{\pi} \]
as required.

Next, from the geometry of the problem, we observe that the relationship between the distance, y, and the launch angle, θ, is:
\[ y = \tan\theta; \quad -\infty < y < \infty \]
From here, we obtain the inverse transformation as:
\[ \theta = \tan^{-1}y \]
The Jacobian of this transformation is:
\[ J = \frac{\partial\theta}{\partial y} = \frac{1}{1+y^2} \]
so that the required pdf is obtained as:
\[ f_Y(y) = \frac{1}{\pi}\left[\frac{1}{(1+y^2)}\right]; \quad -\infty < y < \infty \]

6.12 (i) From the transformation in Eq (6.127), with the inverse transformation,
\[ x = \frac{y}{5} \]
so that the Jacobian of the transformation is:
\[ J = \frac{1}{5} \]
we obtain the required pdf, $f_Y(y)$, as:
\[ f_Y(y) = \frac{1}{5\tau}e^{-y/5\tau}; \quad 0 < y < \infty \tag{6.8} \]
which is of the same form as the pdf for the single CSTR in Eq (6.126), but with the single CSTR pdf parameter, τ, replaced with 5τ.
(ii) It is best to use the characteristic function approach. In this case, the cf corresponding to the pdf in Eq (6.126) is (from Examples 6.4 and 6.5):
\[ \phi_{X_i}(t) = \frac{1}{(1-j\tau_it)} \]
As such, $\phi_Z(t)$, for the random variable sum in Eq (6.128) for the ensemble of 5 CSTRs in series, is given by:
\[ \phi_Z(t) = \prod_{i=1}^{5}\phi_{X_i}(t) = \prod_{i=1}^{5}\frac{1}{(1-j\tau_it)} \]
Not much else can be said about the corresponding pdf, $f_Z(z)$, but if $\tau_1 = \tau_2 = \cdots = \tau_5 = \tau$, then:
\[ \phi_Z(t) = \frac{1}{(1-j\tau t)^5} \]
and from Example 6.6, we are able to determine that the pdf corresponding to this cf is:
\[ f_Z(z) = \frac{1}{\tau^5\Gamma(5)}e^{-z/\tau}z^4; \quad 0 < z < \infty \tag{6.9} \]
(iii) The pdf for the ensemble of 5 reactors in series with identical values of the parameters $\tau_i$ is shown in Eq (6.9), which is clearly not the same as the expression for $f_Y(y)$ in Eq (6.8). These pdfs are plotted in Fig 6.1 for a generic value of $\tau = 1$.


Figure 6.1: Probability distribution functions for the residence time in (i) a single CSTR with mean residence time 5τ = 5 (Eq (6.8), solid line); and (ii) 5 identical CSTRs in series, each with mean residence time τ = 1 (Eq (6.9), dashed line). Note that the two pdfs are quite different.
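The contrast visible in Fig 6.1 can be quantified numerically (an illustrative sketch added here, with τ = 1 as in the figure and the integration grid chosen arbitrarily): the two pdfs share the mean 5τ, but the exponential pdf of Eq (6.8) has variance $(5\tau)^2 = 25$, while the gamma pdf of Eq (6.9) has variance $5\tau^2 = 5$.

```python
import math

tau = 1.0  # generic value used for Fig 6.1

def f_single(t):
    # Eq (6.8): exponential pdf with parameter 5*tau
    return math.exp(-t / (5.0 * tau)) / (5.0 * tau)

def f_series(t):
    # Eq (6.9): gamma (Erlang) pdf for 5 identical CSTRs in series
    return math.exp(-t / tau) * t ** 4 / (tau ** 5 * math.gamma(5))

def moment(f, k, upper=200.0, steps=200_000):
    # k-th raw moment by midpoint-rule integration on [0, upper]
    h = upper / steps
    return sum(f((i + 0.5) * h) * ((i + 0.5) * h) ** k * h for i in range(steps))

mean_single, mean_series = moment(f_single, 1), moment(f_series, 1)
var_single = moment(f_single, 2) - mean_single ** 2
var_series = moment(f_series, 2) - mean_series ** 2
```

Equal means but very different spreads is exactly why the two curves in Fig 6.1 look so different.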

6.13 Let $X_1$ be the random variable representing the total number of flaws found on the driver/passenger doors; $X_2$, the flaws found on the midsection doors; and $X_3$, the flaws found on the rear trunk/tailgate doors. These random variables all have the same pdf shown in Eq (6.129), but with different parameters, given as $\lambda_1 = 0.5$, $\lambda_2 = 0.75$ and $\lambda_3 = 1.0$, respectively. The required pdf is for the composite random variable, Y, the total number of flaws on the completely assembled minivan. Observe therefore that:
\[ Y = X_1 + X_2 + X_3 \]
It is best to use the characteristic function approach for this problem. From Exercise 6.5 above, we observe that the cf corresponding to the pdf in Eq (6.129) is:
\[ \phi_{X_i}(t) = e^{\lambda_i(e^{jt}-1)}, \]
for each of the three random variables in question. Thus, from the result given in Eq (6.54) in the main text, we know that the cf of the random variable sum Y is, in this case:
\begin{align*}
\phi_Y(t) &= \phi_{X_1}(t)\phi_{X_2}(t)\phi_{X_3}(t) \\
&= e^{(\lambda_1+\lambda_2+\lambda_3)(e^{jt}-1)}
\end{align*}
We now deduce immediately that the corresponding pdf, $f_Y(y)$, is given by:
\[ f_Y(y) = \frac{e^{-(\lambda_1+\lambda_2+\lambda_3)}(\lambda_1+\lambda_2+\lambda_3)^y}{y!}; \quad y = 0, 1, 2, \ldots \]
so that, from the supplied values for the parameters, $\lambda_i;\ i = 1, 2, 3$, we obtain, finally:
\[ f_Y(y) = \frac{e^{-2.25}(2.25)^y}{y!}; \quad y = 0, 1, 2, \ldots \]
From here, the required probability of assembling a minivan with more than a total of 2 flaws on all its doors is determined by evaluating $P(Y > 2)$ from $f_Y(y)$; i.e.,
\begin{align*}
P(Y > 2) &= 1 - P(Y \le 2) = 1 - (f_Y(0) + f_Y(1) + f_Y(2)) \\
&= 1 - (0.105 + 0.237 + 0.267) = 0.391
\end{align*}
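The final arithmetic can be reproduced in a few lines (a verification sketch added here, using the λ values supplied in the problem):

```python
import math

lam = 0.5 + 0.75 + 1.0  # lambda_1 + lambda_2 + lambda_3 = 2.25

def poisson_pmf(y):
    return math.exp(-lam) * lam ** y / math.factorial(y)

# P(Y > 2) = 1 - [f(0) + f(1) + f(2)]
p_more_than_2 = 1.0 - (poisson_pmf(0) + poisson_pmf(1) + poisson_pmf(2))
```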

6.14 This non-square transformation can be made square by introducing any convenient additional "squaring transformation." We choose $Y_2 = X_2$, so that the complete bivariate transformation is now:
\begin{align*}
Y_1 &= \frac{X_1}{X_2} \\
Y_2 &= X_2
\end{align*}
The corresponding inverse transformation is:
\begin{align*}
x_1 &= y_1y_2 \\
x_2 &= y_2
\end{align*}
so that the Jacobian of the transformation is:
\[ J = \begin{vmatrix} y_2 & y_1 \\ 0 & 1 \end{vmatrix} = y_2 \]
and, therefore, $|J| = |y_2|$.

Now, by independence, the joint pdf for $X_1$ and $X_2$ is:
\[ f_X(x_1,x_2) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}x_1^{\alpha-1}x_2^{\beta-1}e^{-x_1}e^{-x_2}; \quad 0 < x_1 < \infty; \; 0 < x_2 < \infty \]
From here, and from the inverse transformation, we obtain the joint pdf for $Y_1$ and $Y_2$ as:
\begin{align*}
f_Y(y_1,y_2) &= \frac{1}{\Gamma(\alpha)\Gamma(\beta)}(y_1y_2)^{\alpha-1}y_2^{\beta-1}y_2e^{-y_1y_2}e^{-y_2} \\
&= \frac{1}{\Gamma(\alpha)\Gamma(\beta)}y_1^{\alpha-1}y_2^{\alpha+\beta-1}e^{-y_2(y_1+1)}
\end{align*}
We may now integrate out the "extraneous" variable, $y_2$, to obtain the desired marginal pdf, $f_1(y_1)$, as follows:
\[ f_1(y_1) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}y_1^{\alpha-1}\int_0^\infty y_2^{\alpha+\beta-1}e^{-y_2(y_1+1)}\,dy_2 \]
If we now let $C = (1+y_1)$, and introduce a new variable,
\[ z = Cy_2 \]
so that
\[ \frac{1}{C}dz = dy_2 \]
then:
\[ f_1(y_1) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}y_1^{\alpha-1}\int_0^\infty\frac{1}{C^{\alpha+\beta-1}}z^{\alpha+\beta-1}e^{-z}\frac{1}{C}\,dz \]
which simplifies to:
\[ f_1(y_1) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)}y_1^{\alpha-1}\frac{1}{C^{\alpha+\beta}}\int_0^\infty z^{\alpha+\beta-1}e^{-z}\,dz \]
and since the surviving integral term is the definition of the Gamma function, $\Gamma(\alpha+\beta)$, we obtain finally that:
\[ f_1(y_1) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{y_1^{\alpha-1}}{(1+y_1)^{\alpha+\beta}} \]
as required.


6.15 (i) Since the expectation operator, $E(\cdot)$, is a linear operator, then from Eq (6.134), we obtain immediately,
\[ E(X) = 0.4E(V) + 100 = 0.4\mu_V + 100 \]
as required. And since $Var(X) = E(X-\mu_X)^2$, with $\mu_X = E(X)$ obtained as above, we note that
\begin{align*}
(X-\mu_X)^2 &= [(0.4V+100) - (0.4\mu_V+100)]^2 \\
&= 0.16(V-\mu_V)^2
\end{align*}
where, upon taking expectations, we obtain:
\[ \sigma_X^2 = Var(X) = 0.16\sigma_V^2 \]
(ii) If the expression in Eq (6.134) is considered as a transformation from V to X, then the inverse transformation is:
\[ v = 2.5(x-100) \]
so that $|J| = 2.5$. Then, from this and the pdf in Eq (6.135), the required pdf, $f_X(x)$, is therefore given by:
\[ f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}}\exp\left[\frac{-(x-\mu_X)^2}{2\sigma_X^2}\right] \]

6.16 The Taylor series approximation of Eq (6.138) is:
\[ Q_0 \approx Q_0^* + \frac{210}{4}(M^*)^{-1/4}(M-M^*) \tag{6.10} \]
And, given the mean mass, $M^* = 75$, it follows from Eq (6.138) that the corresponding value for the resting metabolic rate, $Q_0^*$, is:
\[ Q_0^* = 70\times(75)^{3/4} = 1784.00 \]
Thus, Eq (6.10) becomes:
\[ Q_0 \approx 1784.00 + 17.84(M-75) \]
From here, since $M^* = 75$ is $E(M)$, we obtain:
\[ E(Q_0) \approx 1784.00 \]
and also:
\[ Var(Q_0) \approx (17.84)^2\times Var(M) = 3978.32 \]
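These numbers can be recomputed directly (a sketch added for verification; note that the value Var(M) = 12.5 used below is inferred from the quoted Var(Q0) = 3978.32, since Var(M) itself is not stated in this excerpt):

```python
# First-order Taylor expansion of Q0 = 70*M^(3/4) about M* = 75 (Eq (6.10))
M_star = 75.0
Q_star = 70.0 * M_star ** 0.75            # approximately 1784.00
slope = (210.0 / 4.0) * M_star ** -0.25   # (210/4)*(M*)^(-1/4), approximately 17.84

# Var(M) = 12.5 is an assumption inferred from the quoted Var(Q0);
# the variance propagates through the linearization as slope^2 * Var(M).
var_M = 12.5
var_Q0 = slope ** 2 * var_M
```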


Chapter 8

Exercises

8.1 By definition, the variance of any random variable, X, is:
\[ \sigma^2 = E[(X-\mu)^2] \]
In the case of the Bernoulli random variable, with $\mu = p$ (see Eq (8.11)),
\begin{align*}
\sigma^2 &= \sum_{x=0}^{1}(x-p)^2f(x) \\
&= p^2f(0) + (1-p)^2f(1) \\
&= p^2(1-p) + (1-p)^2p \\
&= p(1-p)
\end{align*}
as required. Similarly, the MGF is obtained as follows:
\begin{align*}
M_X(t) = E\left(e^{tX}\right) &= \sum_{x=0}^{1}e^{tx}f(x) \\
&= f(0) + e^tf(1) \\
&= (1-p) + pe^t
\end{align*}
as required. Finally, for the characteristic function,
\begin{align*}
\phi_X(t) = E\left(e^{jtX}\right) &= \sum_{x=0}^{1}e^{jtx}f(x) \\
&= (1-p) + pe^{jt}
\end{align*}
as required.

8.2 (i) The required pdf is given in the table below, with the plot in the accompanying Fig 8.1.

x      f(x)
0      0.222
1      0.556
2      0.222
3      0.000
4      0.000
5      0.000
TOTAL  1.000


Figure 8.1: The hypergeometric random variable pdf, with n = 5, Nd = 2, and N = 10.

(ii) $P(X > 1) = 1 - P(X \le 1) = 0.222$ (or, in this specific case, this is the same as $P(X = 2)$). Also, $P(X < 2) = f(0) + f(1) = 0.778$.
(iii) $P(1 \le X \le 3) = f(1) + f(2) + f(3) = 0.778$.
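The tabulated values can be reproduced from the hypergeometric pmf directly (a verification sketch added here, using the parameters stated in the exercise; `hyper_pmf` is a helper name chosen for illustration):

```python
from math import comb

def hyper_pmf(x, N, Nd, n):
    # P(X = x) = C(Nd, x) C(N - Nd, n - x) / C(N, n)
    return comb(Nd, x) * comb(N - Nd, n - x) / comb(N, n)

f = [hyper_pmf(x, 10, 2, 5) for x in range(3)]  # f(0), f(1), f(2); f(x) = 0 for x > 2
p_gt_1 = f[2]           # P(X > 1)
p_lt_2 = f[0] + f[1]    # P(X < 2)
p_1_to_3 = f[1] + f[2]  # P(1 <= X <= 3), since f(3) = 0
```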

8.3 The random variable in question is hypergeometric, with $N = 100$, $N_d = 5$, $n = 10$; the required probability, $P(X = 0)$, is obtained as:
\[ f(0) = 0.584 \]
Thus, the probability of accepting the entire crate under the indicated sampling plan is almost 0.6. If the sample size is increased to 20 (from 10), (i.e., $n = 20$), the result is:
\[ f(0) = 0.319 \]
and the probability of acceptance is reduced by more than 45%.
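Both acceptance probabilities follow from the hypergeometric pmf with $x = 0$ (a verification sketch added here; `p_accept` is a helper name chosen for illustration):

```python
from math import comb

def p_accept(n, N=100, Nd=5):
    # probability that a sample of n (drawn without replacement) has zero defectives
    return comb(N - Nd, n) / comb(N, n)

p10, p20 = p_accept(10), p_accept(20)
reduction = 1.0 - p20 / p10   # relative drop in acceptance probability
```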


8.4 For the binomial random variable,
\[ E(X) = \sum_{x=0}^{n}xf(x) = \sum_{x=0}^{n}x\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} \]
and, because there is no contribution to the indicated sum when x = 0, this reduces to:
\begin{align*}
E(X) &= \sum_{x=1}^{n}x\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} \\
&= \sum_{x=1}^{n}\frac{n!}{(x-1)!(n-x)!}p^x(1-p)^{n-x} \\
&= \sum_{x=1}^{n}\frac{n(n-1)!}{(x-1)!(n-x)!}pp^{x-1}(1-p)^{n-x}
\end{align*}
Now, first, by letting $y = x-1$, we obtain from here:
\[ E(X) = np\sum_{y=0}^{n-1}\frac{(n-1)!}{y!(n-y-1)!}p^y(1-p)^{n-y-1} \]
and finally, by letting $m = n-1$, we obtain:
\[ E(X) = np\sum_{y=0}^{m}\frac{m!}{y!(m-y)!}p^y(1-p)^{m-y} = np \tag{8.1} \]
because the term under the sum is precisely the binomial pdf, so that the indicated sum is identical to 1, hence the result.

For the binomial random variable variance, we begin from the fact that:
\[ Var(X) = E[(X-\mu)^2] = E(X^2) - \mu^2 = E(X^2) - n^2p^2 \tag{8.2} \]
In this case, we have that:
\begin{align*}
E(X^2) &= \sum_{x=0}^{n}x^2\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} \\
&= \sum_{x=1}^{n}x\frac{n(n-1)!}{(x-1)!(n-x)!}pp^{x-1}(1-p)^{n-x} \\
&= np\sum_{y=0}^{m}(y+1)\frac{m!}{y!(m-y)!}p^y(1-p)^{m-y} \tag{8.3}
\end{align*}
where we have made use of the same variable changes employed in determining E(X) above.

Observe that Eq (8.3) may now be expanded into the following two terms:
\[ E(X^2) = np\left\{\left[\sum_{y=0}^{m}y\frac{m!}{y!(m-y)!}p^y(1-p)^{m-y}\right] + \left[\sum_{y=0}^{m}\frac{m!}{y!(m-y)!}p^y(1-p)^{m-y}\right]\right\} \]
And now, from earlier results, we recognize the first term as the expected value of the binomial random variable, Y, with parameters m, p (i.e., it is equal to mp); the second sum is 1; so that the equation simplifies to give:
\[ E(X^2) = np(mp+1) = np[(n-1)p+1] \]
Upon substituting this into Eq (8.2), we obtain, upon further simplification, the desired result:
\[ Var(X) = np(1-p) \]

8.5 The characteristic function for the Bernoulli random variable, $X_i$, is:
\[ \phi_i(t) = [pe^{jt} + (1-p)] \]
For the indicated random variable sum, therefore, the corresponding characteristic function, $\phi_X(t)$, will be given by:
\[ \phi_X(t) = \prod_{i=1}^{n}[pe^{jt} + (1-p)] = [pe^{jt} + (1-p)]^n \]
which is precisely the characteristic function for the binomial random variable, establishing the required result.

8.6 The computed hypergeometric pdf, $f_H(x)$, and the binomial counterpart, $f_B(x)$, are shown in the table below and plotted in Fig 8.2, from where one is able to see the similarities. In the limit as $N \to \infty$ (with the sample size n fixed), the two pdfs will coincide, provided $N_d/N = p$, as is the case here.

x   fH(x)  fB(x)
0   0.016  0.056
1   0.135  0.188
2   0.348  0.282
3   0.348  0.250
4   0.135  0.146
5   0.016  0.058
6   0.000  0.016
7   0.000  0.003
8   0.000  0.000
9   0.000  0.000
10  0.000  0.000


Variable

Figure 8.2: The hypergeometric random variable pdf, with n = 10, Nd = 5, and N = 20 (solid line, circles), and the binomial pdf, with n = 10, p = 0.25 (dashed line, squares).

8.7 From the binomial pdf
\[ f(x) = \frac{n!}{x!(n-x)!}p^x(1-p)^{n-x} \]
we obtain that:
\begin{align*}
f(x+1) &= \frac{n!}{(x+1)!(n-x-1)!}p^{x+1}(1-p)^{n-x-1} \\
&= \frac{n!}{(x+1)x!\,\frac{(n-x)!}{(n-x)}}\,pp^x(1-p)^{-1}(1-p)^{n-x} \\
&= \left(\frac{n-x}{x+1}\right)\left(\frac{p}{1-p}\right)\frac{n!}{x!(n-x)!}p^x(1-p)^{n-x}
\end{align*}
from which we immediately obtain:
\[ f(x+1) = \left(\frac{n-x}{x+1}\right)\left(\frac{p}{1-p}\right)f(x) \tag{8.4} \]
thereby establishing that:
\[ \rho(n,x,p) = \left(\frac{n-x}{x+1}\right)\left(\frac{p}{1-p}\right) \]
Now, because x is not a continuous variable, one cannot "differentiate" f(x) in order to determine $x^*$, the value at which a maximum is achieved; however, one can use the finite difference, which, from the result above, is given as:
\[ f(x+1) - f(x) = \left[\left(\frac{n-x}{x+1}\right)\left(\frac{p}{1-p}\right) - 1\right]f(x) \]
We observe from this expression that at the "turning" point, $f(x^*+1) - f(x^*)$ will be zero, i.e.,
\[ \left[\left(\frac{n-x}{x+1}\right)\left(\frac{p}{1-p}\right) - 1\right] = 0 \]
which requires that:
\[ \frac{p(n-x) - (1-p)(x+1)}{(1-p)(x+1)} = 0 \]
When solved for x, the result is:
\[ x^* = (n+1)p - 1 \tag{8.5} \]
To confirm that this is a maximum, observe that (a) when $x < x^*$, two immediate implications, from Eq (8.5), are that
\[ x+1 < (n+1)p \]
and also that:
\[ (n-x) > (n-x^*) = (n+1)(1-p) \]
or, alternatively:
\[ (n+1)(1-p) < (n-x) \]
Since all the quantities involved are positive, these two results combine to yield:
\[ (x+1)(1-p) < p(n-x) \]
so that, from Eq (8.4),
\[ f(x+1) > f(x) \]
(b) When $x > x^*$, it is easy to see that the opposite is the case, with $f(x+1) < f(x)$. Therefore, $x^*$ as given in Eq (8.5) is a true maximum.

Finally, note that because $f(x^*+1) - f(x^*)$ must be zero at this maximum, this means that:
\[ f(x^*+1) = f(x^*) \]
with the implication that if $x^*$ is an integer (so that $(x^*+1)$ is also an integer), then the pdf, f(x), achieves a maximum at both $x^*$ and $x^*+1$. (For example, for n = 4 and p = 0.2, $x^* = 0$ and 1 are the two values at which the binomial pdf attains a maximum; similarly, for n = 7 and p = 0.5, the binomial pdf achieves its maximum at $x^* = 3$ and 4.)
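The double-maximum case can be confirmed directly for one of the quoted examples (a verification sketch added here, using n = 7 and p = 0.5 from the text):

```python
from math import comb

def binom_pmf(x, n, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 7, 0.5
x_star = (n + 1) * p - 1  # Eq (8.5): here x* = 3, an integer
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
peak = max(pmf)
# the pmf attains its maximum at both x* = 3 and x* + 1 = 4
```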

8.8 The required conditional pdfs are obtained from the trinomial pdf by definition as follows:
\[ f(x_1|x_2) = \frac{f(x_1,x_2)}{f_2(x_2)}; \quad f(x_2|x_1) = \frac{f(x_1,x_2)}{f_1(x_1)} \]
where $f_1(x_1)$ and $f_2(x_2)$ are marginal distributions of the random variables $x_1$ and $x_2$ respectively, and $f(x_1,x_2)$ is the joint pdf:
\[ f(x_1,x_2) = \frac{n!}{x_1!x_2!(n-x_1-x_2)!}p_1^{x_1}p_2^{x_2}(1-p_1-p_2)^{n-x_1-x_2} \]
From the results presented in the text in Eqs (8.44) and (8.55), we know that the marginal distributions are given as follows (each being the pdf of a binomial random variable with parameters $(n,p_1)$ for $X_1$, and $(n,p_2)$ for $X_2$):
\begin{align*}
f_1(x_1) &= \frac{n!}{x_1!(n-x_1)!}p_1^{x_1}(1-p_1)^{n-x_1} \\
f_2(x_2) &= \frac{n!}{x_2!(n-x_2)!}p_2^{x_2}(1-p_2)^{n-x_2}
\end{align*}
As such, the required conditional pdfs are obtained immediately as follows:
\begin{align*}
f(x_1|x_2) &= \frac{(n-x_2)!}{x_1!(n-x_1-x_2)!}\frac{p_1^{x_1}}{(1-p_2)^{n-x_2}}(1-p_1-p_2)^{n-x_1-x_2} \\
f(x_2|x_1) &= \frac{(n-x_1)!}{x_2!(n-x_1-x_2)!}\frac{p_2^{x_2}}{(1-p_1)^{n-x_1}}(1-p_1-p_2)^{n-x_1-x_2}
\end{align*}

8.9 This two-dimensional random variable is clearly trinomial; the joint pdf in question is therefore given as:
\[ f(x_1,x_2) = \frac{n!}{x_1!x_2!(n-x_1-x_2)!}0.75^{x_1}0.2^{x_2}0.05^{n-x_1-x_2} \]
valid for $x_1 = 0, 1, 2$, and $x_2 = 0, 1, 2$ (with $x_1 + x_2 \le n = 2$); it is zero otherwise.

The desired joint pdf computed for specific values of the $(x_1,x_2)$ ordered pair is shown in the table below. The marginal pdfs, obtained from sums across appropriate rows and down appropriate columns of the joint pdf table, are also shown.

f(x1, x2)
X2 ↓ \ X1 →   0        1       2       f2(x2)
0             0.0025   0.075   0.5625  0.64
1             0.0200   0.300   0       0.32
2             0.0400   0       0       0.04
f1(x1)        0.0625   0.375   0.5625  1

From here, the required conditional pdfs are computed according to:
\[ f(x_1|x_2) = \frac{f(x_1,x_2)}{f_2(x_2)}; \quad f(x_2|x_1) = \frac{f(x_1,x_2)}{f_1(x_1)} \]
to yield the following results: first, for $f(x_1|x_2)$,

f(x1|x2)
X2 ↓ \ X1 →   0        1       2       TOTAL
0             0.0039   0.1172  0.8789  1.0000
1             0.0625   0.9375  0       1.0000
2             1.0000   0       0       1.0000

and for $f(x_2|x_1)$,

f(x2|x1)
X2 ↓ \ X1 →   0     1     2
0             0.04  0.20  1.00
1             0.32  0.80  0
2             0.64  0     0
TOTAL         1.00  1.00  1.00

8.10 (i) To establish the first equivalence (between Eq (8.51) and Eq (8.52)) requires showing that
\[ \binom{x+k-1}{k-1} = \binom{x+k-1}{x} \]
Observe that, by definition,
\[ \binom{i}{j} = \frac{i!}{j!(i-j)!} \]
so that:
\[ \binom{x+k-1}{k-1} = \frac{(x+k-1)!}{(k-1)!x!} = \frac{(x+k-1)!}{x!(k-1)!} = \binom{x+k-1}{x} \]
as required.

Next, when $\alpha$ is an integer, we know from Eq (8.56) that $\Gamma(\alpha) = (\alpha-1)!$; as a result, when written in terms of factorials, Eq (8.52) is:
\[ f(x) = \frac{(x+k-1)!}{x!(k-1)!}p^k(1-p)^x \]
In terms of the Gamma function, this then becomes:
\[ f(x) = \frac{\Gamma(x+k)}{\Gamma(k)x!}p^k(1-p)^x \]
as in Eq (8.53).
(ii) If X is now defined as the total number of trials required to obtain exactly k successes, then to observe exactly k successes, the following events would have to happen: (a) obtain $(x-k)$ failures and $(k-1)$ successes in the first $(x-1)$ trials; and (b) obtain a success on the $x$th trial. As such, under these circumstances,
\[ P(X = x) = \binom{x-1}{k-1}p^{k-1}(1-p)^{x-k}\times p = \binom{x-1}{k-1}p^k(1-p)^{x-k} \]
is the appropriate probability model.

A comparison of this pdf with that in Eq (8.51) shows that the variable x in Eq (8.51) has been replaced by $(x-k)$ in this new equation. This makes perfect sense because, here, X is the total number of trials required to obtain exactly k successes, which includes both the failures and the k successes, whereas in Eq (8.51), X is defined as the number of failures (not trials) before the $k$th success, in which case the total number of trials (required to obtain exactly k successes) is $X+k$.

8.11 From f(x), the pdf for the negative binomial random variable, we obtain that:
\[ \frac{f(x+1)}{f(x)} = \frac{(x+k)!\,x!}{(x+k-1)!\,(x+1)!}(1-p) = \left(\frac{x+k}{x+1}\right)(1-p) \]
Thus,
\[ \rho(k,x,p) = \left(\frac{x+k}{x+1}\right)(1-p) \]
From here,
\[ f(x+1) - f(x) = \left[\left(\frac{x+k}{x+1}\right)(1-p) - 1\right]f(x) \]
so that at the turning point, where $f(x+1) = f(x)$, we have:
\[ (x+k)(1-p) - (x+1) = 0 \]
which, when solved for x, yields the result:
\[ x^* = \frac{(1-p)k-1}{p} \]
Again, note that if $x^*$ is an integer, then, by virtue of the fact that $f(x^*) = f(x^*+1)$ at the maximum, the pdf attains a maximum at both $x^*$ and $x^*+1$. For example, with p = 0.5 and k = 3, f(x) is maximized when $x^* = 1$ and $x^* = 2$.

For the geometric random variable, X is defined as the total number of trials (not failures) required to obtain the first success. Thus, the geometric pdf is obtained from the alternate definition of the negative binomial pdf with k = 1; i.e., from:
\[ f(x) = \frac{(x-1)!}{(k-1)!(x-k)!}p^k(1-p)^{x-k} \]
Thus, in this particular case:
\[ \frac{f(x+1)}{f(x)} = \left(\frac{x}{x-k+1}\right)(1-p) \]
Specifically for the geometric random variable, with k = 1, we obtain:
\[ \frac{f(x+1)}{f(x)} = (1-p) = q \]
or
\[ f(x+1) = qf(x) \tag{8.6} \]
And now, since $q = 1-p$ is always such that $0 < q < 1$, Eq (8.6) indicates that the pdf for the geometric random variable is monotonically decreasing, since each succeeding value of f(x) will be smaller than the immediately preceding one.
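The double-mode example quoted above (p = 0.5, k = 3) can be verified directly from the negative binomial pmf of Eq (8.52) (a verification sketch added here):

```python
from math import comb

def negbin_pmf(x, k, p):
    # pmf of the number of failures x before the k-th success (Eq (8.52))
    return comb(x + k - 1, x) * p ** k * (1 - p) ** x

k, p = 3, 0.5
x_star = ((1 - p) * k - 1) / p  # = 1 for these values
pmf = [negbin_pmf(x, k, p) for x in range(20)]
peak = max(pmf)
# the pmf attains its maximum at both x* = 1 and x* + 1 = 2
```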

8.12 (i) For the geometric random variable with pdf:
\[ f(x) = pq^{x-1} \]
the expected value is obtained as:
\[ E(X) = \sum_{x=1}^{\infty}xpq^{x-1} = \frac{p}{q}\sum_{x=1}^{\infty}xq^x = \frac{p}{q}\frac{q}{(1-q)^2} = \frac{1}{p} \]
as required.

The variance is obtained from:
\[ Var(X) = E(X^2) - (E(X))^2 \]
$E(X^2)$ is obtained as:
\[ E(X^2) = \sum_{x=1}^{\infty}x^2pq^{x-1} = \frac{p}{q}\sum_{x=1}^{\infty}x^2q^x = \frac{p}{q}\frac{q(1+q)}{(1-q)^3} = \frac{1+q}{p^2} \]
so that:
\[ Var(X) = \frac{1+q}{p^2} - \frac{1}{p^2} = \frac{q}{p^2} \]
as required.
(ii) From the given probabilities, $f(2) = p(1-p) = 0.0475$, or $f(10) = p(1-p)^9 = 0.0315$, we obtain the geometric random variable parameter, p, as:
\[ p = 0.05 \]
The required probability, $P(2 \le X \le 10)$, may be obtained several different ways. One way (which is a bit "pedestrian") is to compute the individual probabilities, $f(2), f(3), \ldots, f(10)$, and add them; alternatively, we could use the cumulative probability $F(10) = P(X \le 10)$, and the fact that
\[ P(X \le 10) = f(0) + f(1) + P(2 \le X \le 10) \]
From the computed value of p, we obtain (say from MINITAB, or any other such software package) that:
\[ F(10) = P(X \le 10) = 0.4013; \quad f(0) = 0; \quad f(1) = 0.05 \]
so that the required probability is obtained as:
\[ P(2 \le X \le 10) = 0.401 - 0.050 = 0.351 \]
(iii) The implications of the supplied information are as follows:
\[ E(X) = \frac{1}{p} = 200 \Rightarrow p = 0.005 \]
with the required probability being $P(X > 200)$. From here, we obtain this probability as:
\[ P(X > 200) = 1 - P(X \le 200) = 1 - 0.633 = 0.367 \]
Thus, 36.7% of the polymer product is expected to have chains longer than 200 units.
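Both numerical answers can be reproduced without statistical software (a verification sketch added here, using the parameter values obtained above):

```python
# part (ii): geometric pmf f(x) = p*(1-p)^(x-1), x = 1, 2, ...
p = 0.05
f = lambda x: p * (1 - p) ** (x - 1)
p_2_to_10 = sum(f(x) for x in range(2, 11))  # the "pedestrian" direct sum

# part (iii): E(X) = 1/p = 200 => p = 0.005; P(X > 200) = (1-p)^200
p_chain = 0.005
p_gt_200 = (1 - p_chain) ** 200
```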

8.13 For the given logarithmic series pdf to be a legitimate pdf, the following condition must hold:
\[ \sum_{x=1}^{\infty}f(x) = \alpha\sum_{x=1}^{\infty}\frac{p^x}{x} = 1 \]
which requires:
\[ \alpha = \frac{1}{S(p)} \]
where
\[ S(p) = \sum_{x=1}^{\infty}\frac{p^x}{x} \]
To evaluate this infinite sum, we first differentiate once with respect to p to obtain:
\[ \frac{dS}{dp} = \sum_{x=1}^{\infty}p^{x-1} = \sum_{y=0}^{\infty}p^y = \frac{1}{1-p} \]
which, upon integration, yields the result:
\[ S = -\ln(1-p) \]
Thus, the constant $\alpha$ must be given by:
\[ \alpha = \frac{-1}{\ln(1-p)} \]
as required.

The expected value for this random variable is obtained as:
\[ E(X) = \sum_{x=1}^{\infty}xf(x) = \alpha\sum_{x=1}^{\infty}p^x = \frac{\alpha p}{(1-p)} \]
as required.

The variance is obtained from:
\[ Var(X) = E(X^2) - (E(X))^2 \]
First, we obtain $E(X^2)$ as:
\[ E(X^2) = \alpha\sum_{x=1}^{\infty}xp^x = \frac{\alpha p}{(1-p)^2} \]
so that:
\[ Var(X) = \frac{\alpha p}{(1-p)^2} - \frac{\alpha^2p^2}{(1-p)^2} = \frac{\alpha p(1-\alpha p)}{(1-p)^2} \]
as required.

By definition, the MGF is:
\[ M(t) = E(e^{tX}) = \alpha\sum_{x=1}^{\infty}\frac{e^{tx}}{x}p^x \]
If we represent the indicated sum by $S_e$, i.e.,
\[ S_e = \sum_{x=1}^{\infty}\frac{e^{tx}}{x}p^x \]
then,
\[ \frac{dS_e}{dp} = \sum_{x=1}^{\infty}e^{tx}p^{x-1} = \frac{1}{p}\sum_{x=1}^{\infty}r^x = \frac{1}{p}\left(\frac{r}{1-r}\right) \]
where
\[ r = pe^t \]
Thus,
\[ \frac{dS_e}{dp} = \frac{e^t}{1-pe^t} \Rightarrow dS_e = \frac{e^t\,dp}{1-pe^t} \]
so that upon integration, we obtain:
\[ S_e = -\ln(1-pe^t) \]
Therefore,
\[ M(t) = \alpha S_e = \frac{\ln(1-pe^t)}{\ln(1-p)} \]
as required. The characteristic function follows immediately from the arguments above, upon replacing $e^t$ with $e^{jt}$.

8.14 The general negative binomial pdf is:
\[ f(x) = \frac{(x+k-1)!}{x!(k-1)!}p^k(1-p)^x \]
When
\[ p = \frac{k}{k+\lambda} \Rightarrow p = \frac{1}{\left(1+\frac{\lambda}{k}\right)} \]
so that:
\[ 1-p = \frac{\lambda}{k+\lambda} \]
the general pdf given above becomes:
\begin{align*}
f(x) &= \frac{(x+k-1)!}{x!(k-1)!(k+\lambda)^x}\frac{\lambda^x}{\left(1+\frac{\lambda}{k}\right)^k} \\
&= \left[\frac{(x+k-1)!}{(k-1)!(k+\lambda)^x}\right]\frac{\lambda^x}{x!\left(1+\frac{\lambda}{k}\right)^k} \tag{8.7}
\end{align*}
The terms in the square brackets in Eq (8.7) above may be rewritten as follows:
\begin{align*}
\left[\frac{(x+k-1)!}{(k-1)!(k+\lambda)^x}\right] &= \frac{[k+(x-1)][k+(x-2)]\cdots k\,(k-1)!}{(k-1)!(k+\lambda)^x} \\
&= \frac{[k+(x-1)][k+(x-2)]\cdots k}{(k+\lambda)^x}
\end{align*}
where the numerator consists of exactly x terms. In the limit as $k \to \infty$, therefore, this ratio tends to 1, while $(1+\lambda/k)^k \to e^\lambda$, so that, in Eq (8.7) above,
\[ \lim_{k\to\infty}f(x) = \frac{\lambda^x}{x!}e^{-\lambda} \]
which is the pdf for a Poisson random variable.
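The limit can be observed numerically (an illustrative sketch added here; λ = 3 and the two k values are arbitrary choices): as k grows with $p = k/(k+\lambda)$, the negative binomial pmf approaches the Poisson(λ) pmf.

```python
from math import comb, exp, factorial

lam = 3.0  # illustrative Poisson parameter (assumed, not from the exercise)

def negbin_pmf(x, k):
    # negative binomial pmf with p = k/(k + lam), per Eq (8.7)
    p = k / (k + lam)
    return comb(x + k - 1, x) * p ** k * (1 - p) ** x

def poisson_pmf(x):
    return exp(-lam) * lam ** x / factorial(x)

def max_err(k):
    # largest pointwise gap between the two pmfs over a generous range of x
    return max(abs(negbin_pmf(x, k) - poisson_pmf(x)) for x in range(25))

e_small, e_large = max_err(10), max_err(1000)
```

The gap shrinks roughly in proportion to 1/k, consistent with the limiting argument above.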

8.15 From the pdf for the Poisson random variable,
\[ f(x) = \frac{\lambda^x}{x!}e^{-\lambda} \]
we obtain
\[ f(x+1) = \frac{\lambda^{x+1}}{(x+1)!}e^{-\lambda} \]
so that:
\[ \frac{f(x+1)}{f(x)} = \frac{\lambda}{x+1} \]
Thus, for $0 < \lambda < 1$, we observe immediately that
\[ f(x+1) < f(x) \]
always, so that under these conditions, the Poisson pdf is always monotonically decreasing. But when $\lambda > 1$,
\[ f(x+1) - f(x) = \left(\frac{\lambda}{x+1} - 1\right)f(x) \]
so that, at the turning point, where $f(x+1) - f(x) = 0$,
\[ \lambda = x+1 \]
implying that the maximum is attained at the value, $x^*$, given by
\[ x^* = \lambda - 1 \]
Observe that if $\lambda$ is an integer, then $x^*$ will also be an integer; and by virtue of the fact that when the maximum is attained, $f(x+1) = f(x)$, the implication is that, under these conditions, the Poisson pdf will achieve a maximum at the two values:
\begin{align*}
x^* &= \lambda - 1 \\
x^*+1 &= \lambda
\end{align*}
For example, for the Poisson random variable with $\lambda = 3$, this result states that the pdf achieves a maximum at x = 2 and x = 3. The computed values of the pdf are plotted for this specific case in Fig 8.3. Note the values of the pdf at these two values of x in relation to the values taken by the pdf at other values of x.

Figure 8.3: Illustrating the maxima of the pdf for a Poisson random variable with λ = 3.

8.16 (i) The complete pdf for the indicated binomial random variable, $f_B(x)$, and the corresponding pdf for the indicated Poisson random variable, $f_P(x)$, are both shown in the table below.

x    fB(x)              fP(x)
     X ~ Bi(10, 0.05)   X ~ P(0.5)
0    0.599              0.607
1    0.315              0.303
2    0.075              0.076
3    0.010              0.013
4    0.001              0.002
5    0.000              0.000
6    0.000              0.000
7    0.000              0.000
8    0.000              0.000
9    0.000              0.000
10   0.000              0.000

A plot of these two pdfs is shown in Fig 8.4, where the two are seen to be virtually indistinguishable.

Figure 8.4: Comparison of the Bi(10, 0.05) pdf with the Poisson(0.5) pdf.

(ii) When n = 20 and p = 0.5 for the binomial random variable (note the high probability of "success"), and λ = 10 for the Poisson random variable, the computed values for the resulting pdfs are shown in the table below.

x    fB(x)             fP(x)
     X ~ Bi(20, 0.5)   X ~ P(10)
0    0.000             0.000
1    0.000             0.000
2    0.000             0.002
3    0.001             0.008
4    0.005             0.019
5    0.015             0.038
6    0.037             0.063
7    0.074             0.090
8    0.120             0.113
9    0.160             0.125
10   0.176             0.125
11   0.160             0.114
12   0.120             0.095
13   0.074             0.073
14   0.037             0.052
15   0.015             0.035
16   0.005             0.022
17   0.001             0.013
18   0.000             0.007
19   0.000             0.004
20   0.000             0.002

A plot of the two pdfs is shown in Fig 8.5, where we notice that, even though the two pdfs are somewhat similar, the differences between them are more obvious than was the case in part (i) above.

Figure 8.5: Comparison of the Bi(20, 0.5) pdf with the Poisson(10) pdf.

The reason for this is that in (i), the probability of "success" is much lower than in part (ii). The fundamental connection between these two random variables is such that the Poisson approximation of the binomial random variable is better for smaller probabilities of success.

8.17 From the characteristic function given in Section 8.7.3 for the Poisson random variable (see also Eq (6.116)), i.e.,

φ_Xi(t) = e^(λi(e^(jt) − 1))

and from the results in Eq (6.54) in the main text, we know that φ_Y(t), the cf of the random variable sum:

Y = Σ_{i=1}^n Xi

is obtained as:

φ_Y(t) = Π_{i=1}^n φ_Xi(t) = e^((λ1 + λ2 + ⋯ + λn)(e^(jt) − 1)) = e^(λ*(e^(jt) − 1))

where λ* is the sum:

λ* = Σ_{i=1}^n λi

By comparison with the cf of the Poisson random variable, X, the expression shown above for φ_Y(t) indicates that Y is also a Poisson random variable, with parameter λ*. The corresponding pdf is given by:

f_Y(y) = e^(−λ*) (λ*)^y / y!;  y = 0, 1, 2, . . .
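The conclusion can also be checked by simulation; a minimal sketch (the rates λ1 = 1, λ2 = 2.5, λ3 = 0.5 are arbitrary illustrative choices, and numpy/scipy are assumed to be available):

```python
# Simulate Y = X1 + X2 + X3 for independent Poisson Xi and compare the
# empirical distribution of Y with Poisson(lambda*) where lambda* = sum(lambda_i).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lams = [1.0, 2.5, 0.5]                                  # illustrative rates
y = sum(rng.poisson(lam, size=200_000) for lam in lams)
emp = np.bincount(y, minlength=15)[:15] / y.size        # empirical pmf
theo = poisson.pmf(np.arange(15), sum(lams))            # Poisson(4.0) pmf
print(np.abs(emp - theo).max())                         # small sampling error
```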


8.18 The required probabilities are obtained as follows: The event of “not experiencing a yarn break in a particular shift” corresponds to x = 0, and therefore,

P(X = 0 | λ = 3) = 0.05

is the required probability in this case. For the event of “experiencing more than 3 breaks per shift,”

P(X > 3 | λ = 3) = 1 − P(X ≤ 3) = 1 − 0.647 = 0.353

8.19 This problem involves a Poisson random variable with intensity β = 0.0002 per sq cm. In an area of size 1 sq m (which is 10^4 sq cm),

λ = 0.0002 × 10^4 = 2

so that the required probability is obtained as

P(X > 2 | λ = 2) = 1 − P(X ≤ 2) = 0.323
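Both this probability and those of 8.18 follow directly from the Poisson pmf and cdf; a sketch using scipy.stats (an assumed tool):

```python
# P(X = 0 | lam = 3) and P(X > 3 | lam = 3) for the yarn-break problem (8.18),
# and P(X > 2 | lam = 2) for the flaw-count problem (8.19).
from scipy.stats import poisson

print(poisson.pmf(0, 3))              # ~ 0.050
print(1 - poisson.cdf(3, 3))          # ~ 0.353
lam = 0.0002 * 1e4                    # intensity x area = 2 flaws per sq m
print(1 - poisson.cdf(2, lam))        # ~ 0.323
```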

8.20 The required probabilities are shown in the table below for the given values of λ.

λ     P(X ≤ 2 | λ)
0.5   0.986
1     0.920
2     0.677
3     0.423

This shows the probability of observing 2 or fewer Poisson events monotonically decreasing as the mean number of occurrences increases. This makes perfect sense because as the mean number of occurrences increases, such conditions favor an increasingly higher number of occurrences, so that, commensurately, the probability of observing two or fewer occurrences should decrease.

8.21 Obtaining a total number of y hatchlings from x eggs is akin to observing Y successes from X trials, i.e., Y is a binomial random variable with parameters X (this time, itself a random variable) and p. In the current context, this implies that, given x, the number of eggs, the conditional pdf for Y is:

P(Y = y | X = x) = C(x, y) p^y (1 − p)^(x−y);  y = 0, 1, 2, . . . , x    (8.8)

The total, unconditional pdf for Y is obtained by summing over all the possible values of X (keep in mind: there can never be more hatchlings than eggs), i.e.,

P(Y = y) = Σ_{x=y}^∞ P(Y = y | X = x) P(X = x)
         = Σ_{x=y}^∞ C(x, y) p^y (1 − p)^(x−y) [λ^x e^(−λ) / x!]
         = Σ_{k=0}^∞ C(y + k, y) p^y (1 − p)^k [λ^(y+k) e^(−λ) / (y + k)!]


where we have introduced the variable change:

x = y + k

Observe that in physical terms, if x is the total number of eggs, and y is the number of successful hatchlings, then the newly introduced index, k, is the number of failed hatchlings.

The equation above may now be expanded and simplified further to yield:

P(Y = y) = Σ_{k=0}^∞ [(y + k)! / (y! k!)] (pλ)^y (1 − p)^k [λ^k e^(−λ) / (y + k)!]
         = [(pλ)^y / y!] e^(−λ) Σ_{k=0}^∞ [(1 − p)λ]^k / k!
         = [(pλ)^y / y!] e^(−λ) e^((1−p)λ)
         = e^(−pλ) (pλ)^y / y!

as required.
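The derivation can be spot-checked numerically by truncating the infinite sum; a sketch (λ = 4 and p = 0.3 are arbitrary illustrative values, and scipy is assumed):

```python
# Check that sum_x Bi(y | x, p) * Poisson(x | lam) equals Poisson(y | p*lam).
from scipy.stats import binom, poisson

lam, p = 4.0, 0.3                       # illustrative values
for y in range(6):
    direct = sum(binom.pmf(y, x, p) * poisson.pmf(x, lam)
                 for x in range(y, 200))        # truncated infinite sum
    assert abs(direct - poisson.pmf(y, p * lam)) < 1e-10
print("compound pdf matches Poisson(p * lambda)")
```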

Application Problems

8.22 (i) The problem involves a hypergeometric random variable, with N = 15, Nd = 4, and n = 2; the required probabilities are therefore obtained easily from the hypergeometric pdf as follows:
(a) P(X = 2) = 0.057
(b) P(X = 0) = 0.524
(c) P(X = 1) = 0.419

(ii) If the problem had been misconstrued as involving a binomial random variable, with the proportion of irregular chips in the lot as the binomial probability of “success,” i.e.,

p = Nd/N = 4/15 = 0.267

then the required probabilities, obtained from the Bi(n, p) pdf with n = 2, will be:
(a) f(2) = 0.071, compared to 0.057 above;
(b) f(0) = 0.538, compared to 0.524;
(c) f(1) = 0.391, compared to 0.419
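Both sets of numbers can be reproduced with scipy.stats (an assumed tool; note scipy's hypergeom argument order: population size, number of “marked” items, sample size):

```python
# Hypergeometric probabilities for N = 15, Nd = 4, n = 2, next to the
# binomial (mis)construal with p = 4/15.
from scipy.stats import hypergeom, binom

N, Nd, n = 15, 4, 2
for x in range(3):
    hx = hypergeom.pmf(x, N, Nd, n)
    bx = binom.pmf(x, n, Nd / N)
    print(f"x={x}: hypergeometric {hx:.3f}  binomial {bx:.3f}")
```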

8.23 This problem involves a trinomial random variable, with x1 as the total number of pumps working continuously for fewer than 2 years; x2 as the total number of pumps working continuously for 2 to 5 years; and x3 as the total number of pumps working for more than 5 years. As specified by the problem, the probabilities of each of these events are, respectively, p1 = 0.3; p2 = 0.5; p3 = 0.2. The appropriate joint pdf is therefore given by:

f(x1, x2) = [n! / (x1! x2! (n − x1 − x2)!)] p1^x1 p2^x2 p3^(n−x1−x2)

For this specific problem, n = 8, and x1 = 2, x2 = 5, x3 = 1; we therefore obtain the required probability as:

f(2, 5) = 0.095
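A sketch of the same calculation with scipy's multinomial (an assumed tool; the third count, x3 = 1, is supplied explicitly):

```python
# Trinomial probability f(2, 5) with n = 8 and p = (0.3, 0.5, 0.2).
from scipy.stats import multinomial

prob = multinomial.pmf([2, 5, 1], n=8, p=[0.3, 0.5, 0.2])
print(prob)   # ~ 0.095 (0.0945 unrounded)
```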

8.24 (i) The problem statement suggests that

50/N = 2/10

so that N = 250 is a reasonable estimate of the tiger population. The two most important sources of error are:

• The low sample size: a sample of 10 is too small for one to be confident that its composition will be representative of the entire population’s composition;

• It is also possible that the tagged tigers have not been completely “mixed in” uniformly with the population. Any segregation will lead to biased sampling one way or another (either too few, or too many, “tagged” tigers in the sample) and there will be no way of knowing which is which.

(ii) The applicable pdf in this case is:

f(x | n, p) = [n! / (x!(n − x)!)] p^x (1 − p)^(n−x)    (8.9)

from which we obtain the following table.

p     f(x = 2 | 10, p)
0.1   0.194
0.2   0.302
0.3   0.234

The indication is that of the three postulated values for p, p = 0.2 yields the highest probability of obtaining X = 2 tagged tigers from a sample of 10. On this basis alone, one would then consider that of the three postulated values of p, p = 0.2 seems most “likely” to represent the data.
(iii) In general, the pdf of interest is as shown in Eq (8.9) above, with x = 2, n = 10. The optimum of this function can be determined via the usual calculus route as follows:

df/dp = 2Cp(1 − p)^8 − 8Cp²(1 − p)^7 = 0


(where C, a constant, represents the factorials). This expression simplifies immediately to

2Cp(1 − p)^7 [(1 − p) − 4p] = 0

so that, provided that (1 − p) ≠ 0 and p ≠ 0, this is solved to yield

p = 0.2

confirming that indeed p = 0.2 provides an optimum for f. A second derivative evaluated at this value confirms that the optimum is indeed a maximum.
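The calculus result can also be confirmed by a brute-force search over p; a sketch (numpy/scipy assumed):

```python
# The likelihood Bi(x = 2 | n = 10, p), as a function of p, peaks at p = x/n.
import numpy as np
from scipy.stats import binom

p_grid = np.linspace(0.01, 0.99, 981)      # step of 0.001
lik = binom.pmf(2, 10, p_grid)
print(p_grid[np.argmax(lik)])              # 0.2, i.e. x/n
```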

8.25 (i) From the supplied data, it is straightforward to obtain the following empirical frequency distribution:

x      Frequency   Relative frequency fE(x)
0      4           0.133
1      8           0.267
2      8           0.267
3      6           0.200
4      3           0.100
5      1           0.033
6      0           0.000
TOTAL  30          1.000

The corresponding histogram is shown in Fig 8.6. The expected value for x is Σ_i xi fE(xi) = 1.967.

(ii) Finding contaminant particles on a silicon wafer is a rare event, and the actual data shows that when such particles are found, they are few in number. This suggests that the underlying phenomenon is Poisson; the postulated model is therefore:

f(x) = e^(−λ) λ^x / x!

Using λ = 2 (the result in (i) above rounded up to the nearest integer), we generate the following theoretical f(x), shown along with the empirical probability distribution in the following table:


Figure 8.6: Histogram of silicon wafer flaws.

x      f(x | λ = 2)   fE(x)
0      0.135          0.133
1      0.271          0.267
2      0.271          0.267
3      0.180          0.200
4      0.090          0.100
5      0.036          0.033
6      0.012          0.000
7      0.003          0.000
8      0.001          0.000
TOTAL  0.999          1.000

The theoretical distribution is seen to agree remarkably well with the empirical distribution.
(iii) From the theoretical distribution, we obtain that

P(X > 2 | λ = 2) = 1 − P(X ≤ 2 | λ = 2) = 1 − 0.677 = 0.323

so that, on average, about 32.3% of the wafers will contain more than 2 flaws. (The data set shows 1/3 of the sample wafers with 3 or more flaws.) Thus, according to the stipulated criterion, this particular process is no longer economically viable.
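A sketch of the whole fit, starting from the frequency table in part (i) (numpy/scipy assumed):

```python
# Empirical mean -> lambda = 2; Poisson pmf; and the fraction of wafers
# expected to carry more than 2 flaws.
import numpy as np
from scipy.stats import poisson

freq = np.array([4, 8, 8, 6, 3, 1, 0])            # observed counts, x = 0..6
xbar = (np.arange(7) * freq).sum() / freq.sum()   # 59/30 = 1.967
lam = round(xbar)                                 # rounded to lambda = 2
print(np.round(poisson.pmf(np.arange(7), lam), 3))
print(1 - poisson.cdf(2, lam))                    # ~ 0.323
```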

8.26 The frequency table and histogram generated from the supplied data are shown below:


x (Pumps)   Frequency   Relative frequency fE(x)
4           1           0.033
5           2           0.067
6           1           0.033
7           7           0.233
8           8           0.267
9           9           0.300
10          2           0.067
TOTAL       30          1.000

Figure 8.7: Histogram of available pumps.

The appropriate probability model is binomial, with n = 10 as the number of “trials,” and p as the probability that any particular pump functions properly.

From the data, the average number of available pumps (those functioning properly on any particular day) is 7.8 out of a total of 10. This is obtained in the usual fashion by summing up the total number of available pumps each day (234) and dividing by the total number of days (30).

From the expression for the mean of a Bi(n, p) random variable, the implication is that

np = 7.8

and since n = 10, we obtain:

p = 0.78

as an estimate of the binomial random variable probability of “success” in this case; that is, the probability that any particular pump functions properly on any particular day. From here, we are able to compute the theoretical pdf; this pdf is shown in the table below along with the corresponding empirical data.

x (Pumps)   Theoretical pdf f(x | p = 0.78)   Relative frequency fE(x)
4           0.009                             0.033
5           0.037                             0.067
6           0.110                             0.033
7           0.224                             0.233
8           0.298                             0.267
9           0.235                             0.300
10          0.083                             0.067
TOTAL       0.996                             1.000

The theoretical pdf (with p = 0.78) and the empirical frequency distribution are compared graphically in Fig 8.8 below, where the two are seen to be reasonably close. The binomial model therefore appears to be adequate.
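A sketch of the fit: estimate p from the sample mean, then tabulate the Bi(10, p̂) pmf over the observed range (numpy/scipy assumed):

```python
# Estimate p from the data (mean available pumps = 7.8 out of 10), then
# compute the Bi(10, 0.78) pmf for x = 4, ..., 10.
import numpy as np
from scipy.stats import binom

x = np.arange(4, 11)
freq = np.array([1, 2, 1, 7, 8, 9, 2])          # observed days, total 30
p_hat = (x * freq).sum() / freq.sum() / 10      # 7.8 / 10 = 0.78
print(p_hat)
print(np.round(binom.pmf(x, 10, p_hat), 3))     # theoretical pdf values
```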

Figure 8.8: Empirical distribution of available pumps (solid line; circles) and theoretical binomial pdf with n = 10; p = 0.78 (dashed line; squares).

8.27 (i) Let X represent the total number of pumps functioning at any particular point in time. Then the problem translates to determining the probability that x ≥ 4 for a binomial random variable, where n = 8, and p = 1 − 0.16 = 0.84 is the probability that the selected pump will function (since the probability that the pump will fail is given as 0.16). The required probability is P(X ≥ 4); and since X is discrete, this probability is obtained as:

P(X ≥ 4) = 1 − P(X ≤ 3) = 1 − 0.004 = 0.996


(ii) In this case, first, we need to determine P(X ≤ 5), which is obtained as 0.123. This indicates that, on average, the alarm will go off approximately 12.3 percent of the time. If a “unit of time” is assumed to be a day, then in a period of 30 days, one would expect the alarm to go off 0.123 × 30 = 3.69, or approximately 4, times. (Any other reasonable assumption about the “unit of time” is acceptable.)
(iii) Currently, with the probability of failure as 0.16 (so that the probability of functioning is 0.84), the probability that four or more pumps will fail (which, with n = 8 as the total number of pumps, is equivalent to the probability that 4 or fewer pumps will function) is obtained as:

P(X ≤ 4 | p = 0.84) = 0.027

If the probability of failure increases to 0.2, so that the probability of functioning decreases to 0.8, we obtain that:

P(X ≤ 4 | p = 0.8) = 0.056

and the percentage increase, Δ%, is obtained as

Δ% = (0.056 − 0.027)/0.027 = 107.4%

a surprisingly large percentage increase, given that the increase in the probability of failure (from 0.16 to 0.2) appears to be relatively innocuous. (This portion of the problem could also have been approached from the perspective of pump failures, say with Y as the total number of failed pumps; the results will be the same.)
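A sketch of the three calculations with scipy.stats (an assumed tool):

```python
# n = 8 pumps; p is the probability that a pump functions (1 - P(fail)).
from scipy.stats import binom

print(1 - binom.cdf(3, 8, 0.84))       # P(X >= 4) ~ 0.996
p_now = binom.cdf(4, 8, 0.84)          # ~ 0.027
p_new = binom.cdf(4, 8, 0.80)          # ~ 0.056
print((p_new - p_now) / p_now)         # relative increase, roughly 110%
```

(The unrounded intermediate values give a relative increase of about 111%; the 107.4% quoted in the text comes from carrying the rounded values 0.056 and 0.027.)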

8.28 (i) The mean number of accidents is obtained from

x̄ = Σ_i xi fi / Σ_i fi = 0.465

where xi is the number of accidents, and fi the corresponding observed frequency with which xi accidents occurred. The variance is obtained as:

σ² = 0.691

For a true Poisson process, the mean and the variance are theoretically equal, and will be close in practice. This is clearly not the case here; in fact, the variance is much larger than the mean, implying that this is an “overdispersed” Poisson-like phenomenon.
(ii) Using λ = 0.465, the theoretical Poisson pdf is shown in the following table, and upon multiplying by 647, the total number of subjects, we obtain the indicated predicted frequency. The table also includes the observed frequency for comparison.


x, Number of Accidents   Poisson pdf f(x | λ = 0.465)   Predicted Frequency   Observed Frequency
0                        0.628                          406.403               447
1                        0.292                          188.978               132
2                        0.068                          43.937                42
3                        0.011                          6.810                 21
4                        0.001                          0.792                 3
5+                       0.000114                       0.074                 2

From here, other than the values obtained for x = 2, the disagreement between the corresponding predicted and observed frequencies is quite notable.
(iii) The relationships between the mean and variance of a negative binomial random variable and the parameters k and p are:

μ = kq/p
σ² = kq/p²

from where we obtain the inverse relationships:

p = μ/σ²;  k = μp/q

In this specific case, with μ = 0.465 and σ² = 0.691, we obtain the following estimates for the corresponding negative binomial pdf parameters:

p = 0.673; k = 0.96 (rounded up to k = 1)

The resulting pdf for the negative binomial random variable with these parameters, and the corresponding predicted frequency, are shown in the table below along with the observed frequency for comparison.

x, Number of Accidents   Neg Binomial(1, 0.673) pdf, f(x)   Predicted Frequency   Observed Frequency
0                        0.673                              435.431               447
1                        0.220                              142.386               132
2                        0.072                              46.560                42
3                        0.024                              15.225                21
4                        0.008                              4.979                 3
5+                       0.003                              1.628                 2

The agreement between the frequency predicted by the negative binomial model and the observed frequency is seen to be quite good, much better than the Poisson model prediction obtained earlier. Fig 8.9 shows a plot of the observed frequency (solid line, circles) compared with the Poisson model prediction (short dashed line, diamonds) and the negative binomial model prediction (long dashed line, squares). This figure provides visual evidence that the negative binomial model prediction agrees much better with the observation than does the Poisson model prediction.

Figure 8.9: Frequencies of occurrence of accidents: Greenwood and Yule data (solid line; circles), negative binomial model prediction with k = 1; p = 0.673 (long dashed line; squares), and Poisson model prediction with λ = 0.465 (short dashed line; diamonds).

(iv) The objective measure of the “goodness-of-fit,” C², defined in Eq (8.100), is computed as follows for each model. First, for the Poisson model:

C²_P = (447 − 406.403)²/406.403 + (132 − 188.978)²/188.978 + (42 − 43.937)²/43.937 + (21 − 6.810)²/6.810 + (5 − 0.866)²/0.866 = 70.619

and, for the negative binomial model:

C²_NB = (447 − 435.431)²/435.431 + (132 − 142.386)²/142.386 + (42 − 46.560)²/46.560 + (21 − 15.225)²/15.225 + (5 − 6.604)²/6.604 = 4.092

We may now observe that the value of the “goodness-of-fit” quantity is much smaller for the negative binomial model than for the Poisson model. From the definition of this quantity, the better the fit, the smaller the value of C². Hence we conclude that the negative binomial model provides a much better fit to the Greenwood and Yule data than does the Poisson model.
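The two C² values can be recomputed directly from the tabulated predicted frequencies; a sketch (the last cell pools x ≥ 4, using the pooled expected counts from the text):

```python
# C^2 = sum (observed - expected)^2 / expected, for each candidate model.
obs      = [447, 132, 42, 21, 5]                       # last cell is x >= 4
pred_poi = [406.403, 188.978, 43.937, 6.810, 0.866]    # Poisson(0.465)
pred_nb  = [435.431, 142.386, 46.560, 15.225, 6.604]   # NBi(1, 0.673)

def c2(o, e):
    return sum((oi - ei) ** 2 / ei for oi, ei in zip(o, e))

print(c2(obs, pred_poi))   # ~ 70.6
print(c2(obs, pred_nb))    # ~ 4.1
```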

8.29 (i) Let X1 represent the number of children with sickle-cell anemia (SCA), and X2, the number of children that are carriers of the disease. If n is the total number of children born to the couple, then the joint probability distribution of the bivariate random variable in question, (X1, X2), is the following trinomial distribution:

f(x1, x2) = [n! / (x1! x2! (n − x1 − x2)!)] p1^x1 p2^x2 (1 − p1 − p2)^(n−x1−x2)

In this specific case, n = 4, p1 = 0.25, and p2 = 0.5.

(ii) The required probabilities are obtained by substituting appropriate values for x1 and x2 into the joint pdf above; the results are as follows:
(a) x1 = 0; x2 = 2; f(x1, x2) = 0.094
(b) x1 = 1; x2 = 2; f(x1, x2) = 0.188
(c) x1 = 2; x2 = 2; f(x1, x2) = 0.094

(iii) In this case, the pdf required for computing the probabilities of interest is the conditional probability, f(x1 | x2 = 1). In general, for the trinomial random variable, the conditional pdf, f(x1 | x2), is:

f(x1 | x2) = f(x1, x2) / f2(x2) = [(n − x2)! / (x1! (n − x1 − x2)!)] × [p1^x1 (1 − p1 − p2)^(n−x1−x2) / (1 − p2)^(n−x2)]

(see Problem 8.8); and in this specific case, with x2 = 1, we obtain:

f(x1 | x2 = 1) = [(n − 1)! / (x1! (n − x1 − 1)!)] × [p1^x1 (1 − p1 − p2)^(n−x1−1) / (1 − p2)^(n−1)]

which, with n = 4, simplifies to:

f(x1 | x2 = 1) = [3! / (x1! (3 − x1)!)] × [0.25^x1 (0.25)^(3−x1) / 0.5³] = 3! 0.5³ / (x1! (3 − x1)!)

The required probabilities are computed from here to yield the following results:
(a) x1 = 0; f(x1 = 0 | x2 = 1) = 0.125
(b) x1 = 1; f(x1 = 1 | x2 = 1) = 0.375
(c) x1 = 2; f(x1 = 2 | x2 = 1) = 0.375
(d) x1 = 3; f(x1 = 3 | x2 = 1) = 0.125
Note that these four probabilities sum to 1, since they constitute the entire collection of all possible outcomes once x2 is fixed at 1.
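Note that with p1 = 0.25 and p2 = 0.5, the conditional pdf above is exactly a Bi(n − x2, p1/(1 − p2)) = Bi(3, 0.5) distribution (since p1 + p3 = 1 − p2); a sketch of the check:

```python
# f(x1 | x2 = 1) for the trinomial with n = 4 reduces to Bi(3, 0.5).
from scipy.stats import binom

for x1 in range(4):
    print(x1, binom.pmf(x1, 3, 0.25 / (1 - 0.5)))   # 0.125, 0.375, 0.375, 0.125
```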

8.30 (i) With X1 as the number of children with the disease, what is required is E(X1). Because the marginal distribution of X1 is binomial, Bi(n, p), we know in this case that E(X1) = np = 8 × 0.25 = 2. The implication is that the couple can expect to have 2 children with the disease, so that the annual expected cost will be the equivalent of US$4000.
(ii) Let Z represent the number of crisis episodes that this family will endure per year. This random variable is a composite variable consisting of two parts: first, to experience an episode, the family must have children with the sickle-cell anemia (SCA) disease. From Problem 8.29 above, X1, the number of children with the disease, is a random variable with a Bi(n = 8, p1 = 0.25) distribution. Secondly, any child with the disease will experience Y “crisis” episodes per year, itself another random variable with a Poisson, P(λ = 1.5), distribution. Thus, Z is a compound random variable, which, as derived in Problem 8.21 above, possesses the distribution:

f(z) = e^(−p1λ) (p1λ)^z / z!

In this specific case, with p1 = 0.25 and λ = 1.5, the appropriate pdf is:

f(z) = e^(−0.375) (0.375)^z / z!

from where we obtain

f(3) = 0.006

Thus, the probability is 0.006 that this family will endure 3 crisis episodes inone year.
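A sketch of the calculation; since Z ∼ Poisson(p1λ = 0.375), it is a one-liner with scipy.stats (an assumed tool):

```python
# Probability of exactly 3 crisis episodes in a year.
from scipy.stats import poisson

print(poisson.pmf(3, 0.25 * 1.5))   # ~ 0.006
```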

8.31 (i) The phenomenon in question is akin to the situation where one experiences X “failures” (i.e., failure to identify infected patients) before the kth “success” (i.e., successfully identifying only k out of a total of X + k infected patients), with the probability of “success” being 1/3. The appropriate probability model is therefore a negative binomial, NBi(k, p), distribution, with k = 5 and p = 1/3. The pdf is given by:

f(x) = [(x + k − 1)! / (x! (k − 1)!)] p^k (1 − p)^x

From this general expression, one may find the desired maximum, x*, analytically, or else numerically, by computing the probabilities for various values of x and then identifying the value of x for which the computed probability is largest. We consider the analytical route first. From the pdf given above, we observe that:

f(x + 1) = [(x + k)! / ((x + 1)! (k − 1)!)] p^k (1 − p)^(x+1)

so that, after some simplification,

f(x + 1)/f(x) = [(x + k)/(x + 1)] (1 − p)

At the maximum, the turning point of this discrete function, the condition f(x + 1) = f(x) must hold; i.e., in this case:

(x + k)(1 − p) = (x + 1)


which, when solved for x, yields:

x* = (kq − 1)/p;  (q = 1 − p)

Of course, if x* is an integer, f(x) will also show a maximum at (x* + 1), by virtue of the condition for the optimum whereby f(x + 1) = f(x).

For this specific problem, with k = 5 and p = 1/3, we obtain that the maximum occurs at

x* = (10/3 − 1) / (1/3) = 7

and because this is an integer, x* = 8 is also a valid maximum point.

Numerically, the probability distribution function may be computed for a negative binomial random variable with k = 5 and p = 1/3 for various values of x; the result is shown in the table below and plotted in Fig 8.10. Observe that the values at which the maximum probability occurs are confirmed indeed to be x = 7 and x = 8, as determined analytically earlier.

As such, the “most likely” number of infected but not yet symptomatic patients is 8 (choosing the larger value). The implication is that with 5 already identified, the total population of infected patients is most likely to be 13.

x     f(x)
0     0.0041
1     0.0137
2     0.0274
3     0.0427
4     0.0569
5     0.0683
6     0.0759
7     0.0795
8     0.0795
9     0.0765
10    0.0714
11    0.0649
12    0.0577
13    0.0503
14    0.0431
15    0.0364

(ii) With the random variable, X, defined as the number of patients that are infected but not yet identified, the total number of infected patients will be X + k; thus, the required probability is P(X + k > 15), which translates to P(X > 10) and is determined as:

P(X > 10) = 1 − P(X ≤ 10) = 1 − 0.596 = 0.404
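scipy's nbinom uses this same parameterization (number of failures before the k-th success), so both parts can be sketched directly (numpy/scipy assumed):

```python
# Mode of NBi(k = 5, p = 1/3) and the tail probability P(X > 10).
import numpy as np
from scipy.stats import nbinom

k, p = 5, 1 / 3
pmf = nbinom.pmf(np.arange(16), k, p)
print(round(pmf[7], 4), round(pmf[8], 4))   # equal maxima, ~ 0.0795 each
print(1 - nbinom.cdf(10, k, p))             # ~ 0.404
```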


Figure 8.10: Probability distribution function (pdf) for the negative binomial random variable with k = 5, p = 1/3; it is maximized at x = 7 and x = 8.

This shows that there is a fairly sizable probability of 0.4 that, with 5 patients already identified as infected, the town may have to declare a state of emergency.

8.32 (i) The appropriate model is the Poisson pdf, with λ = 8.5; the required probability is obtained as:

P(X = 10 | λ = 8.5) = 0.11

(ii) From the results in Exercise 8.15, we obtain:

x* = λ − 1 = 7.5

which is not an integer; hence, it is a unique maximum. The “most likely” number of failures is therefore 7.5 per year. The required probability (of having more failures in one year than this “most likely” number of failures) is determined as:

P(X ≥ 7.5) = 1 − P(X ≤ 7.5) = 1 − P(X ≤ 7) = 1 − 0.386 = 0.614

(Note: because X can only take integer values, P(X ≤ 7.5) = P(X ≤ 7).)
(iii) The required probability, P(X ≥ 13), is obtained as:

P(X ≥ 13) = 1 − P(X ≤ 12) = 1 − 0.909 = 0.091

The implication is that, if conditions are “typical,” there is a fairly small chance (just about 9%) that one would see 13 or more failures in one year. If this event then occurs, there is therefore a reasonable chance that things may no longer be “typical,” and that something more fundamental may be responsible for causing such an “unusually” large number of failures in one year.

8.33 (i) Assuming that the indicated frequencies of occurrence for each accident category are representative of event probabilities, the phenomenon in question is multinomial, with the pdf:

f(x1, x2, x3, x4) = [n! / (x1! x2! x3! x4!)] p1^x1 p2^x2 p3^x3 p4^x4

where x1 is the number of eye injuries, with p1 = 0.40 as the probability of recording a single eye injury; x2 is the number of hand injuries, with p2 = 0.22 as the probability of recording a single hand injury; x3 is the number of back injuries, with p3 = 0.20 as the probability of recording a single back injury; and finally, x4 is the number of “other” injuries, with p4 = 0.18 as the probability of recording one of the injuries collectively categorized as “other.” With n = 10 (i.e., a total of 10 recorded injuries selected at random, distributed as indicated), the required probability, P(x1 = 4; x2 = 3; x3 = 2; x4 = 1), is obtained as:

f(4, 3, 2, 1) = [10! / (4! 3! 2! 1!)] × 0.4^4 × 0.22^3 × 0.20^2 × 0.18^1 = 0.025

This is the probability that the 10 recorded injuries selected at random are distributed as noted in the problem statement. The probability is small mostly because there are so many different ways in which 10 injuries can be distributed among the 4 categories.
(ii) Since we are concerned here with eye injuries alone, regardless of the other injuries, we recall that the marginal distribution of each component variable, Xi, of a multinomial random variable is a binomial pdf; i.e., in this case, X1 ∼ Bi(n, p1). Thus the required probability is obtained from the binomial Bi(5, 0.4) pdf as

P(X < 2) = P(X ≤ 1) = 0.337

(iii) Since, once again as in (ii), n = 5, we now require a value for p1 such that:

P(X < 2) ≈ 0.9; i.e.,
P(X < 2) = P(X ≤ 1) = f(0) + f(1) = (1 − p1)^5 + 5p1(1 − p1)^4 ≈ 0.9

Using MINITAB (or any such program), we determine that for p1 = 0.11, this cumulative probability is obtained as 0.903. Thus, the target to aim for is a reduction of p1 from 0.4 to 0.11.
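A sketch of the computations in (i) and (ii) with scipy.stats (an assumed tool; note that direct evaluation of the multinomial probability gives approximately 0.025):

```python
# (i) multinomial probability of the stated injury split; (ii) marginal
# binomial probability of fewer than 2 eye injuries out of 5.
from scipy.stats import multinomial, binom

p_inj = multinomial.pmf([4, 3, 2, 1], n=10, p=[0.40, 0.22, 0.20, 0.18])
print(p_inj)                 # ~ 0.025
print(binom.cdf(1, 5, 0.4))  # ~ 0.337
```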

8.34 (i) The phenomenon of attempting 25 missions before the first accident occurs is akin to conducting x = 25 “trials” before obtaining the first “success” in a process where each trial has two mutually exclusive (i.e., binary) outcomes. This, of course, is the phenomenon underlying the geometric random variable, with the pdf:

f(x) = pq^(x−1)

In this case, x is the number of missions undertaken prior to the occurrence of the first accident, and p is the probability of the indicated catastrophic accident occurring. Thus, for x = 25 and p = 1/35 = 0.02857, we obtain the required probability as:

f(25) = 0.014

(ii) In this case, the required probability is P(X ≤ 25) for the geometric random variable, X; the result is:

P(X ≤ 25) = 0.516

with the very illuminating implication that if a catastrophic event such as that experienced on Jan 28, 1986 were to occur, there is more than a 50% chance that it would occur on or before the 25th mission attempt.
(iii) If p = 1/60,000, the probability P(X ≤ 25) becomes:

P(X ≤ 25) = 0.0004167

(we deliberately retained so many significant figures to make a point). The indication is that such an event is extremely unlikely.

In light of the historical fact that this catastrophic event did in fact occur on the 25th mission attempt, it appears as if the independent NASA study grossly underestimated the value of p; the Air Force estimate, on the other hand, definitely appears to be much more representative of the actual value.
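A sketch of the three probabilities with scipy's geom (an assumed tool, with support starting at x = 1 and pmf p q^(x−1), matching the model above):

```python
# Geometric-model probabilities for the shuttle-mission problem.
from scipy.stats import geom

print(geom.pmf(25, 1 / 35))        # ~ 0.014
print(geom.cdf(25, 1 / 35))        # ~ 0.516
print(geom.cdf(25, 1 / 60000))     # ~ 0.0004
```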

8.35 (i) The required probability is obtained from the Poisson distribution, P(λ = 0.75), as:

P(X ≤ 2 | λ = 0.75) = 0.959

Note: this implies that the probability that there will be 3 or more claims per year on the car in question is 1 − 0.959 = 0.041.
(ii) The problem requires determining xu such that:

P(X ≥ xu) ≤ 0.05

In terms of cumulative probabilities, this is equivalent to requiring:

1 − P(X < xu) ≤ 0.05; or P(X < xu) ≥ 0.95

The following cumulative probabilities for the Poisson random variable with λ = 0.75 can be obtained from computer packages such as MINITAB:

P(X < 4) = P(X ≤ 3) = 0.993
P(X < 3) = P(X ≤ 2) = 0.959
P(X < 2) = P(X ≤ 1) = 0.827

Observe that the smallest value of xu for which P(X < xu) exceeds 0.95 is 3, so that the desired value of xu is 3. Hence, any car with claims totalling 3 or more in one year will be declared to be of “poor initial quality.”


8.36 By definition, the average is obtained as:

x̄ = Σ_i xi Φ(xi) / Σ_i Φ(xi) = 3306/501 = 6.599

From the result in Exercise 8.13, we know that for this random variable:

μ = E(X) = αp/(1 − p); where α = −1/ln(1 − p)

Thus, given x̄ = 6.599 as an estimate of μ, we must now solve the following nonlinear equation numerically for p:

6.599 = −p / [(1 − p) ln(1 − p)]

The result is:

p = 0.953

Upon introducing this value into the logarithmic series pdf, f(x) = αp^x/x, the resulting predicted frequency, obtained as:

Φ(x) = 501 f(x)

is shown in the following table, along with the observed frequency; both frequencies are plotted in Fig 8.11. From this table and the plots, the model appears sufficiently adequate.

Figure 8.11: Empirical frequency distribution of the Malaya butterfly data (solid line, circles) versus theoretical logarithmic series model, with p = 0.953 (dashed line, squares).


No of species   Observed Frequency   Predicted Frequency
x               Φ(x)                 Φ(x)
1               118                  156.152
2               74                   74.407
3               44                   47.273
4               24                   33.788
5               29                   25.760
6               22                   20.458
7               20                   16.711
8               19                   13.935
9               20                   11.805
10              15                   10.125
11              12                   8.772
12              14                   7.663
13              6                    6.741
14              12                   5.965
15              6                    5.306
16              9                    4.740
17              9                    4.252
18              6                    3.827
19              10                   3.455
20              10                   3.128
21              11                   2.839
22              5                    2.583
23              3                    2.354
24              3                    2.150
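The nonlinear equation for p can be solved with a standard bracketing root finder; a sketch using scipy's brentq (an assumed tool):

```python
# Solve  -p / ((1 - p) ln(1 - p)) = 6.599  for p in (0, 1).
import numpy as np
from scipy.optimize import brentq

xbar = 3306 / 501                                    # = 6.599
g = lambda p: -p / ((1 - p) * np.log(1 - p)) - xbar  # mean equation residual
p_hat = brentq(g, 0.5, 0.999)                        # bracket contains the root
print(round(p_hat, 3))   # 0.953
```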

