
An Introduction to Stochastic Modeling Fourth Edition Instructor Solutions Manual

Mark A. Pinsky Department of Mathematics Northwestern University Evanston, Illinois

Samuel Karlin Department of Mathematics Stanford University Stanford, California

AMSTERDAM • BOSTON • HEIDELBERG • LONDON NEW YORK • OXFORD • PARIS • SAN DIEGO SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO Academic Press is an imprint of Elsevier



Academic Press is an imprint of Elsevier
225 Wyman Street, Waltham, MA 02451, USA
The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK

© 2011 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

ISBN: 978-0-12-385232-8

For information on all Academic Press publications, visit our website at www.elsevierdirect.com

Typeset by: diacriTech, India


Contents

Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11



Chapter 1

2.1 $E[\mathbf{1}\{A_1\}] = \Pr\{A_1\} = \frac{1}{13}$. Similarly, $E[\mathbf{1}\{A_k\}] = \Pr\{A_k\} = \frac{1}{13}$ for $k = 1, \ldots, 13$. Then, because the expected value of a sum is always the sum of the expected values, $E[N] = E[\mathbf{1}\{A_1\}] + \cdots + E[\mathbf{1}\{A_{13}\}] = \frac{1}{13} + \cdots + \frac{1}{13} = 1$.
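As a quick numerical cross-check (not part of the original solution), the following Python sketch simulates the matching setup this computation suggests: a 13-card deck is shuffled and $N$ counts the cards that land in their own positions. The deck size and trial count are illustrative assumptions.

```python
import random

def count_matches(n_cards=13):
    # Shuffle positions 0..12 and count fixed points (card k in slot k).
    deck = list(range(n_cards))
    random.shuffle(deck)
    return sum(1 for slot, card in enumerate(deck) if card == slot)

trials = 100_000
estimate = sum(count_matches() for _ in range(trials)) / trials
print(f"estimated E[N] = {estimate:.3f}  (solution: 1)")
```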

2.2 Let $X$ be the first number observed and let $Y$ be the second. We use the identity $\left(\sum x_i\right)^2 = \sum x_i^2 + \sum_{i \neq j} x_i x_j$ several times.
$$E[X] = E[Y] = \frac{1}{N}\sum x_i;$$
$$\operatorname{Var}[X] = \operatorname{Var}[Y] = \frac{1}{N}\sum x_i^2 - \left(\frac{1}{N}\sum x_i\right)^2 = \frac{(N-1)\sum x_i^2 - \sum_{i \neq j} x_i x_j}{N^2};$$
$$E[XY] = \frac{\sum_{i \neq j} x_i x_j}{N(N-1)};$$
$$\operatorname{Cov}[X, Y] = E[XY] - E[X]E[Y] = \frac{\sum_{i \neq j} x_i x_j - (N-1)\sum x_i^2}{N^2(N-1)};$$
$$\rho_{X,Y} = \frac{\operatorname{Cov}[X, Y]}{\sigma_X \sigma_Y} = -\frac{1}{N-1}.$$
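A small simulation sketch of this result, assuming the pair is drawn uniformly without replacement from a hypothetical population of $N = 10$ values; the $-1/(N-1)$ correlation should appear regardless of the particular $x_i$.

```python
import random

xs = list(range(1, 11))  # hypothetical population; any distinct values work
N = len(xs)
trials = 200_000
sx = sxx = sxy = 0.0
for _ in range(trials):
    x, y = random.sample(xs, 2)   # ordered pair without replacement
    sx += x
    sxx += x * x
    sxy += x * y
ex, exx, exy = sx / trials, sxx / trials, sxy / trials
var = exx - ex * ex               # Var[X] = Var[Y] by symmetry
rho = (exy - ex * ex) / var       # E[Y] = E[X] by symmetry
print(f"simulated rho = {rho:.4f}, -1/(N-1) = {-1 / (N - 1):.4f}")
```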

2.3 Write $S_r = \xi_1 + \cdots + \xi_r$, where $\xi_k$ is the number of additional samples needed to observe $k$ distinct elements, assuming that $k-1$ distinct elements have already been observed. Then, defining $p_k = \Pr\{\xi_k = 1\} = 1 - \frac{k-1}{N}$, we have $\Pr\{\xi_k = n\} = p_k(1-p_k)^{n-1}$ for $n = 1, 2, \ldots$ and $E[\xi_k] = \frac{1}{p_k}$. Finally, $E[S_r] = E[\xi_1] + \cdots + E[\xi_r] = \frac{1}{p_1} + \cdots + \frac{1}{p_r}$ will verify the given formula.
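The formula is easy to check numerically. A sketch, with hypothetical values $N = 52$ and $r = 26$:

```python
import random

N, r = 52, 26  # hypothetical population size and target number of distinct items

# Closed form from the solution: E[S_r] = sum over k of 1/p_k.
exact = sum(1.0 / (1.0 - (k - 1) / N) for k in range(1, r + 1))

def draws_until_r_distinct(N, r):
    seen, draws = set(), 0
    while len(seen) < r:
        seen.add(random.randrange(N))  # sample with replacement
        draws += 1
    return draws

trials = 20_000
sim = sum(draws_until_r_distinct(N, r) for _ in range(trials)) / trials
print(f"formula: {exact:.3f}, simulation: {sim:.3f}")
```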

2.4 Using an obvious notation, the event $\{N = n\}$ is equivalent to either $\overbrace{HTH\cdots HT}^{n-1}T$ or $\overbrace{THT\cdots TH}^{n-1}H$, so
$$\Pr\{N = n\} = 2 \times \left(\tfrac{1}{2}\right)^{n-1} \times \tfrac{1}{2} = \left(\tfrac{1}{2}\right)^{n-1} \quad \text{for } n = 2, 3, \ldots;$$
$$\Pr\{N \text{ is even}\} = \sum_{n=2,4,\ldots}\left(\tfrac{1}{2}\right)^{n-1} = \tfrac{2}{3}, \qquad \Pr\{N \le 6\} = \sum_{n=2}^{6}\left(\tfrac{1}{2}\right)^{n-1} = \tfrac{31}{32},$$
$$\Pr\{N \text{ is even and } N \le 6\} = \sum_{m=2,4,6}\left(\tfrac{1}{2}\right)^{m-1} = \tfrac{21}{32}.$$

2.5 Using an obvious notation, the probability that $A$ wins on the $(2n+1)$st trial is
$$\Pr\{\overbrace{A^c B^c \cdots A^c B^c}^{2n\ \text{losses}}A\} = [(1-p)(1-q)]^n p, \quad n = 0, 1, \ldots;$$
$$\Pr\{A \text{ wins}\} = \sum_{n=0}^{\infty}[(1-p)(1-q)]^n p = \frac{p}{1-(1-p)(1-q)}.$$
$\Pr\{A \text{ wins on the } (2n+1)\text{st play} \mid A \text{ wins}\} = (1-\pi)\pi^n$, where $\pi = (1-p)(1-q)$, so
$$E[\#\text{ trials} \mid A \text{ wins}] = \sum_{n=0}^{\infty}(2n+1)(1-\pi)\pi^n = 1 + \frac{2\pi}{1-\pi} = \frac{2}{1-(1-p)(1-q)} - 1.$$

2.6 Let $N$ be the number of tosses and let $S$ be the sum. Then
$$\Pr\{N = n, S = k\} = \left(\tfrac{1}{6}\right)^{n-1}\left(\tfrac{5}{6}\right)p_k,$$
where $p_3 = p_{11} = p_4 = p_{10} = \frac{1}{15}$; $p_5 = p_9 = p_6 = p_8 = \frac{2}{15}$; and $p_7 = \frac{3}{15}$. Finally, $\Pr\{S = k\} = \sum_{n=1}^{\infty}\Pr\{N = n, S = k\} = p_k$. (It is not a correct argument to simply say $\Pr\{S = k\} = \Pr\{\text{Sum of 2 dice} = k \mid \text{Dice differ}\}$. Compare with Exercise II, 2.1.)

2.7 We are given that (*) $\Pr\{U > u, W > w\} = [1 - F_U(u)][1 - F_W(w)]$ for all $u, w$. According to the definition of independence, we wish to show that $\Pr\{U \le u, W \le w\} = F_U(u)F_W(w)$ for all $u, w$. Taking complements and using the addition law,
$$\Pr\{U \le u, W \le w\} = 1 - \Pr\{U > u \text{ or } W > w\} = 1 - [\Pr\{U > u\} + \Pr\{W > w\} - \Pr\{U > u, W > w\}]$$
$$= 1 - [(1 - F_U(u)) + (1 - F_W(w)) - (1 - F_U(u))(1 - F_W(w))] = F_U(u)F_W(w)$$
after simplification.
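Returning to 2.5: the winning probability $p/(1-(1-p)(1-q))$ is simple to verify by simulation. A sketch with hypothetical values of $p$ and $q$:

```python
import random

p, q = 0.3, 0.4  # hypothetical success probabilities for A and B

def a_wins():
    # A and B alternate trials, A going first; the first success wins.
    while True:
        if random.random() < p:
            return 1
        if random.random() < q:
            return 0

trials = 200_000
sim = sum(a_wins() for _ in range(trials)) / trials
formula = p / (1 - (1 - p) * (1 - q))
print(f"simulated Pr{{A wins}} = {sim:.4f}, formula = {formula:.4f}")
```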



2.8 (a) $E[Y] = E[a + bX] = \int (a + bx)\,dF_X(x) = a\int dF_X(x) + b\int x\,dF_X(x) = a + bE[X] = a + b\mu$. In words, (a) implies that the expected value of a constant times a random variable is the constant times the expected value of the random variable, so $E\left[b^2(X-\mu)^2\right] = b^2 E\left[(X-\mu)^2\right]$.
(b) $\operatorname{Var}[Y] = E\left[(Y - E[Y])^2\right] = E\left[(a + bX - a - b\mu)^2\right] = E\left[b^2(X-\mu)^2\right] = b^2 E\left[(X-\mu)^2\right] = b^2\sigma^2$.

2.9 Use the usual sums-of-numbers formulas (see I, 6 if necessary) to establish
$$\sum_{k=1}^{n} k(n-k) = \frac{1}{6}n(n+1)(n-1) \quad \text{and} \quad \sum_{k=1}^{n} k^2(n-k) = n\sum k^2 - \sum k^3 = \frac{1}{12}n^2(n+1)(n-1),$$
so
$$E[X] = \frac{2}{n(n-1)}\sum k(n-k) = \frac{1}{3}(n+1), \qquad E\left[X^2\right] = \frac{2}{n(n-1)}\sum k^2(n-k) = \frac{1}{6}n(n+1),$$
and
$$\operatorname{Var}[X] = E\left[X^2\right] - (E[X])^2 = \frac{1}{18}(n+1)(n-2).$$
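The closed forms in 2.9 can be sanity-checked numerically. The sums in the solution correspond to the pmf $\Pr\{X = k\} = 2(n-k)/(n(n-1))$ for $k = 1, \ldots, n-1$ (e.g., the gap between two draws without replacement from $\{1, \ldots, n\}$); that identification is an assumption for this sketch.

```python
from math import isclose

n = 12  # hypothetical
pmf = {k: 2 * (n - k) / (n * (n - 1)) for k in range(1, n)}
assert isclose(sum(pmf.values()), 1.0)
ex = sum(k * p for k, p in pmf.items())
exx = sum(k * k * p for k, p in pmf.items())
assert isclose(ex, (n + 1) / 3)
assert isclose(exx - ex * ex, (n + 1) * (n - 2) / 18)
print("E[X] and Var[X] agree with the closed forms in 2.9")
```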

2.10 Observe, for example, $\Pr\{Z = 4\} = \Pr\{X = 3, Y = 1\} = \frac{1}{2}\cdot\frac{1}{6}$, using independence. Continuing in this manner,

z           1     2    3    4     5    6
Pr{Z = z}   1/12  1/6  1/4  1/12  1/6  1/4

2.11 Observe, for example, $\Pr\{W = 2\} = \Pr\{U = 0, V = 2\} + \Pr\{U = 1, V = 1\} = \frac{1}{6} + \frac{1}{6} = \frac{1}{3}$. Continuing in this manner, we arrive at

w           1    2    3    4
Pr{W = w}   1/6  1/3  1/3  1/6

2.12 Changing any of the random variables by adding or subtracting a constant will not affect the covariance. Therefore, by replacing $U$ with $U - E[U]$ if necessary, etc., we may assume, without loss of generality, that all of the means are zero. Because the means are zero,
$$\operatorname{Cov}[X, Y] = E[XY] - E[X]E[Y] = E[XY] = E\left[UV - UW + VW - W^2\right] = -E\left[W^2\right] = -\sigma^2$$
(since $E[UV] = E[U]E[V] = 0$, etc.).

2.13
$$\Pr\{v < V, U \le u\} = \Pr\{v < X \le u, v < Y \le u\} = \Pr\{v < X \le u\}\Pr\{v < Y \le u\} \quad \text{(by independence)}$$
$$= (u - v)^2 = \iint_{\{(u',v')\,:\,v < v' \le u' \le u\}} f_{U,V}(u', v')\,du'\,dv' = \int_v^u \left\{\int_{v'}^u f_{U,V}(u', v')\,du'\right\}dv'.$$
The integrals are removed from the last expression by successive differentiation, first w.r.t. $v$ (changing sign because $v$ is a lower limit), then w.r.t. $u$. This tells us
$$f_{U,V}(u, v) = -\frac{\partial}{\partial u}\frac{\partial}{\partial v}(u - v)^2 = 2 \quad \text{for } 0 < v \le u \le 1.$$
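The joint density $f_{U,V} = 2$ is easy to probe by simulation, assuming (as the $(u-v)^2$ expression suggests) that $X$ and $Y$ are independent Uniform(0, 1) with $U = \max(X, Y)$ and $V = \min(X, Y)$:

```python
import random

u, v = 0.7, 0.2  # arbitrary test point with 0 < v < u < 1
trials = 200_000
hits = 0
for _ in range(trials):
    x, y = random.random(), random.random()
    if v < min(x, y) and max(x, y) <= u:
        hits += 1
print(f"simulated Pr{{v < V, U <= u}} = {hits / trials:.4f}, "
      f"(u - v)^2 = {(u - v) ** 2:.4f}")
```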



3.1 $Z$ has a discrete uniform distribution on $0, 1, \ldots, 9$.

3.2 In maximizing a continuous function, we often set the derivative equal to zero. In maximizing a function of a discrete variable, we equate the ratio of successive terms to one. More precisely, $k^*$ is the smallest $k$ for which $\frac{p(k+1)}{p(k)} < 1$, or the smallest $k$ for which $\frac{n-k}{k+1}\cdot\frac{p}{1-p} < 1$. Equivalently, (a) $k^* = [(n+1)p]$, where $[x]$ denotes the greatest integer $\le x$. For (b), let $n \to \infty$, $p \to 0$, with $\lambda = np$; then $k^* = [\lambda]$.

3.3 Recall that $e^{\lambda} = 1 + \lambda + \frac{\lambda^2}{2!} + \frac{\lambda^3}{3!} + \cdots$ and $e^{-\lambda} = 1 - \lambda + \frac{\lambda^2}{2!} - \frac{\lambda^3}{3!} + \cdots$, so that $\sinh\lambda \equiv \frac{1}{2}\left(e^{\lambda} - e^{-\lambda}\right) = \lambda + \frac{\lambda^3}{3!} + \frac{\lambda^5}{5!} + \cdots$. Then
$$\Pr\{X \text{ is odd}\} = \sum_{k=1,3,5,\ldots}\frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda}\sinh(\lambda) = \frac{1}{2}\left(1 - e^{-2\lambda}\right).$$
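The mode formula in 3.2 can be verified by brute force, assuming a binomial pmf (computed here with math.comb from the standard library); ties when $(n+1)p$ is an integer are avoided in these hypothetical test cases:

```python
from math import comb, floor

def binom_mode(n, p):
    # Index of the largest binomial probability.
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    return max(range(n + 1), key=pmf.__getitem__)

for n, p in [(10, 0.3), (25, 0.37), (40, 0.12)]:
    k_star = floor((n + 1) * p)
    print(f"n={n}, p={p}: argmax pmf = {binom_mode(n, p)}, [(n+1)p] = {k_star}")
```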

3.4
$$E[V] = \sum_{k=0}^{\infty}\frac{1}{k+1}\cdot\frac{\lambda^k e^{-\lambda}}{k!} = \frac{e^{-\lambda}}{\lambda}\sum_{k=0}^{\infty}\frac{\lambda^{k+1}}{(k+1)!} = \frac{1}{\lambda}e^{-\lambda}\left(e^{\lambda} - 1\right) = \frac{1}{\lambda}\left(1 - e^{-\lambda}\right).$$

3.5
$$E[XY] = E[X(N - X)] = NE[X] - E\left[X^2\right] = N^2p - \left(Np(1-p) + N^2p^2\right) = N^2p(1-p) - Np(1-p);$$
$$\operatorname{Cov}[X, Y] = E[XY] - E[X]E[Y] = N^2p(1-p) - Np(1-p) - N^2p(1-p) = -Np(1-p).$$
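Reading 3.4 as $E[1/(1+X)]$ for Poisson $X$ (the natural interpretation of the series above), the identity can be checked with a truncated sum; $\lambda$ below is an arbitrary test value:

```python
from math import exp, factorial

lam = 2.5  # arbitrary test value
series = sum(lam**k * exp(-lam) / (factorial(k) * (k + 1)) for k in range(80))
closed = (1 - exp(-lam)) / lam
print(f"series = {series:.10f}, closed form = {closed:.10f}")
```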

3.6 Your intuition should suggest the correct answers: (a) $X_1$ is binomially distributed with parameters $M$ and $\pi_1$; (b) $N$ is binomial with parameters $M$ and $\pi_1 + \pi_2$; and (c) $X_1$, given $N = n$, is conditionally binomial with parameters $n$ and $p = \pi_1/(\pi_1 + \pi_2)$. To derive these correct answers formally, begin with
$$\Pr\{X_1 = i, X_2 = j, X_3 = k\} = \frac{M!}{i!\,j!\,k!}\pi_1^i\pi_2^j\pi_3^k; \quad i + j + k = M.$$
Since $k = M - (i + j)$,
$$\Pr\{X_1 = i, X_2 = j\} = \frac{M!}{i!\,j!\,(M-i-j)!}\pi_1^i\pi_2^j\pi_3^{M-i-j}; \quad 0 \le i + j \le M.$$
(a)
$$\Pr\{X_1 = i\} = \sum_{j=0}^{M-i}\Pr\{X_1 = i, X_2 = j\} = \frac{M!}{i!\,(M-i)!}\pi_1^i\sum_{j=0}^{M-i}\frac{(M-i)!}{j!\,(M-i-j)!}\pi_2^j\pi_3^{M-i-j} = \binom{M}{i}\pi_1^i(\pi_2 + \pi_3)^{M-i}, \quad i = 0, 1, \ldots, M.$$
(b) Observe that $N = n$ if and only if $X_3 = M - n$. Apply the result of (a) to $X_3$:
$$\Pr\{N = n\} = \Pr\{X_3 = M - n\} = \frac{M!}{n!\,(M-n)!}(\pi_1 + \pi_2)^n\pi_3^{M-n}.$$
(c)
$$\Pr\{X_1 = k \mid N = n\} = \frac{\Pr\{X_1 = k, X_2 = n-k\}}{\Pr\{N = n\}} = \frac{\frac{M!}{k!\,(M-n)!\,(n-k)!}\pi_1^k\pi_2^{n-k}\pi_3^{M-n}}{\frac{M!}{n!\,(M-n)!}(\pi_1 + \pi_2)^n\pi_3^{M-n}} = \frac{n!}{k!\,(n-k)!}\left(\frac{\pi_1}{\pi_1 + \pi_2}\right)^k\left(\frac{\pi_2}{\pi_1 + \pi_2}\right)^{n-k}, \quad k = 0, 1, \ldots, n.$$
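The marginalization in part (a) can be confirmed numerically for small $M$; a sketch with hypothetical cell probabilities:

```python
from math import comb, factorial

M = 6
pi1, pi2, pi3 = 0.2, 0.3, 0.5  # hypothetical; must sum to 1

def trinomial_pmf(i, j, k):
    return factorial(M) / (factorial(i) * factorial(j) * factorial(k)) \
        * pi1**i * pi2**j * pi3**k

for i in range(M + 1):
    marginal = sum(trinomial_pmf(i, j, M - i - j) for j in range(M - i + 1))
    binomial = comb(M, i) * pi1**i * (pi2 + pi3)**(M - i)
    assert abs(marginal - binomial) < 1e-12
print("Pr{X1 = i} matches Binomial(M, pi1) for every i")
```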


4

Instructor Solutions Manual

3.7
$$\Pr\{Z = n\} = \sum_{k=0}^{n}\Pr\{X = k\}\Pr\{Y = n-k\} = \sum_{k=0}^{n}\frac{\mu^k e^{-\mu}}{k!}\cdot\frac{\nu^{n-k}e^{-\nu}}{(n-k)!} = \frac{e^{-(\mu+\nu)}}{n!}\sum_{k=0}^{n}\frac{n!}{k!\,(n-k)!}\mu^k\nu^{n-k} = \frac{e^{-(\mu+\nu)}(\mu+\nu)^n}{n!}$$
(using the binomial formula). $Z$ is Poisson distributed, parameter $\mu + \nu$.

3.8 (a) $X$ is the sum of $N$ independent Bernoulli random variables, each with parameter $p$, and $Y$ is the sum of $M$ independent Bernoulli random variables, each with the same parameter $p$. $Z$ is the sum of $M + N$ independent Bernoulli random variables, each with parameter $p$.
(b) By considering the ways in which a committee of $n$ people may be formed from a group comprised of $M$ men and $N$ women, establish the identity
$$\binom{M+N}{n} = \sum_{k=0}^{n}\binom{N}{k}\binom{M}{n-k}.$$
Then
$$\Pr\{Z = n\} = \sum_{k=0}^{n}\Pr\{X = k\}\Pr\{Y = n-k\} = \sum_{k=0}^{n}\binom{N}{k}p^k(1-p)^{N-k}\binom{M}{n-k}p^{n-k}(1-p)^{M-n+k} = \binom{M+N}{n}p^n(1-p)^{M+N-n}$$
for $n = 0, 1, \ldots, M+N$. Note: $\binom{N}{k} = 0$ for $k > N$.

3.9
$$\Pr\{X + Y = n\} = \sum_{k=0}^{n}\Pr\{X = k, Y = n-k\} = \sum_{k=0}^{n}(1-\pi)\pi^k(1-\pi)\pi^{n-k} = (1-\pi)^2\pi^n\sum_{k=0}^{n}1 = (n+1)(1-\pi)^2\pi^n \quad \text{for } n \ge 0.$$
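The convolution in 3.7 can be checked term by term for the first several values of $n$; $\mu$ and $\nu$ below are arbitrary test values:

```python
from math import exp, factorial

mu, nu = 1.3, 2.1  # arbitrary test values

def poisson_pmf(lam, n):
    return lam**n * exp(-lam) / factorial(n)

for n in range(10):
    conv = sum(poisson_pmf(mu, k) * poisson_pmf(nu, n - k) for k in range(n + 1))
    assert abs(conv - poisson_pmf(mu + nu, n)) < 1e-12
print("convolution of Poisson(mu) and Poisson(nu) equals Poisson(mu + nu)")
```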

3.10

k    Binomial n = 10, p = .1    Binomial n = 100, p = .01    Poisson λ = 1
0    .349                       .366                         .368
1    .387                       .370                         .368
2    .194                       .185                         .184

3.11
$$\Pr\{U = u, W = 0\} = \Pr\{X = u, Y = u\} = (1-\pi)^2\pi^{2u}, \quad u \ge 0;$$
$$\Pr\{U = u, W = w\} = \Pr\{X = u, Y = u+w\} + \Pr\{Y = u, X = u+w\} = 2(1-\pi)^2\pi^{2u+w} \quad \text{for } w > 0;$$
$$\Pr\{U = u\} = \sum_{w=0}^{\infty}\Pr\{U = u, W = w\} = \pi^{2u}\left(1-\pi^2\right);$$
$$\Pr\{W = 0\} = \sum_{u=0}^{\infty}\Pr\{U = u, W = 0\} = (1-\pi)^2\Big/\left(1-\pi^2\right);$$
$$\Pr\{W = w\} = \left[2(1-\pi)^2\Big/\left(1-\pi^2\right)\right]\pi^w \quad \text{for } w > 0;$$
and $\Pr\{U = u, W = w\} = \Pr\{U = u\}\Pr\{W = w\}$ for all $u, w$.
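The table in 3.10 can be regenerated directly:

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print("k   Binom(10,.1)  Binom(100,.01)  Poisson(1)")
for k in range(3):
    print(f"{k}   {binom_pmf(10, 0.1, k):.3f}         "
          f"{binom_pmf(100, 0.01, k):.3f}           {exp(-1) / factorial(k):.3f}")
```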



3.12 Let $X$ be the number of calls to the switchboard in a minute. Then
$$\Pr\{X \ge 7\} = 1 - \sum_{k=0}^{6}\frac{4^k e^{-4}}{k!} \approx .111.$$
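The numerical value in 3.12:

```python
from math import exp, factorial

tail = 1 - sum(4**k * exp(-4) / factorial(k) for k in range(7))
print(f"Pr{{X >= 7}} = {tail:.3f}")  # prints 0.111
```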

3.13 Assume that inspected items are independently defective or good. Let $X = \#$ of defects in the sample. Then
$$\Pr\{X = 0\} = (.95)^{10} = .599, \quad \Pr\{X = 1\} = 10(.95)^9(.05) = .315, \quad \Pr\{X \ge 2\} = 1 - (.599 + .315) = .086.$$

3.14 (a) $E[Z] = \frac{1-p}{p} = 9$, $\operatorname{Var}[Z] = \frac{1-p}{p^2} = 90$.
(b) $\Pr\{Z > 10\} = (.9)^{10} \approx .349$.

3.15 $\Pr\{X \le 2\} = \left(1 + 2 + \frac{2^2}{2}\right)e^{-2} = 5e^{-2} \approx .677$.

3.16 (a) $p_0 = 1 - b\sum_{k=1}^{\infty}(1-p)^k = 1 - b\,\frac{1-p}{p}$.
(b) When $b = p$, then $p_k$ is given by (3.4). When $b = \frac{p}{1-p}$, then $p_k$ is given by (3.5).
(c) For $n > 0$,
$$\Pr\{N = n\} = \Pr\{X = 0, Z = n\} + \Pr\{X = 1, Z = n-1\} = (1-\alpha)p(1-p)^n + \alpha p(1-p)^{n-1} = \left[(1-\alpha)p + \frac{\alpha p}{1-p}\right](1-p)^n.$$
So $b = (1-\alpha)p + \frac{\alpha p}{1-p}$.

4.1
$$E\left[e^{\lambda Z}\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{1}{2}z^2 + \lambda z}\,dz = e^{\frac{1}{2}\lambda^2}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{1}{2}(z-\lambda)^2}\,dz = e^{\frac{1}{2}\lambda^2}.$$

4.2 (a) $\Pr\{W > \theta\} = e^{-\theta/\theta} = e^{-1} = .368\ldots$ (b) Mode $= 0$.

4.3 $X - \theta$ and $Y - \theta$ are both uniform over $\left[-\frac{1}{2}, \frac{1}{2}\right]$, independent of $\theta$, and $W = X - Y = (X - \theta) - (Y - \theta)$. Therefore the distribution of $W$ does not depend on $\theta$, and we may determine it assuming $\theta = 0$. Also, the density of $W$ is symmetric, since the densities of both $X$ and $Y$ are. For $w > 0$,
$$\Pr\{W > w\} = \Pr\{X > Y + w\} = \frac{1}{2}(1 - w)^2,$$
so $f_W(w) = 1 - w$ for $0 \le w \le 1$, and $f_W(w) = 1 - |w|$ for $-1 \le w \le +1$.

4.4 $\mu_C = .010$; $\sigma_C^2 = (.005)^2$;
$$\Pr\{C < 0\} = \Pr\left\{\frac{C - .010}{.005} < \frac{-.010}{.005}\right\} = \Pr\{Z < -2\} = .0228.$$

4.5
$$\Pr\{Z < Y\} = \int_0^{\infty}2e^{-2z}\left(\int_z^{\infty}3e^{-3y}\,dy\right)dz = \int_0^{\infty}2e^{-5z}\,dz = \frac{2}{5}.$$
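A simulation sketch for 4.5, assuming $Z$ and $Y$ are independent exponentials with rates 2 and 3 (matching the densities $2e^{-2z}$ and $3e^{-3y}$):

```python
import random

trials = 500_000
wins = sum(random.expovariate(2.0) < random.expovariate(3.0)
           for _ in range(trials))
print(f"simulated Pr{{Z < Y}} = {wins / trials:.4f}  (formula: 2/5 = 0.4)")
```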

5.1
$$\Pr\{N > k\} = \Pr\{X_1 \le \xi, \ldots, X_k \le \xi\} = [F(\xi)]^k, \quad k = 0, 1, \ldots;$$
$$\Pr\{N = k\} = \Pr\{N > k-1\} - \Pr\{N > k\} = [1 - F(\xi)]F(\xi)^{k-1}, \quad k = 1, 2, \ldots$$

5.2
$$\Pr\{Z > z\} = \Pr\{X_1 > z, \ldots, X_n > z\} = \Pr\{X_1 > z\}\cdots\Pr\{X_n > z\} = e^{-\lambda z}\cdots e^{-\lambda z} = e^{-n\lambda z}, \quad z > 0.$$
$Z$ is exponentially distributed, parameter $n\lambda$.

5.3
$$\Pr\{X > k\} = \sum_{l=k+1}^{\infty}p(1-p)^l = (1-p)^{k+1}, \quad k = 0, 1, \ldots;$$
$$E[X] = \sum_{k=0}^{\infty}\Pr\{X > k\} = \sum_{k=0}^{\infty}(1-p)^{k+1} = \frac{1-p}{p}.$$
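The tail-sum identity used in 5.3 is easy to confirm numerically for a hypothetical $p$:

```python
p = 0.25  # hypothetical
tail_sum = sum((1 - p)**(k + 1) for k in range(2000))  # Pr{X > k}, summed
print(f"sum of tails = {tail_sum:.6f}, (1 - p)/p = {(1 - p) / p:.6f}")
```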



5.4 Write $V = V^+ - V^-$, where $V^+ = \max\{V, 0\}$ and $V^- = \max\{-V, 0\}$. Then $\Pr\{V^+ > v\} = 1 - F_V(v)$ and $\Pr\{V^- > v\} = F_V(-v)$ for $v > 0$. Use (5.3) on $V^+$ and $V^-$ together with $E[V] = E[V^+] - E[V^-]$. The mean does not exist if $E[V^+] = E[V^-] = \infty$.

5.5
$$E\left[W^2\right] = \int_0^{\infty}\Pr\left\{W^2 > t\right\}dt = \int_0^{\infty}\left[1 - F_W\left(\sqrt{t}\right)\right]dt = \int_0^{\infty}2y[1 - F_W(y)]\,dy,$$
by letting $y = \sqrt{t}$.

5.6
$$\Pr\{V > t\} = \int_t^{\infty}\lambda e^{-\lambda v}\,dv = e^{-\lambda t}; \qquad E[V] = \int_0^{\infty}\Pr\{V > t\}\,dt = \frac{1}{\lambda}\int_0^{\infty}\lambda e^{-\lambda t}\,dt = \frac{1}{\lambda}.$$

5.7
$$\Pr\{V > v\} = \Pr\{X_1 > v, \ldots, X_n > v\} = \Pr\{X_1 > v\}\cdots\Pr\{X_n > v\} = e^{-\lambda_1 v}\cdots e^{-\lambda_n v} = e^{-(\lambda_1 + \cdots + \lambda_n)v}, \quad v > 0.$$
$V$ is exponentially distributed with parameter $\sum\lambda_i$.

5.8 In each stage both batteries A and B are in service, so the stage ends at the minimum of two exponential lifetimes, which is exponential with rate $2\lambda$:

Spares remaining       3       2       1       0
Mean stage duration   1/(2λ)  1/(2λ)  1/(2λ)  1/(2λ)

Expected flashlight operating duration $= \frac{1}{2\lambda} + \frac{1}{2\lambda} + \frac{1}{2\lambda} + \frac{1}{2\lambda} = \frac{2}{\lambda} = 2\times$ the expected operating duration of a single battery!
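A simulation sketch of 5.8, assuming two batteries operate simultaneously with independent Exponential($\lambda$) lifetimes, so that by memorylessness each of the four inter-failure stages is Exponential with rate $2\lambda$:

```python
import random

lam = 1.0       # arbitrary rate
trials = 100_000

def flashlight_lifetime():
    # Four stages (initial pair plus 3 spares); each stage ends when the
    # first of two fresh-by-memorylessness batteries fails: rate 2*lam.
    return sum(random.expovariate(2 * lam) for _ in range(4))

sim = sum(flashlight_lifetime() for _ in range(trials)) / trials
print(f"simulated mean = {sim:.3f}, 2/lam = {2 / lam:.3f}")
```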
