
TELETRAFFIC ENGINEERING and NETWORK PLANNING

Villy B. Iversen

DTU Course 34340


http://www.fotonik.dtu.dk

Technical University of Denmark
Building 343, DK-2800 Kgs. Lyngby
vbiv@fotonik.dtu.dk
Revised May 20, 2010


© Villy Bæk Iversen, 2010


PREFACE

This book covers the basic theory of teletraffic engineering. The mathematical background required is elementary probability theory. The purpose of the book is to enable engineers to understand ITU-T recommendations on traffic engineering, evaluate tools and methods, and keep up-to-date with new practices. The book includes the following parts:

    Introduction: Chapter 1
    Mathematical background: Chapters 2 & 3
    Telecommunication loss models: Chapters 4 – 8
    Data communication delay models: Chapters 9 – 12
    Measurement and simulation: Chapter 13

The purpose of the book is twofold: to serve both as a handbook and as a textbook. Thus the reader should, for example, be able to study the chapters on loss models without first studying the chapters on the mathematical background.

The book is based on many years of experience in teaching the subject at the Technical University of Denmark and on ITU training courses in developing countries. Supporting material, such as software, exercises, advanced material, and case studies, is available at

    http://oldwww.com.dtu.dk/education/34340

where comments and ideas will also be appreciated.

Villy Bæk Iversen
May 20, 2010


Contents
1 Introduction to Teletraffic Engineering
  1.1 Modeling of telecommunication systems
    1.1.1 System structure
    1.1.2 Operational strategy
    1.1.3 Statistical properties of traffic
    1.1.4 Models
  1.2 Conventional telephone systems
    1.2.1 System structure
    1.2.2 User behaviour
    1.2.3 Operation strategy
  1.3 Wireless communication systems
    1.3.1 Cellular systems
    1.3.2 Wireless Broadband Systems
    1.3.3 Service classes
  1.4 Communication networks
    1.4.1 Classical telephone network
    1.4.2 Data networks
    1.4.3 Local Area Networks (LAN)
  1.5 ITU recommendations on traffic engineering
    1.5.1 Traffic engineering in the ITU
  1.6 Traffic concepts and grade of service
  1.7 Concept of traffic and traffic unit [erlang]
  1.8 Traffic variations and the concept busy hour
  1.9 The blocking concept
  1.10 Traffic generation and subscribers' reaction
  1.11 Introduction to Grade-of-Service = GoS
    1.11.1 Comparison of GoS and QoS
    1.11.2 Special features of QoS
    1.11.3 Network performance
    1.11.4 Reference configurations

2 Time interval modeling
  2.1 Distribution functions
    2.1.1 Exponential distribution
  2.2 Characteristics of distributions
    2.2.1 Moments
    2.2.2 Residual life-time
    2.2.3 Load from holding times of duration less than x
    2.2.4 Forward recurrence time
    2.2.5 Distribution of the j'th largest of k random variables
  2.3 Combination of random variables
    2.3.1 Random variables in series
          Hypo-exponential or steep distributions
          Erlang-k distributions
    2.3.2 Random variables in parallel
          Hyper-exponential or flat distributions
          Hyper-exponential distribution
          Pareto distribution and Palm's normal forms
    2.3.3 Random variables in series and parallel
          Stochastic sum
          Cox distributions
          Polynomial trial
          Decomposition principles
          Importance of Cox distribution
  2.4 Other time distributions
        Gamma distribution
        Weibull distribution
        Heavy-tailed distributions
  2.5 Observations of life-time distribution

3 Arrival Processes
  3.1 Description of point processes
    3.1.1 Basic properties of number representation
    3.1.2 Basic properties of interval representation
  3.2 Characteristics of point processes
    3.2.1 Stationarity (time homogeneity)
    3.2.2 Independence
    3.2.3 Simplicity or ordinarity
  3.3 Little's theorem
  3.4 Characteristics of the Poisson process
  3.5 Distributions of the Poisson process
    3.5.1 Exponential distribution
    3.5.2 Erlang-k distribution
    3.5.3 Poisson distribution
    3.5.4 Static derivation of the distributions of the Poisson process
  3.6 Properties of the Poisson process
    3.6.1 Palm's theorem
    3.6.2 Raikov's theorem (decomposition theorem)
    3.6.3 Uniform distribution – a conditional property
  3.7 Generalization of the stationary Poisson process
    3.7.1 Interrupted Poisson process (IPP)
    3.7.2 Batched Poisson process

4 Erlang's loss system and B-formula
  4.1 Introduction
  4.2 Poisson distribution
    4.2.1 State transition diagram
    4.2.2 Derivation of state probabilities
    4.2.3 Traffic characteristics of the Poisson distribution
  4.3 Truncated Poisson distribution
    4.3.1 State probabilities
    4.3.2 Traffic characteristics of Erlang's B-formula
  4.4 General procedure for state transition diagrams
    4.4.1 Recursion formula
  4.5 Evaluation of Erlang's B-formula
  4.6 Properties of Erlang's B-formula
    4.6.1 Non-integral number of channels
    4.6.2 Insensitivity
    4.6.3 Derivatives of Erlang-B formula and convexity
    4.6.4 Derivative of Erlang-B formula with respect to A
    4.6.5 Derivative of Erlang-B formula with respect to n
    4.6.6 Inverse Erlang-B formula
    4.6.7 Approximations for Erlang-B formula
  4.7 Fry-Molina's Blocked Calls Held model
  4.8 Principles of dimensioning
    4.8.1 Dimensioning with fixed blocking probability
    4.8.2 Improvement principle (Moe's principle)

5 Loss systems with full accessibility
  5.1 Introduction
  5.2 Binomial Distribution
    5.2.1 Equilibrium equations
    5.2.2 Traffic characteristics of Binomial traffic
  5.3 Engset distribution
    5.3.1 State probabilities
    5.3.2 Traffic characteristics of Engset traffic
  5.4 Relations between E, B, and C
  5.5 Evaluation of Engset's formula
    5.5.1 Recursion formula on n
    5.5.2 Recursion formula on S
    5.5.3 Recursion formula on both n and S
  5.6 Pascal Distribution
  5.7 Truncated Pascal distribution
  5.8 Batched Poisson arrival process
    5.8.1 Infinite capacity
    5.8.2 Finite capacity
    5.8.3 Performance measures

6 Overflow theory
  6.1 Limited accessibility
  6.2 Exact calculation by state probabilities
    6.2.1 Balance equations
    6.2.2 Erlang's ideal grading
          State probabilities
          Upper limit of channel utilization
  6.3 Overflow theory
    6.3.1 State probabilities of overflow systems
  6.4 Equivalent Random Traffic Method
    6.4.1 Preliminary analysis
    6.4.2 Numerical aspects
    6.4.3 Individual stream blocking probabilities
    6.4.4 Individual group blocking probabilities
  6.5 Fredericks & Hayward's method
    6.5.1 Traffic splitting
  6.6 Other methods based on state space
    6.6.1 BPP traffic models
    6.6.2 Sanders' method
    6.6.3 Berkeley's method
    6.6.4 Comparison of state-based methods
  6.7 Methods based on arrival processes
    6.7.1 Interrupted Poisson Process
    6.7.2 Cox-2 arrival process

7 Multi-Dimensional Loss Systems
  7.1 Multi-dimensional Erlang-B formula
  7.2 Reversible Markov processes
  7.3 Multi-Dimensional Loss Systems
    7.3.1 Class limitation
    7.3.2 Generalized traffic processes
    7.3.3 Multi-rate traffic
  7.4 Convolution Algorithm for loss systems
    7.4.1 The convolution algorithm
  7.5 Fredericks-Hayward's method
  7.6 State space based algorithms
    7.6.1 Fortet & Grandjean (Kaufman & Roberts) algorithm
    7.6.2 Generalized algorithm
    7.6.3 Performance measures
          Batch Poisson arrival process
  7.7 Final remarks

8 Dimensioning of telecom networks
  8.1 Traffic matrices
    8.1.1 Kruithof's double factor method
  8.2 Topologies
  8.3 Routing principles
  8.4 Approximate end-to-end calculation methods
    8.4.1 Fix-point method
  8.5 Exact end-to-end calculation methods
    8.5.1 Convolution algorithm
  8.6 Load control and service protection
    8.6.1 Trunk reservation
    8.6.2 Virtual channel protection
  8.7 Moe's principle
    8.7.1 Balancing marginal costs
    8.7.2 Optimum carried traffic

9 Markovian queueing systems
  9.1 Erlang's delay system M/M/n
  9.2 Traffic characteristics of delay systems
    9.2.1 Erlang's C-formula
    9.2.2 Numerical evaluation
    9.2.3 Mean queue lengths
          Mean queue length at a random point of time
          Mean queue length, given the queue is greater than zero
    9.2.4 Mean waiting times
          Mean waiting time W for all customers
          Mean waiting time w for delayed customers
    9.2.5 Improvement functions for M/M/n
  9.3 Moe's principle for delay systems
  9.4 Waiting time distribution for M/M/n, FCFS
  9.5 Single server queueing system M/M/1
    9.5.1 Sojourn time for a single server
  9.6 Palm's machine repair model
    9.6.1 Terminal systems
    9.6.2 State probabilities – single server
    9.6.3 Terminal states and traffic characteristics
    9.6.4 Machine-repair model with n servers
  9.7 Optimizing the machine-repair model
  9.8 Waiting time distribution for M/M/n/S/S–FCFS

10 Applied Queueing Theory
  10.1 Kendall's classification of queueing models
    10.1.1 Description of traffic and structure
    10.1.2 Queueing strategy: disciplines and organization
    10.1.3 Priority of customers
  10.2 General results in the queueing theory
    10.2.1 Load function and work conservation
  10.3 Pollaczek-Khintchine's formula for M/G/1
    10.3.1 Derivation of Pollaczek-Khintchine's formula
    10.3.2 Busy period for M/G/1
    10.3.3 Moments of M/G/1 waiting time distribution
    10.3.4 Limited queue length: M/G/1/k
  10.4 Queueing systems with constant holding times
    10.4.1 Historical remarks on M/D/n
    10.4.2 State probabilities of M/D/1
    10.4.3 Mean waiting times and busy period of M/D/1
    10.4.4 Waiting time distribution: M/D/1, FCFS
    10.4.5 State probabilities: M/D/n
    10.4.6 Waiting time distribution: M/D/n, FCFS
    10.4.7 Erlang-k arrival process: Ek/D/r
    10.4.8 Finite queue system: M/D/1/k
  10.5 Single server queueing system: GI/G/1
    10.5.1 General results
    10.5.2 State probabilities: GI/M/1
    10.5.3 Characteristics of GI/M/1
    10.5.4 Waiting time distribution: GI/M/1, FCFS
  10.6 Priority queueing systems: M/G/1
    10.6.1 Combination of several classes of customers
    10.6.2 Kleinrock's conservation law
    10.6.3 Non-preemptive queueing discipline
    10.6.4 SJF-queueing discipline: M/G/1
    10.6.5 M/M/n with non-preemptive priority
    10.6.6 Preemptive-resume queueing discipline
    10.6.7 M/M/n with preemptive-resume priority
  10.7 Fair Queueing: Round Robin, Processor-Sharing

11 Multi-service queueing systems
  11.1 Reversible multi-chain single-server systems
    11.1.1 Reduction factors for single-server system
    11.1.2 Single-server Processor Sharing (PS) system
    11.1.3 Non-sharing single-server system
    11.1.4 Single-server LCFS-PR system
    11.1.5 Summary for reversible single server systems
    11.1.6 State probabilities for multi-services single-server system
    11.1.7 Generalized algorithm for state probabilities
    11.1.8 Performance measures
  11.2 Reversible multi-{chain & server} systems
    11.2.1 Reduction factors for multi-server systems
    11.2.2 Generalized processor sharing (GPS) system
    11.2.3 Non-sharing multi-{chain & server} system
    11.2.4 Symmetric queueing systems
    11.2.5 State probabilities
    11.2.6 Generalized algorithm for state probabilities
    11.2.7 Performance measures
  11.3 Reversible multi-{rate & chain & server} systems
    11.3.1 Reduction factors
    11.3.2 Generalized algorithm for state probabilities
  11.4 Finite source models

12 Queueing networks
  12.1 Introduction to queueing networks
  12.2 Symmetric (reversible) queueing systems
  12.3 Open networks: single chain
    12.3.1 Kleinrock's independence assumption
  12.4 Open networks: multiple chains
  12.5 Closed networks: single chain
    12.5.1 Convolution algorithm
    12.5.2 MVA-algorithm
  12.6 BCMP multi-chain queueing networks
    12.6.1 Convolution algorithm
  12.7 Other algorithms for queueing networks
  12.8 Complexity
  12.9 Optimal capacity allocation

13 Traffic measurements
  13.1 Measuring principles and methods
    13.1.1 Continuous measurements
    13.1.2 Discrete measurements
  13.2 Theory of sampling
  13.3 Continuous measurements in an unlimited period
  13.4 Scanning method in an unlimited time period
  13.5 Numerical example

Bibliography
Author index
Subject index
Exercises
Tables


Notations
a              Offered traffic per source
A              Offered traffic = Ao
Aℓ             Lost traffic
B              Call congestion
B              Burstiness
c              Constant
C              Traffic congestion = load congestion
Cn             Catalan's number
d              Slot size in multi-rate traffic
D              Probability of delay or Deterministic arrival or service process
E              Time congestion
E1,n(A) = E1   Erlang's B-formula = Erlang's first formula
E2,n(A) = E2   Erlang's C-formula = Erlang's second formula
F              Improvement function
g              Number of groups
h              Constant time interval or service time
H(k)           Palm–Jacobæus formula
I              Inverse time congestion I = 1/E
Iν(z)          Modified Bessel function of order ν
k              Accessibility = hunting capacity
K              Maximum number of customers in a queueing system
L              Number of links in a telecommunication network or
               number of nodes in a queueing network
L              Mean queue length
Lk             Mean queue length when the queue is greater than zero
L              Random variable for queue length
m              Mean value (average) = m1
mi             i'th (non-central) moment
m'i            i'th central moment
mr             Mean residual life time
M              Poisson arrival process
n              Number of servers (channels)
N              Number of traffic streams or traffic types
p(i)           State probabilities, time averages
p{i, t | j, t0}  Probability for state i at time t, given state j at time t0

P(i)           Cumulated state probabilities, P(i) = Σ_{x≤i} p(x)
q(i)           Relative (non-normalized) state probabilities
Q(i)           Cumulated values of q(i): Q(i) = Σ_{x≤i} q(x)
Q              Normalization constant
r              Reservation parameter (trunk reservation)
R              Mean response time
s              Mean service time
S              Number of traffic sources
t              Time instant
T              Random variable for time instant
U              Load function
V              Virtual waiting time
w              Mean waiting time for delayed customers
W              Mean waiting time for all customers
W              Random variable for waiting time
x              Variable
X              Random variable
y              Utilization = mean carried traffic per channel; yi = traffic carried by channel i
Y              Total carried traffic
Z              Peakedness
α              Carried traffic per source
β              Offered traffic per idle source
γ              Arrival rate for an idle source
ε              Palm's form factor
ϑ              Lagrange multiplier
κi             i'th cumulant
λ              Arrival rate of a Poisson process
Λ              Total arrival rate to a system
μ              Service rate, inverse mean service time
π(i)           State probabilities, arriving customer mean values
ψ(i)           State probabilities, departing customer mean values
ϱ              Service ratio
σ²             Variance, σ = standard deviation
τ              Time-out constant or constant time-interval


Chapter 1

Introduction to Teletraffic Engineering


Teletraffic theory is defined as the application of probability theory to the solution of problems concerning planning, performance evaluation, operation, and maintenance of telecommunication systems. More generally, teletraffic theory can be viewed as a discipline of planning where the tools (stochastic processes, queueing theory, and numerical simulation) are taken from the disciplines of operations research.

The term teletraffic covers all kinds of data communication traffic and telecommunication traffic. The theory will primarily be illustrated by examples from telephone and data communication systems. The tools developed are, however, independent of the technology and applicable within other areas such as road traffic, air traffic, manufacturing, distribution, workshop and storage management, and all kinds of service systems.

The objective of teletraffic theory can be formulated as follows: to make the traffic measurable in well-defined units through mathematical models, and to derive relationships between grade-of-service and system capacity in such a way that the theory becomes a tool by which investments can be planned.

The task of teletraffic theory is to design systems as cost-effectively as possible with a predefined grade of service when we know the future traffic demand and the capacity of system elements. Furthermore, it is the task of teletraffic engineering to specify methods for verifying that the actual grade of service fulfils the requirements, and to specify emergency actions when systems are overloaded or technical faults occur. This requires methods for forecasting the demand (for instance based on traffic measurements), methods for calculating the capacity of the systems, and specification of quantitative measures for the grade of service.

When applying the theory in practice, a series of decision problems arise, concerning both short-term and long-term arrangements. Short-term decisions include, for example, the determination of the number of channels in a base station of a cellular network, the number of operators in a call center, the number of open lanes in the supermarket, and the allocation of priorities to jobs in a computer system. Long-term decisions include decisions concerning the development and extension of data- and telecommunication networks, extension of cables, radio links, establishment of new base stations, etc.

The application of the theory to the design of new systems can help in comparing different solutions, and thus eliminate non-optimal solutions at an early stage without having to implement prototypes.
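As a small illustration of the relationship between grade-of-service and system capacity, the short-term question above (how many channels in a base station?) can be answered with Erlang's B-formula, which is treated in Chapter 4. The sketch below, a Python rendering with purely illustrative traffic values, uses the standard recursion for E_{1,n}(A):

```python
def erlang_b(A: float, n: int) -> float:
    """Blocking probability E_{1,n}(A) of an n-channel loss system
    offered A erlang, computed with the recursion
    E_{1,k} = A*E_{1,k-1} / (k + A*E_{1,k-1}),   E_{1,0} = 1."""
    E = 1.0
    for k in range(1, n + 1):
        E = A * E / (k + A * E)
    return E

def channels_needed(A: float, target_blocking: float) -> int:
    """Smallest number of channels n such that E_{1,n}(A) <= target."""
    n, E = 0, 1.0
    while E > target_blocking:
        n += 1
        E = A * E / (n + A * E)
    return n

# Example: a cell offered 30 erlang, dimensioned for at most 1 % blocking.
n = channels_needed(30.0, 0.01)
```

The recursion itself reappears in Sec. 4.4.1; the offered traffic and blocking target here are assumptions chosen only for illustration.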

1.1 Modeling of telecommunication systems

For the analysis of a telecommunication system, a model of the system considered must be set up. Especially for applications of teletraffic theory to new systems, this modeling process is of fundamental importance. It requires knowledge of the technical system, the available mathematical tools, and the implementation of the model in a computer. Such a model contains three main elements (Fig. 1.1): the system structure, the operational strategy, and the statistical properties of the traffic.

[Fig. 1.1 diagram: the user side (man, stochastic) generates traffic demands toward the system side (machine, deterministic), which consists of a structure (hardware) and a strategy (software).]

Figure 1.1: Telecommunication systems are complex man/machine systems. The task of teletraffic theory is to configure optimal systems from knowledge of user requirements and behavior.


1.1.1 System structure

This part is technically determined, and it is in principle possible to obtain any level of detail in the description, e.g. at the component level. Reliability aspects are random processes, as failures occur more or less at random; they can be dealt with as traffic with the highest priority. The system structure is given by hardware and software, which are described in manuals. In road traffic systems, roads, traffic signals, roundabouts, etc. make up the structure.

1.1.2 Operational strategy

A given physical system can be used in different ways in order to adapt the system to the traffic demand. In road traffic, this is implemented with traffic rules and strategies, which may adapt to traffic variations during the day. In a computer, this adaptation takes place by means of the operating system and by operator interference. In a telecommunication system, strategies are applied in order to give priority to call attempts and in order to route the traffic to its destination. In Stored Program Controlled (SPC) telephone exchanges, the tasks assigned to the central processor are divided into classes with different priorities. The highest priority is given to calls already accepted, followed by new call attempts, whereas routine control of equipment has lower priority. Classical telephone systems used wired logic to implement strategies, while in modern systems this is done by software, enabling more flexible and adaptive strategies.

1.1.3 Statistical properties of traffic

User demands are modeled by the statistical properties of the traffic. It is only possible to validate that a mathematical model is in agreement with reality by comparing results obtained from the model with measurements on real systems. This process must necessarily be of an iterative nature (Fig. 1.2). A mathematical model is built up from a thorough knowledge of the traffic. Properties are then derived from the model and compared to measured data. If they are not in satisfactory agreement, a new iteration of the process must take place. It appears natural to split the description of the traffic properties into random processes for the arrival of call attempts and processes describing service (holding) times. These two processes are usually assumed to be mutually independent, meaning that the duration of a call is independent of the time the call arrives. Models also exist for describing the behaviour of users (subscribers) experiencing blocking, i.e. they are refused service and may make a new call attempt a little later (repeated call attempts). Fig. 1.3 illustrates the terminology usually applied in teletraffic theory.

CHAPTER 1. INTRODUCTION TO TELETRAFFIC ENGINEERING

[Figure 1.2 diagram: Observation → Model → Deduction, verified against Data]


Figure 1.2: Teletraffic theory is an inductive discipline. From observations of real systems we establish theoretical models, from which we derive parameters, which can be compared with corresponding observations from the real system. If there is agreement, the model has been validated. If not, then we have to elaborate the model further. This scientific way of working is called the research loop.

[Figure 1.3 diagram: busy/idle periods of a traffic process along the time axis, marking inter-arrival time, holding time, idle time, and the arrival and departure instants]

Figure 1.3: Illustration of the terminology applied for a traffic process. Notice the difference between time intervals and instants of time. We use the terms arrival, call, and connection synonymously. The inter-arrival time, respectively the inter-departure time, is the time interval between two successive arrivals, respectively departures.
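The time intervals in the figure can be generated in a small simulation. The sketch below assumes a Poisson arrival process and exponentially distributed holding times; the rates λ = 2 and μ = 0.5 are arbitrary illustrative values, not taken from the text:

```python
import random

random.seed(1)
lam, mu = 2.0, 0.5           # arrival rate (calls per time unit) and service rate
n_calls = 100_000

# Inter-arrival times and holding times of a simple traffic process
inter_arrivals = [random.expovariate(lam) for _ in range(n_calls)]
holding_times  = [random.expovariate(mu)  for _ in range(n_calls)]

mean_ia = sum(inter_arrivals) / n_calls   # should be close to 1/lam
mean_ht = sum(holding_times)  / n_calls   # should be close to 1/mu
print(mean_ia, mean_ht)
```

Note how the two processes are generated independently of each other, matching the usual independence assumption mentioned above.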


1.1.4 Models

General requirements to an engineering model are:

1. It must, without major difficulties, be possible to verify the model and to determine the model parameters from observed data.

2. It must be feasible to apply the model for practical dimensioning.

We are looking for a description of, for example, the variations observed in the number of ongoing established calls in a telephone exchange, which changes incessantly due to calls being established and terminated. Even though the common habits of subscribers imply that daily variations follow a predictable pattern, it is impossible to predict individual call attempts or the duration of individual calls. In the description it is therefore necessary to use statistical methods. We say that call attempt events take place according to a random (= stochastic) process, and the inter-arrival times between call attempts are described by probability distributions which characterize the random process.

We may classify models into three classes:

1. Mathematical models, which are general but often approximate. We may optimize the parameters analytically or numerically.

2. Simulation models, where we may use either measured data or artificial data from statistical distributions. It is more resource demanding to work with simulation models, as they are not very general: every individual case must be simulated.

3. Physical models (prototypes), which are even more time and resource consuming than simulation models.

In general, mathematical models are therefore preferred, but often it is necessary to apply simulation to develop the mathematical model. Sometimes prototypes are developed for ultimate testing.

1.2 Conventional telephone systems

This section gives a short description of what happens when a call attempt arrives at a traditional telephone exchange. We divide the description into three parts: structure, strategy, and traffic. It is common practice to distinguish between subscriber exchanges (access switches, local exchanges (LEX)) and transit exchanges (TEX) due to the hierarchical structure according to which most national telephone networks are designed. Subscribers are connected to local exchanges or to access switches (concentrators), which are connected to local exchanges. Finally, transit exchanges are used to interconnect local exchanges or to increase availability and reliability.


1.2.1 System structure

Let us consider a historical telephone exchange of the crossbar type. Even though this type has been taken out of service, a description of its functionality gives a good illustration of the tasks which need to be solved in a digital exchange. The equipment in a conventional telephone exchange consists of voice paths and control paths (Fig. 1.4).
[Figure 1.4 diagram: voice paths through subscriber stage, group selector and junctor; control paths through processors and a register]

Figure 1.4: Fundamental structure of a switching system.

The voice paths are occupied during the whole duration of the call (on average 2–3 minutes), while the control paths are only occupied during the phase of call establishment (in the range 0.1 to 1 s). The number of voice paths is therefore considerably larger than the number of control paths. The voice path is a connection from a given inlet (subscriber) to a given outlet. In a space division system the voice paths consist of passive components (like relays, diodes or VLSI circuits). In a time division system the voice paths consist of specific time-slots within a frame. The control paths are responsible for establishing the connection. Usually, this happens in a number of steps, where each step is performed by a control device: a microprocessor, or a register (originally a human operator). The tasks of the control device are:

- Identification of the originating subscriber (who wants a connection (inlet)).
- Reception of the digit information (address, outlet).
- Search for an idle connection between inlet and outlet.
- Establishment of the connection.
- Release of the connection when the conversation ends (sometimes performed by the voice path itself).
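The holding-time ratio explains why so few control devices are needed. Using the classical relation offered traffic = arrival rate × mean holding time (in erlang), a rough sketch with assumed figures (1800 call attempts per hour, 150 s mean conversation time, 0.5 s mean set-up time; all three numbers are illustrative, not from the text):

```python
calls_per_second = 1800 / 3600       # assumed 1800 call attempts per hour
voice_holding_s   = 150.0            # assumed mean conversation time (2.5 min)
control_holding_s = 0.5              # assumed mean call set-up time

A_voice   = calls_per_second * voice_holding_s    # load on voice paths, erlang
A_control = calls_per_second * control_holding_s  # load on control paths, erlang

print(A_voice, A_control)   # the voice load is 300 times the control load here
```

The same call stream thus loads the voice paths hundreds of times more than the control paths, which is why the number of voice paths must be considerably larger.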


In addition, the charging of the calls must also be taken care of. In conventional exchanges the control path is built up of relays and/or electronic devices, and the logical operations are implemented by wired logic. Changes of functions require hardware modifications, which are complex and expensive. In digital exchanges the control devices are processors. The logical functions are carried out by software, and changes are much easier to implement. The restrictions are far less constraining, and the logical operations can be far more complex than with wired logic. Software controlled exchanges are also called SPC systems (Stored Program Controlled systems).

1.2.2 User behaviour

We still consider a conventional telephone system. When an A-subscriber initiates a call, the hook is taken off and the wire pair to the subscriber is short-circuited. This triggers a relay at the exchange. The relay identifies the subscriber, and a microprocessor in the subscriber stage chooses an idle cord. The subscriber line and the cord are connected through a switching stage. This terminology originates from the time when a manual operator by means of the cord was connected to the subscriber. A manual operator corresponds to a register. The cord has three outlets.

A register is connected to the cord through another switching stage. Thereby the subscriber line is connected to a register (via the register selector) via the cord. This phase takes less than one second.

The register sends the dial tone to the A-subscriber, who dials the digits of the telephone number of the B-subscriber; the digits are received and stored by the register. The duration of this phase depends on the subscriber.

A microprocessor analyzes the digit information and by means of a group selector establishes a connection to the desired subscriber. It can be a subscriber at the same exchange, at a neighbour exchange, or at a remote exchange. It is common to distinguish between exchanges to which a direct link exists, and exchanges for which this is not the case. In the latter case the connection must go through an exchange at a higher level in the hierarchy. The digit information is delivered by means of a code transmitter to the code receiver of the desired exchange, which then transmits the information to the registers of that exchange.

The register has now fulfilled its obligations and is released, so that it is idle for serving new call attempts. The microprocessors work very fast (around 1–10 ms) and independently of the subscribers. The cord is occupied during the whole duration of the call and takes over control of the call when the register is released.
It takes care of different types of signals (busy, reference, etc.), charging information, release of the connection when the call is put down, etc. It happens that a call does not pass on as planned. The subscriber may make an error,


suddenly hang up, etc. Furthermore, the system has a limited capacity. This will be dealt with in Sec. 1.6. Call attempts towards a subscriber take place in approximately the same way. A code receiver at the exchange of the B-subscriber receives the digits, and a connection is set up through the group switching stage and the local switch stage to the B-subscriber, using the registers of the receiving exchange.

1.2.3 Operation strategy

The voice paths normally work as a loss system, while the control paths work as delay systems (Sec. 1.6). If there is not both an idle cord and an idle register, then the subscriber will get no dial tone, no matter how long he/she waits. If there is no idle outlet from the exchange towards the desired B-subscriber, a busy tone will be sent to the calling A-subscriber, and no matter how long he/she waits, no connection will be established.

If a microprocessor (or all microprocessors of a specific type, when there is more than one) is busy, then the call will wait until a microprocessor becomes idle. Due to the very short holding time, the waiting time will often be so short that the subscribers do not notice anything. If several subscribers are waiting for the same microprocessor, they will usually be served in random order, independent of the time of arrival.

The way in which control devices of the same type and the cords share the work is often cyclic, so that they get approximately the same number of call attempts. This is an advantage, since it ensures the same amount of wear, and since a subscriber only rarely will get a defective cord or control path again if the call attempt is repeated. If a control path is occupied longer than a given time, a forced disconnection of the call will take place. This makes it impossible for a single call to block vital parts of the exchange, e.g. a register. It is also only possible to generate the ringing tone towards a B-subscriber for a limited duration of time, and thus block this telephone for a limited time at each call attempt. An exchange must be able to operate and function independently of subscriber behaviour.

The cooperation between the different parts takes place in accordance with strict and well-defined rules, called protocols, which in conventional systems are determined by the wired logic and in software controlled systems by the software logic. The digital systems (e.g.
ISDN = Integrated Services Digital Network, where the whole telephone system is digital from subscriber to subscriber (2B + D = 2·64 + 16 Kbps per subscriber); ISDN = N-ISDN = Narrow-band ISDN) of course operate in a way different from the conventional systems described above. However, the fundamental teletraffic tools for evaluation are the same in both systems. The same also applies to the future broadband systems B-ISDN, which are based on ATM (Asynchronous Transfer Mode) and MPLS (Multi Protocol Label Switching).
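The distinction between loss and delay systems is central to the rest of the book. For a pure loss system, the blocking probability of n channels offered A erlang is given by the classical Erlang B formula, which can be evaluated by a simple recursion (a sketch; the formula itself is derived later in the book):

```python
def erlang_b(A: float, n: int) -> float:
    """Blocking probability E(n, A) of n channels offered A erlang,
    computed with the numerically stable recursion
    E(0) = 1,  E(k) = A*E(k-1) / (k + A*E(k-1))."""
    E = 1.0
    for k in range(1, n + 1):
        E = A * E / (k + A * E)
    return E

print(erlang_b(8.0, 10))   # blocking when 10 channels are offered 8 erlang
```

The recursion avoids the large factorials of the direct formula and is the standard way to evaluate Erlang B numerically.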


1.3 Wireless communication systems

A tremendous expansion is seen these years in mobile communication systems, where the transmission medium is either analogue or digital radio channels (wireless) instead of conventional wired systems. The electromagnetic frequency spectrum is divided into different bands reserved for specific purposes. For mobile communications a subset of these bands is reserved. Each band corresponds to a limited number of radio telephone channels, and it is here the limited resource of mobile communication systems is located. The optimal utilization of this resource is a main issue in cellular technology. In the following subsection a representative system is described.

1.3.1 Cellular systems

Structure. When a certain geographical area is to be supplied with mobile telephony, a suitable number of base stations must be put into operation in the area. A base station is an antenna with transmission/receiving equipment or a radio link to a mobile telephone exchange (MTX), which is part of the traditional telephone network. A mobile telephone exchange is common to all the base stations in a given traffic area. Radio waves are attenuated when they propagate in the atmosphere, and a base station is therefore only able to cover a limited geographical area, which is called a cell (not to be confused with ATM cells). By transmitting the radio waves at adequate power, it is possible to adapt the coverage such that all base stations cover the planned traffic area without too much overlap between neighbour stations. It is not possible to use the same radio frequency in two neighbouring base stations, but in two base stations without a common border the same frequency can be used, thereby allowing the channels to be reused. Fig. 1.5 shows an example. A certain number of channels per cell, corresponding to a given traffic volume, is thereby made available. The size of the cell will depend on the traffic volume. In densely populated areas such as major cities the cells will be small, while in sparsely populated areas the cells will be large.

Frequency allocation is a complex problem. In addition to the restrictions given above, a number of other limitations also exist. For example, there has to be a certain distance (number of channels) between two channels used at the same base station (neighbour channel restriction), and to avoid interference other restrictions also exist.

Strategy. In mobile telephone systems a database with information about all the subscribers has to exist. Any subscriber is either active or passive, corresponding to whether the radio telephone is switched on or off.
When the subscriber turns on the phone, it is automatically assigned to a so-called control channel, and an identification of the subscriber takes place. The control channel is a radio channel used by the base station for control. The remaining channels are traffic channels.


Figure 1.5: Cellular mobile communication system. By dividing the frequencies into 3 groups (A, B and C) they can be reused as shown.

A call request towards a mobile subscriber (B-subscriber) takes place in the following way. The mobile telephone exchange receives the call from the other subscriber (A-subscriber, fixed or mobile). If the B-subscriber is passive (handset switched off), the A-subscriber is informed that the B-subscriber is not available. If the B-subscriber is active, then the number is sent out on all control channels in the traffic area. The B-subscriber recognizes his own number and informs the system, via the control channel, about the identity of the cell (base station) in which he is located. If an idle traffic channel exists, it is allocated, and the MTX sets up the call.

A call request from a mobile subscriber (A-subscriber) is initiated by the subscriber shifting from the control channel to a traffic channel, where the call is established. The first phase, with recording the digits and testing the accessibility of the B-subscriber, is in some cases performed by the control channel (common channel signalling).

A subscriber is able to move freely within his own traffic area. When moving away from the base station, this is detected by the MTX, which constantly monitors the signal-to-noise ratio, and the MTX moves the call to another base station and to another traffic channel with better quality when this is required. This takes place automatically by cooperation between the MTX and the subscriber equipment, usually without being noticed by the subscriber. This operation is called hand-over, and of course requires the existence of an idle traffic channel in the new cell. Since it is improper to interrupt an existing call, hand-over calls are given higher priority than new calls. This strategy can be implemented by reserving one or two idle channels for hand-over calls.

When a subscriber is leaving his traffic area, so-called roaming will take place. The MTX

B
 

C
 

C
                   

"

"

"

"

"

"

"

"

"

"

"

"

B
       

"

"

"

"

A
   

"

"

"

"

"

"

"

"

B
! !

      

C
             

   

A
! ! ! !

B
! !

B A B

1.3. WIRELESS COMMUNICATION SYSTEMS

11

in the new area is able, from the identity of the subscriber, to locate the home MTX of the subscriber. A message is forwarded to the home MTX with information on the new position. Incoming calls to the subscriber will always go to the home MTX, which will then route the call to the new MTX. Outgoing calls will be taken care of in the usual way.

A widespread digital wireless system is GSM, which can be used throughout Western Europe. The International Telecommunication Union is working towards a global mobile system UPC (Universal Personal Communication), where subscribers can be reached worldwide (IMT-2000). Paging systems are primitive one-way systems. DECT, Digital European Cordless Telephone, is a standard for wireless telephones. They can be applied locally in companies, business centres, etc. In the future, equipment which can be applied both for DECT and GSM will appear. Here DECT corresponds to a system with very small cells, while GSM is a system with larger cells.

Satellite communication systems are also being planned, in which low orbit satellites correspond to base stations. The first such system, Iridium, consisted of 66 satellites, such that more than one satellite was always available at any given location within the geographical range of the system. The satellites have orbits only a few hundred kilometres above the Earth. Iridium was unsuccessful, but newer systems such as the Inmarsat system are now in operation.
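The hand-over priority strategy described above (reserving guard channels for hand-over calls) can be evaluated with a simple one-dimensional state model. The sketch below assumes Poisson arrivals of new and hand-over calls and exponential channel holding times; all rates are illustrative:

```python
def guard_channel_blocking(lam_new, lam_ho, mu, n, g):
    """A cell with n channels, of which g are guard channels:
    new calls are accepted only while fewer than n - g channels are busy,
    hand-over calls while fewer than n are busy.
    Returns (new-call blocking, hand-over blocking)."""
    # Unnormalized state probabilities p[k] = P(k channels busy)
    p = [1.0]
    for k in range(1, n + 1):
        lam = lam_new + lam_ho if k - 1 < n - g else lam_ho
        p.append(p[-1] * lam / (k * mu))
    total = sum(p)
    p = [x / total for x in p]
    b_new = sum(p[n - g:])   # states in which new calls are blocked
    b_ho = p[n]              # hand-over calls blocked only when all busy
    return b_new, b_ho
```

With g = 0 both probabilities coincide with the Erlang B blocking; with g > 0 hand-over calls see a lower blocking than new calls, at the price of a higher new-call blocking.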

1.3.2 Wireless Broadband Systems

In these systems we have an analogue high-capacity channel, for example 10 MHz, which is turned into a digital channel with a capacity of up to 100 Mbps, depending on the coding scheme, which again depends on the quality of the channel. The digital channel (medium) is shared by many users according to a medium access control (MAC) protocol. If all services had the same constant bandwidth demand, we could split the digital channel up into many constant bit rate channels. This was done in classical systems by frequency division multiple access (FDMA). Most data and multimedia services have variable bandwidth demand during the occupation time. Therefore, the digital channel is split up in time into time-slots, and we apply time division multiple access (TDMA). A certain number of time-slots make up a frame, which is repeated indefinitely in time. Thus a time-slot in each frame corresponds to the minimum bandwidth allocated. The information transmitted by a user is aggregated and transmitted in one or more slots in every frame. The frame size specifies the maximum delay the information experiences due to the slotted time. Slot size and frame size should be specified according to the quality of service and restrictions from coding and switching mechanisms. One slot in a frame in TDMA thus corresponds to one channel in FDMA. The


advantage of TDMA is that we can change the allocation of slots from one frame to the next and thus reallocate the bandwidth resources very fast.
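The relation between slot allocation and bandwidth can be illustrated with a small calculation; the channel rate, frame length and slot count below are assumed illustrative values, not taken from the text:

```python
channel_rate = 100e6        # assumed digital channel capacity: 100 Mbps
frame_duration = 0.010      # assumed frame length: 10 ms
slots_per_frame = 1000      # assumed number of slots per frame

bits_per_slot = channel_rate * frame_duration / slots_per_frame
# The minimum bandwidth is one slot per frame:
min_bandwidth = bits_per_slot / frame_duration   # = channel_rate / slots_per_frame
# A user holding 25 slots in every frame gets:
user_bandwidth = 25 * min_bandwidth

print(bits_per_slot, min_bandwidth, user_bandwidth)
```

Information waiting for its slot may be delayed by up to one frame, so the 10 ms frame length here also bounds the slotting delay, as stated above.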

Service classes

In most digital service-integrated systems we specify four service classes: two for real-time services and two for non-real-time services.

Real-time services:

- Constant bit-rate real-time services. These services require a constant bandwidth. Examples are voice services such as ISDN and VoIP (voice over IP). For this kind of service we have to reserve a fixed number of slots in each frame.

- Variable bit-rate real-time services. These services have a variable bandwidth demand. Examples are most data services, and also voice and video services with codecs (coder/decoder) having variable bit rate output. During each frame we allocate a certain capacity to a service. We may have restrictions upon the maximum number of slots, the average number of slots, etc.

Non-real-time services:

- Non-real-time polling services. These are services which do not require real-time transmission, but there may be restrictions on the minimum bandwidth allocated. The services ask for a certain number of slots, and the system allocates slots in each frame depending on the number of idle slots.

- Best effort traffic. This traffic uses the remaining capacity left over from the other services. Also here we may guarantee a certain minimum bandwidth. This could for example be ftp traffic.

By traffic engineering we develop strategies for acceptance of connections and specify strategies for allocation of capacity to the classes, so that we can fulfil the service level agreement (SLA) between user and operator. We also specify policing mechanisms to ensure that the user traffic conforms to the agreed parameters. The SLA specifies the quality of service (QoS) guaranteed by the operator. For each service there may be different levels of QoS, for example named Gold, Silver, and Bronze. A subscriber asking for Gold service will require more resources and also pay more for the service. The task of traffic engineering is to both maximize the utilization of the resources and fulfil the QoS requirements.
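A strict-priority slot allocation over the four classes can be sketched as follows; the class names, frame size and demands are illustrative assumptions, and real MAC schedulers additionally enforce per-class maxima, averages and minimum guarantees:

```python
def allocate_frame(total_slots, demands):
    """demands: (class name, requested slots) in decreasing priority order.
    Each class gets its request, truncated to what is still free."""
    allocation, free = {}, total_slots
    for name, requested in demands:
        granted = min(requested, free)
        allocation[name] = granted
        free -= granted
    return allocation

frame = allocate_frame(100, [("cbr", 30), ("rt-vbr", 50),
                             ("nrtps", 40), ("best-effort", 100)])
print(frame)   # nrtPS gets only part of its request; best effort gets the rest
```

In this example the constant and variable bit-rate classes are served in full, the polling class is partly served, and best effort traffic receives whatever is left over, mirroring the class descriptions above.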

1.4 Communication networks

There exist different kinds of communication networks: telephone networks, data networks, the Internet, etc. Today the telephone network is dominating, and physically other networks will


often be integrated into the telephone network. In future digital networks the plan is to integrate a large number of services into the same network (ISDN, B-ISDN).

1.4.1 Classical telephone network

The telephone network has traditionally been built up as a hierarchical system. The individual subscribers are connected to a subscriber switch or sometimes a local exchange (LEX). This part of the network is called the access network. The subscriber switch is connected to a specific main local exchange, which again is connected to a transit exchange (TEX), of which there usually is at least one for each area code. The transit exchanges are normally connected in a mesh structure (Fig. 1.6). The connections between the transit exchanges are called the hierarchical transit network. There furthermore exist connections between two local exchanges (or subscriber switches) belonging to different transit exchanges (local exchanges) if the traffic demand is sufficient to justify them.

[Figure 1.6 diagrams: mesh network, star network, ring network]

Figure 1.6: There are three basic structures of networks: mesh, star and ring. Mesh networks are applicable when there are few large exchanges (upper part of the hierarchy, also named polygon networks), whereas star networks are proper when there are many small exchanges (lower part of the hierarchy). Ring networks are applied, for example, in fibre optical systems.

A connection between two subscribers in different transit areas will normally pass the following exchanges:

USER → LEX → TEX → TEX → LEX → USER

The individual transit trunk groups are based on either analogue or digital transmission systems, and multiplexing equipment is often used. Twelve analogue channels of 3 kHz each make up one first-order bearer frequency system (frequency multiplexing), while 32 digital channels of 64 Kbps each make up a first-order PCM system of 2.048 Mbps (pulse-code multiplexing, time multiplexing).


The 64 Kbps are obtained by sampling the analogue signal at a rate of 8 kHz with an amplitude accuracy of 8 bits. Two of the 32 channels in a PCM system are used for signalling and control.
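As a quick check, the PCM rates quoted above follow directly from the sampling parameters:

```python
sample_rate_hz = 8000        # 8 kHz sampling of the analogue signal
bits_per_sample = 8          # amplitude accuracy of 8 bits

channel_rate = sample_rate_hz * bits_per_sample   # 64 Kbps per channel
system_rate = 32 * channel_rate                   # first-order PCM system rate
voice_channels = 32 - 2                           # 2 channels carry signalling

print(channel_rate, system_rate, voice_channels)
```

The 32-channel frame thus yields the 2.048 Mbps system rate stated in the text, with 30 channels left for voice.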
[Figure 1.7 diagram: three-level hierarchy with subscriber exchanges (L), main exchanges (T) and inter-urban exchanges (I)]

Figure 1.7: In a telecommunication network all exchanges are typically arranged in a three-level hierarchy. Local exchanges or subscriber exchanges (L), to which the subscribers are connected, are connected to main exchanges (T), which again are connected to inter-urban exchanges (I). An inter-urban area thus makes up a star network. The inter-urban exchanges are interconnected in a mesh network. In practice, the two network structures are mixed, because direct trunk groups are established between any two exchanges when there is sufficient traffic.

Due to reliability and security, there will almost always exist at least two disjoint paths between any two exchanges, and the strategy will be to use the cheapest connections first. The hierarchy in the Danish digital network is reduced to two levels only. The upper level with transit exchanges consists of a fully connected mesh network, while the local exchanges and subscriber switches are connected to two or three different transit exchanges for security and reliability reasons.

The telephone network is characterized by the fact that, before any two subscribers can communicate, a full two-way (duplex) connection must be created, and the connection exists during the whole duration of the communication. This property is referred to as the telephone network being connection oriented, as distinct from, for example, the Internet, which is connection-less. Any network applying line-switching or circuit-switching is connection oriented. A packet switching network may be either connection oriented (for example virtual connections in ATM) or connection-less.

In the discipline of network planning, the objective is to optimize network structures and traffic routing under the consideration of traffic demands, service and reliability requirements, etc.
Example 1.4.1: VSAT networks
VSAT networks (Maral, 1995 [85]) are for instance used by multi-national organizations for transmission of speech and data between different divisions, for news broadcasting, in case of disasters,


etc. They can comprise both point-to-point connections and point-to-multipoint connections (distribution and broadcast). The acronym VSAT stands for Very Small Aperture Terminal (Earth station), which is an antenna with a diameter of 1.6–1.8 metres. The terminal is cheap and mobile. It is thus possible to bypass the public telephone network. The signals are transmitted from a VSAT terminal via a satellite towards another VSAT terminal. The satellite is in a fixed position 35 786 km above equator, and the signals therefore experience a propagation delay of around 125 ms per hop. The available bandwidth is typically partitioned into channels of 64 Kbps, and the connections can be one-way or two-way.

In the simplest version, all terminals transmit directly to all others, and a full mesh network is the result. The available bandwidth can either be assigned in advance (fixed assignment) or dynamically assigned (demand assignment). Dynamic assignment gives better utilization but requires more control. Due to the small parabola (antenna) and an attenuation of typically 200 dB in each direction, it is practically impossible to avoid transmission errors, so error correcting codes and possibly retransmission schemes are used.

A more reliable system is obtained by introducing a main terminal (a hub) with an antenna of 4 to 11 metres in diameter. Communication then takes place through the hub. Both hops (VSAT → hub and hub → VSAT) become more reliable, since the hub is able to receive the weak signals and amplify them so that the receiving VSAT gets a stronger signal. The price to be paid is that the propagation delay now is 500 ms. The hub solution also enables centralized control and monitoring of the system. Since all communication is going through the hub, the network structure constitutes a star topology. □
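The delays quoted in the example can be checked from the geostationary altitude; the true slant range from a terminal to the satellite is somewhat longer than the altitude, which is why the quoted figures are slightly larger than the minimum computed below:

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
altitude_m = 35_786e3      # geostationary altitude above the equator (from the text)

leg = altitude_m / C       # minimum one-way Earth-satellite delay (one "hop")
via_hub = 4 * leg          # VSAT -> sat -> hub, then hub -> sat -> VSAT

print(leg, via_hub)
```

One leg is close to the 125 ms per hop stated above, and the four legs of the hub configuration add up to roughly the quoted 500 ms.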

1.4.2 Data networks

Data networks are sometimes engineered according to the same principles as the telephone network, except that the duration of the connection establishment phase is much shorter. Another kind of data network is the packet switching network, which works according to the store-and-forward principle (see Fig. 1.8). The data to be transmitted are sent from transmitter to receiver in steps from exchange to exchange. This may create delays, since the exchanges, which are computers, work as delay systems (connection-less transmission).

If the packets have a maximum fixed length, the network is denoted packet switching (e.g. the X.25 protocol). In X.25 a message is segmented into a number of packets which do not necessarily follow the same path through the network. The protocol header of the packet contains a sequence number, such that the packets can be arranged in the correct order at the receiver. Furthermore, error correction codes are used, and the correctness of each packet is checked at the receiver. If the packet is correct, an acknowledgement is sent back to the preceding node, which can now delete its copy of the packet. If the preceding node does not receive an acknowledgement within a given time interval, a new copy of the packet (or a whole frame of packets) is retransmitted. Finally, there is a control of the whole message from transmitter to receiver. In this way a very reliable transmission is obtained. If the whole message is sent in a single packet, it is denoted message switching.


[Figure 1.8 diagram: network of five interconnected nodes (1–5) with HOST computers attached]

Figure 1.8: Datagram network: store-and-forward principle for a packet switching data network. Since the exchanges in a data network are computers, it is feasible to apply advanced strategies for traffic routing.

1.4.3 Local Area Networks (LAN)

Local area networks are a very specific but also very important type of data network, where all users are attached through a computer to the same digital transmission system, e.g. a coaxial cable. Normally, only one user at a time can use the transmission medium and get some data transmitted to another user. Since the transmission system has a large capacity compared to the demand of the individual users, a user experiences the system as if he were the only user. There exist several types of local area networks. The assignment of capacity when many users compete for transmission is taken care of by applying adequate strategies for medium access control (MAC).

There exist two main types of local area networks: CSMA/CD (Ethernet) and token networks. CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is the most widely used. All terminals are listening to the transmission medium all the time and know when it is idle and when it is occupied. At the same time a terminal can see which packets are addressed to the terminal itself and therefore should be received and stored. A terminal wanting to transmit a packet transmits it if the medium is idle. If the medium is occupied, the terminal waits a random amount of time


before trying again. Due to the finite propagation speed, it is possible that two (or even more) terminals start transmission within such a short time interval that two or more messages collide on the medium. This is denoted a collision. Since all terminals are listening all the time, they can immediately detect that the transmitted information differs from what they receive and conclude that a collision has taken place (CD = Collision Detection). The terminals involved immediately stop transmitting and try again a random amount of time later (back-off).

In local area networks of the token type, only the terminal presently possessing the token can transmit information. The token circulates between the terminals according to predefined rules.

Local area networks based on the ATM technique are also in operation. Furthermore, wireless LANs are very common. The propagation delay is negligible in local area networks due to the small geographical distance between the users. In for example a satellite data network, the propagation delay is large compared to the length of the messages, and in these applications other strategies than those used in local area networks are applied.
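The "random amount of time" before a retry is, in classical Ethernet, chosen by truncated binary exponential back-off. A minimal sketch in slot counts (the doubling rule and the truncation exponent of 10 follow the classical Ethernet scheme; the slot duration itself is not modeled here):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """After the i-th collision of a packet, wait a random number of slot
    times drawn uniformly from {0, 1, ..., 2**min(i, max_exponent) - 1}."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

random.seed(42)
waits = [backoff_slots(i) for i in range(1, 6)]
print(waits)   # the range of possible waits doubles after each collision
```

Doubling the back-off window after each collision spreads the retries of competing terminals further apart, which is what resolves repeated collisions.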

1.5 ITU recommendations on traffic engineering

The following section is based on ITU-T draft Recommendation E.490.1: Overview of Recommendations on traffic engineering. See also (Villen, 2002 [116]). The International Telecommunication Union (ITU) is an organization sponsored by the United Nations for promoting international telecommunications. It has three sectors: the Telecommunication Standardization Sector (ITU-T), the Radiocommunication Sector (ITU-R), and the Telecommunication Development Sector (ITU-D). The primary function of the ITU-T is to produce international standards for telecommunications. The standards are known as recommendations. Although the original task of the ITU-T was restricted to facilitating international inter-working, its scope has been extended to cover national networks, and the ITU-T recommendations are nowadays widely used as de facto national standards and as references. The aim of most recommendations is to ensure compatible inter-working of telecommunication equipment in a multi-vendor and multi-operator environment. But there are also recommendations that advise on best practices for operating networks. Included in this group are the recommendations on traffic engineering.


CHAPTER 1. INTRODUCTION TO TELETRAFFIC ENGINEERING

The ITU-T is divided into Study Groups. Study Group 2 (SG2) is responsible for Operational Aspects of Service Provision, Networks and Performance. Each Study Group is divided into Working Parties.

1.5.1 Traffic engineering in the ITU

Although Working Party 3/2 has the overall responsibility for traffic engineering, some recommendations on traffic engineering or related to it have been (or are being) produced by other Groups. Study Group 7 deals in the X Series with traffic engineering for data communication networks, Study Group 11 has produced some recommendations (Q Series) on traffic aspects related to system design of digital switches and signalling, and some recommendations of the I Series, prepared by Study Group 13, deal with traffic aspects related to network architecture of N- and B-ISDN and IP-based networks. Within Study Group 2, Working Party 1 is responsible for the recommendations on routing and Working Party 2 for the recommendations on network traffic management. This section will focus on the recommendations produced by Working Party 3/2. They are in the E Series (numbered between E.490 and E.799) and constitute the main body of ITU-T recommendations on traffic engineering. The recommendations on traffic engineering can be classified according to the four major traffic engineering tasks:

- Traffic demand characterization;
- Grade of Service (GoS) objectives;
- Traffic controls and dimensioning;
- Performance monitoring.

The interrelation between these four tasks is illustrated in Fig. 1.9. The initial tasks in traffic engineering are to characterize the traffic demand and to specify the GoS (or performance) objectives. The results of these two tasks are input for dimensioning network resources and for establishing appropriate traffic controls. Finally, performance monitoring is required to check if the GoS objectives have been achieved, and it is used as a feedback for the overall process.

1.6 Traffic concepts and grade of service

The costs of a telephone system can be divided into costs which depend upon the number of subscribers and costs which depend upon the amount of traffic in the system.

[Figure 1.9 consists of four blocks: Traffic demand characterisation (QoS requirements, traffic modelling, traffic measurement, traffic forecasting), Grade of Service objectives (end-to-end GoS objectives, allocation to network components), Traffic controls and dimensioning (traffic controls, dimensioning), and Performance monitoring.]

Figure 1.9: Traffic engineering tasks.

The goal when planning a telecommunication system is to adjust the amount of equipment so that variations in the subscriber demand for calls can be satisfied without noticeable inconvenience, while the costs of the installations are as small as possible. The equipment must be used as efficiently as possible. Teletraffic engineering deals with optimization of the structure of the network and adjustment of the amount of equipment that depends upon the amount of traffic. In the following, some fundamental concepts are introduced, and some examples are given to show how the traffic behaves in real systems. All examples are from the telecommunication area.


1.7 Concept of traffic and traffic unit [erlang]

In teletraffic theory we usually use the word traffic to denote the traffic intensity, i.e. traffic per time unit. The term traffic comes from Italian and means business. According to ITU-T (1993 [39]) we have the following definition:

Definition of traffic intensity: The instantaneous traffic intensity in a pool of resources is the number of busy resources at a given instant of time.

Depending on the technology considered, the pool of resources corresponds to a group of servers, lines, circuits, channels, trunks, computers, etc. The statistical moments (mean value, variance) of the traffic intensity may be calculated for a given period of time T. For the average traffic intensity we get:

    Y(T) = (1/T) ∫_0^T n(t) dt ,        (1.1)

where n(t) denotes the number of occupied devices at the time t.

Carried traffic Y = Ac: This is called the traffic carried by the group of servers during the time interval T (Fig. 1.10). In applications, the term traffic intensity usually has the meaning of average traffic intensity.
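The time average in (1.1) can be computed directly when n(t) is recorded as a piecewise constant function. The sketch below is a minimal illustration; the function name and the event-list format are our own conventions:

```python
def carried_traffic(events, T):
    """Average traffic intensity Y(T) = (1/T) * integral_0^T n(t) dt for a
    piecewise constant n(t).  `events` is a list of (time, n) pairs meaning
    that the number of busy devices becomes n at `time`; the first pair must
    have time 0, and the times must be increasing and lie in [0, T]."""
    total = 0.0
    for (t0, n), (t1, _) in zip(events, events[1:] + [(T, 0)]):
        total += n * (t1 - t0)  # area under n(t) on [t0, t1)
    return total / T

# Three devices busy during the first half hour, one during the second:
Y = carried_traffic([(0.0, 3), (0.5, 1)], T=1.0)  # 2.0 erlang
```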

[Figure 1.10 shows n(t), the number of busy channels (0–40), as a fluctuating curve around its mean value over a measuring period T.]

Figure 1.10: The carried traffic (intensity) (= number of busy devices) as a function n(t) of time. For dimensioning purposes we use the average traffic intensity during a period of time T (mean).


The ITU-T recommendation also specifies that the unit usually used for traffic intensity is the erlang (symbol E). This name was given to the traffic unit in 1946 by CCIF (predecessor to CCITT and to ITU-T), in honor of the Danish mathematician A. K. Erlang (1878–1929), who was the founder of traffic theory in telephony. The unit is dimensionless. The total traffic carried in a time period T is a traffic volume, and it is measured in erlang-hours (Eh), or if more convenient, for example, erlang-seconds. It is equal to the sum of all holding times inside the time period. The carried traffic can never exceed the number of channels (lines). A channel can at most carry one erlang. The revenue is often proportional to the carried traffic.

Offered traffic A: In mathematical models we use the concept of offered traffic. This is the traffic which would be carried if no calls were rejected due to lack of capacity, i.e. if the number of servers were unlimited. The offered traffic is a theoretical quantity which cannot be measured. It can only be estimated from the carried traffic. Theoretically we operate with two parameters:

1. the call intensity λ, which is the mean number of calls offered per time unit, and
2. the mean service time s.

The offered traffic is equal to:

    A = λ · s .        (1.2)

The parameters should be specified using the same time unit. From this equation it is seen that the unit of traffic has no dimension. This definition assumes, in accordance with the definition above, that there is an unlimited number of servers. The offered traffic should be independent of the actual system.

Lost or rejected traffic A_l: The difference between offered traffic and carried traffic is equal to the rejected traffic. The lost traffic can be reduced by increasing the capacity of the system.

Example 1.7.1: Definition of traffic
If the call intensity is 5 calls per minute, and the mean service time is 3 minutes, then the offered traffic is equal to 15 erlang. The offered traffic volume during a working day of 8 hours is then 120 erlang-hours. □

Example 1.7.2: Traffic units
Earlier, other units of traffic have been used. The most common ones, which may still be seen, are:

SM = Speech-minutes: 1 SM = 1/60 Eh.
CCS = Hundred call seconds: 1 CCS = 1/36 Eh. This unit is based on a mean holding time of 100 seconds and can still be found, e.g. in the USA.
EBHC = Equated busy hour calls: 1 EBHC = 1/30 Eh. This unit is based on a mean holding time of 120 seconds.

We will soon realize that the erlang is the natural unit for traffic intensity because this unit is independent of the time unit chosen. □
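The definitions above can be checked numerically. The sketch below redoes Example 1.7.1 and converts the resulting traffic volume into the older units of Example 1.7.2; the function and constant names are our own:

```python
# Conversion factors from Example 1.7.2, all expressed in erlang-hours:
EH_PER_SM = 1 / 60     # speech-minute
EH_PER_CCS = 1 / 36    # hundred call seconds
EH_PER_EBHC = 1 / 30   # equated busy hour call

def offered_traffic(call_rate, mean_service_time):
    """A = lambda * s; dimensionless when both use the same time unit."""
    return call_rate * mean_service_time

A = offered_traffic(5, 3)            # 15 erlang, as in Example 1.7.1
volume_eh = A * 8                    # 120 erlang-hours over an 8-hour day
volume_ccs = volume_eh / EH_PER_CCS  # the same volume expressed in CCS
```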

The offered traffic is a theoretical parameter used in mathematical models and simulation models. However, the only measurable parameter in reality is the carried traffic, which depends upon the actual system.

Data transmission and multi-rate traffic: In data transmission systems we do not talk about service times but about transmission demands. A job can for example be a data packet of s units (e.g. bits or bytes). The capacity of the system φ, the data signalling speed, is measured in units per second (e.g. bits/second). The service time for such a job, i.e. the transmission time, is s/φ time units (e.g. seconds), i.e. dependent upon φ. If on the average λ jobs are served per time unit, then the utilization ρ of the system is:

    ρ = λ · s / φ .        (1.3)

The observed utilization will always be inside the interval 0 ≤ ρ ≤ 1, as it is the traffic carried by one channel.

Usually we split the total capacity up into units called Basic Bandwidth Units (BBU) or channels. We choose this unit so that all services require an integral number of bandwidth units, for example 64 kbps. If calls of type j simultaneously occupy d_j channels, then the offered traffic expressed in number of channels becomes:

    A_ch = Σ_{j=0}^{N} λ_j · s_j · d_j        [erlang-channels] ,        (1.4)

where N is the number of traffic types, and λ_j and s_j denote the arrival rate, respectively the mean holding time, of traffic type j. The offered traffic in number of connections for one service is A_j = λ_j · s_j [erlang-connections]. Usually the carried traffic is measured in number of channels, as it often is a mix of different connections with different bandwidths.

Potential traffic: In planning and demand models we use the term potential traffic, which is equal to the offered traffic if there are no limitations on the use of the phone due to cost or availability (always a free telephone available).
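Equations (1.3) and (1.4) translate directly into code. The sketch below uses made-up service parameters purely as an illustration; the function names are our own:

```python
def utilization(lam, s, phi):
    """rho = lambda * s / phi  (eq. 1.3): jobs of mean size s units arrive
    at rate lambda on a link transmitting phi units per time unit."""
    return lam * s / phi

def offered_traffic_channels(streams):
    """A = sum over j of lambda_j * s_j * d_j, in erlang-channels (eq. 1.4).
    `streams` is a list of (arrival rate, mean holding time, channels)."""
    return sum(lam * s * d for lam, s, d in streams)

# Hypothetical mix of a 1-channel service and a 6-channel service:
rho = utilization(lam=2.0, s=1000, phi=8000)                     # 0.25
A_ch = offered_traffic_channels([(0.5, 2.0, 1), (0.1, 5.0, 6)])  # 4.0
```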


1.8 Traffic variations and the concept busy hour

The teletraffic varies according to the activity in the society. The traffic is generated by single sources, subscribers, who normally make telephone calls independently of each other. An investigation of the traffic variations shows that it is partly of a stochastic nature, partly of a deterministic nature. Fig. 1.11 shows the variation in the number of calls on a Monday morning. By comparing several days we can recognize a deterministic curve with superposed stochastic variations.
Figure 1.11: Number of calls per minute to a switching center on a Monday morning. The regular 24-hour variations are superposed by stochastic variations. (Iversen, 1973 [40]).

During a 24-hour period the traffic typically looks as shown in Fig. 1.12. The first peak is caused by business subscribers at the beginning of the working hours in the morning, possibly calls postponed from the day before. Around 12 o'clock it is lunch, and in the afternoon there is a certain activity again. Around 19 o'clock there is a new peak caused by private calls and a possible reduction in rates after 19.30. The mutual size of the peaks depends among other things upon whether the exchange is located in a typical residential area or in a business area. They also depend upon which type of traffic we look at. If we consider the traffic between Europe and the USA, most calls take place in the late afternoon because of the time difference. The variations can further be split up into variation in call intensity and variation in service time. Fig. 1.13 shows variations in the mean service time for occupation times of trunk lines during 24 hours. During business hours it is constant, just below 3 minutes. In the evening


it is more than 4 minutes and during the night very small, about one minute.

Figure 1.12: The mean number of calls per minute to a switching center taken as an average for periods of 15 minutes during 10 working days (Monday–Friday). At the time of the measurements there were no reduced rates outside working hours (Iversen, 1973 [40]).

Busy Hour: The highest traffic does not occur at the same time every day. We define the concept time consistent busy hour, TCBH, as those 60 minutes (fixed with an accuracy of 15 minutes) which during a long period on the average have the highest traffic. On some days the traffic during the busiest hour may therefore be larger than the time consistent busy hour traffic, but on the average over many days, the busy hour traffic will be the largest. We also distinguish between the busy hour for the total telecommunication system, for an exchange, and for a single group of servers, e.g. a trunk group. Certain trunk groups may have a busy hour outside the busy hour of the exchange (for example trunk groups for calls to the USA). In practice, for measurements of traffic, dimensioning, and other aspects, it is an advantage to have a predetermined well-defined busy hour.

The deterministic variations in teletraffic can be divided into:

- 24-hour variation (Fig. 1.12 and 1.13).
- Weekly variations (Fig. 1.14). Normally the highest traffic is on Monday, then Friday, Tuesday, Wednesday and Thursday. Saturday and especially Sunday have a low traffic level. A useful rule of thumb is that the 24-hour traffic is equal to 8 times the busy


hour traffic (Fig. 1.14), i.e. only one third of the capacity in the telephone system is utilized. This is the reason for reducing rates outside busy hours.

- Variation during a year. There is high traffic in the beginning of a month, after a festival season, and after a quarterly period begins. If Easter is around the 1st of April, then we observe a very high traffic just after the holidays. The traffic increases year by year due to the development of technology and economics in the society.
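The busy hour fixed with 15-minute accuracy can be located by a simple sliding-window search over quarter-hour traffic values. The sketch below uses a made-up toy profile; for the time consistent busy hour, the daily profiles would first be averaged over a long period:

```python
def busy_hour(quarters):
    """Locate the busiest 60-minute window with 15-minute accuracy: the
    four consecutive quarter-hour traffic values with the largest sum.
    Returns the index of the first quarter and the mean traffic of the
    window."""
    best_start, best_sum = 0, float("-inf")
    for i in range(len(quarters) - 3):
        s = sum(quarters[i:i + 4])
        if s > best_sum:
            best_start, best_sum = i, s
    return best_start, best_sum / 4.0

# Toy quarter-hour profile; the busiest hour is quarters 3..6 (9+12+11+10):
start, mean_traffic = busy_hour([2, 3, 5, 9, 12, 11, 10, 6, 4])
```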


Figure 1.13: Mean holding time for trunk lines as a function of time of day (Iversen, 1973 [40]). The measurements exclude local calls.

Above we have considered traditional voice traffic. Other services and traffic types have other patterns of variation. In Fig. 1.15 we show the variation in the number of calls per 15 minutes to a modem pool for dial-up Internet calls. The mean holding time as a function of the time of day is shown in Fig. 1.16. Cellular mobile telephony has a different profile with maximum late in the afternoon, and the mean holding time is shorter than for wire-line calls. By integrating various forms of traffic in the same network we may therefore obtain a higher utilization of the resources.

Figure 1.14: Number of calls per 24 hours to a switching center (left scale). The number of calls during the busy hour is shown for comparison on the right scale. We notice that the 24-hour traffic is approximately 8 times the busy hour traffic. This factor is called the traffic concentration (Iversen, 1973 [40]).

1.9 The blocking concept

The telephone system is not dimensioned so that all subscribers can be connected at the same time. Several subscribers share the expensive equipment of the exchanges. The concentration takes place from the subscriber toward the exchange. The equipment which is separate for each subscriber should be made as cheap as possible. In general we expect that about 5–8 % of the subscribers should be able to make calls at the same time during the busy hour (each phone is used 10–16 % of the time). For international calls less than 1 % of the subscribers make calls simultaneously. Thus we exploit statistical multiplexing advantages. Every subscriber should feel that he has unrestricted access to all resources of the telecommunication system, even if he is sharing them with many others. The amount of equipment is limited for economical reasons, and it is therefore possible that a subscriber cannot establish a call but has to wait or is blocked (the subscriber for example experiences a busy tone and has to repeat the call attempt). Both are inconvenient to the

Figure 1.15: Number of calls per 15 minutes to a modem pool of Tele Danmark Internet. Tuesday 1999.01.19.

subscriber. Depending on how the system operates, we distinguish between loss systems (e.g. trunk groups) and waiting time systems (e.g. common control units and computer systems), or a combination of these if the number of waiting positions (buffer) is limited.

The inconvenience in loss systems due to insufficient equipment can be expressed in three ways (network performance measures):

Call congestion B: The fraction of all call attempts which observe all servers busy (the user-perceived quality-of-service, the nuisance the subscriber experiences).

Time congestion E: The fraction of time when all servers are busy. Time congestion can for example be measured at the exchange (= virtual congestion).

Traffic congestion C: The fraction of offered traffic which is not carried, possibly despite several attempts.

These quantitative measures can for example be used to establish dimensioning principles for trunk groups.
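The three measures translate directly into code when the underlying counts are available; all figures in the sketch below are made up for illustration, and the function name is our own:

```python
def congestion_measures(attempts, blocked, all_busy_time, total_time,
                        offered, carried):
    """The three (in general different) congestion measures of a loss system."""
    B = blocked / attempts              # call congestion
    E = all_busy_time / total_time      # time congestion
    C = (offered - carried) / offered   # traffic congestion
    return B, E, C

# Made-up busy-hour figures for a trunk group (times in seconds):
B, E, C = congestion_measures(attempts=1000, blocked=12,
                              all_busy_time=45, total_time=3600,
                              offered=10.0, carried=9.9)
# B = 1.2 %, E = 1.25 %, C = 1.0 %
```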


Figure 1.16: Mean holding time in seconds as a function of time of day for calls arriving inside the period considered. Tele Denmark Internet. Tuesday 1999.01.19.

When congestion is small, it is possible with a good approximation to handle congestion in the different parts of the system as being mutually independent. The congestion for a certain route is then approximately equal to the sum of the congestion on each link of the route. During the busy hour we normally allow a congestion of a few per cent between two subscribers. The systems cannot manage every situation without inconvenience for the subscribers. The purpose of teletraffic theory is to find relations between quality of service and cost of equipment. The existing equipment should be able to work at maximum capacity during abnormal traffic situations (e.g. a burst of phone calls), i.e. the equipment should keep working and make useful connections.

The inconvenience in delay systems (queueing systems) is measured as a waiting time. Not only the mean waiting time is of interest but also the distribution of the waiting time. It could be that a small delay does not mean any inconvenience, so there may not be a linear relation between inconvenience and waiting time. In telephone systems we often define an upper limit for the acceptable waiting time. If this limit is exceeded, then a time-out of the connection will take place (enforced disconnection).

1.10. TRAFFIC GENERATION AND SUBSCRIBERS REACTION Outcome A-error: Blocking and technical errors: B no answer before A hangs up: B-busy: B-answer = conversation: No conversation: Icountry Dcountry 15 5 10 10 60 % % % % % 20 35 5 20 20 % % % % %

29

40 %

80 %

Table 1.1: Typical outcome of a large number of call attempts during Busy Hour for Industrialized countries, respectively Developing countries.

1.10 Traffic generation and subscribers' reaction

If subscriber A wants to speak to subscriber B, this will either result in a successful call or a failed call attempt. In the latter case A may repeat the call attempt later and thus initiate a series of several call attempts which fail. Call statistics typically look as shown in Table 1.1, where we have grouped the errors into a few typical classes. We notice that the only errors which can be directly influenced by the operator are technical errors and blocking, and this class is usually small, a few per cent during the Busy Hour. Furthermore, we notice that the number of calls which experience B-busy depends on the number of A-errors and technical errors & blocking. Therefore, the statistics in Table 1.1 are misleading. To

[Figure 1.17 shows the successive outcomes of a call attempt: A-error (conditional probability pe); technical errors and blocking (ps); B no answer (pn); B-busy (pb); B-answer (pa).]

Figure 1.17: When calculating the probabilities of events for a certain number of call attempts we have to consider the conditional probabilities.

obtain the relevant probabilities, which are shown in Fig. 1.17, we shall only consider the calls arriving at the considered stage when calculating probabilities. Applying the notation


        I-country                  D-country

pe = 15/100 = 15 %         pe = 20/100 = 20 %
ps =  5/85  =  6 %         ps = 35/80  = 44 %
pn = 10/80  = 13 %         pn =  5/45  = 11 %
pb = 10/80  = 13 %         pb = 20/45  = 44 %
pa = 60/80  = 75 %         pa = 20/45  = 44 %

Table 1.2: The relevant probabilities for the individual outcomes of the call attempts, calculated from Table 1.1.

in Fig. 1.17 we find the following probabilities for a call attempt (assuming independence):

    p{A-error}                   = pe                          (1.5)
    p{Congestion & tech. errors} = (1 − pe) · ps               (1.6)
    p{B no answer}               = (1 − pe) · (1 − ps) · pn    (1.7)
    p{B-busy}                    = (1 − pe) · (1 − ps) · pb    (1.8)
    p{B-answer}                  = (1 − pe) · (1 − ps) · pa    (1.9)
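Equations (1.5)–(1.9) are easily evaluated. The sketch below reproduces the I-country column of Table 1.2; the function name is our own:

```python
def outcome_probabilities(pe, ps, pn, pb, pa):
    """Unconditional outcome probabilities (eqs. 1.5-1.9) from the
    conditional branch probabilities of Fig. 1.17 (assuming independence)."""
    return {
        "A-error":     pe,
        "congestion":  (1 - pe) * ps,
        "B no answer": (1 - pe) * (1 - ps) * pn,
        "B-busy":      (1 - pe) * (1 - ps) * pb,
        "B-answer":    (1 - pe) * (1 - ps) * pa,
    }

# I-country conditional probabilities from Table 1.2:
p = outcome_probabilities(pe=15/100, ps=5/85, pn=10/80, pb=10/80, pa=60/80)
# p["B-answer"] = 0.85 * (80/85) * (60/80) = 0.60, matching Table 1.1
```

Since the five outcomes are exhaustive, the probabilities sum to one.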

Using the numbers from Table 1.1 we find the figures shown in Table 1.2. From this we notice that even if the A-subscriber behaves correctly and the telephone system is perfect, then only 75 %, respectively 45 %, of the call attempts result in a conversation.

We distinguish between the service time, which includes the time from the instant a server is occupied until the server becomes idle again (e.g. both call set-up, duration of the conversation, and termination of the call), and the conversation duration, which is the time period where A talks with B. Because of failed call attempts, the mean service time is often less than the mean call duration if we include all call attempts. Fig. 1.18 shows an example with observed holding times.
Example 1.10.1: Mean holding times
We assume that the mean holding time of calls which are interrupted before B-answer (A-error, congestion, technical errors) is 20 seconds, and that the mean holding time for calls arriving at the called party (B-subscriber) (no answer, B-busy, B-answer) is 180 seconds. The mean holding time at the A-subscriber then becomes, using the figures in Table 1.1:

    I-country:  ma = (20/100) · 20 + (80/100) · 180 = 148 seconds
    D-country:  ma = (55/100) · 20 + (45/100) · 180 =  92 seconds
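The weighted average used in the example can be sketched as follows; the function name is our own:

```python
def mean_holding_time_A(p_interrupted, t_interrupted, t_reaching_B):
    """Mean holding time at the A-subscriber: a weighted average of calls
    interrupted before B-answer and calls reaching the B-subscriber."""
    return p_interrupted * t_interrupted + (1 - p_interrupted) * t_reaching_B

ma_i = mean_holding_time_A(0.20, 20, 180)  # I-country: 148 seconds
ma_d = mean_holding_time_A(0.55, 20, 180)  # D-country:  92 seconds
```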


[Figure 1.18 shows an observed frequency function on a logarithmic scale over 0–40 minutes, based on 135 164 observations, together with fitted exponential and hyper-exponential distributions.]
Figure 1.18: Frequency function of holding times of trunks in a local switching center.
We thus notice that the mean holding time increases from 148 s, respectively 92 s, at the A-subscriber to 180 s at the B-subscriber. If one call intent implies more repeated call attempts (cf. Example 1.4), then the carried traffic may become larger than the offered traffic. □

If we know the mean service time of the individual phases of a call attempt, then we can calculate the proportion of the call attempts which are lost during the individual phases. This can be exploited to analyse electro-mechanical systems by using SPC systems to collect data. Each call attempt loads the controlling groups in the exchange (e.g. a computer or a control unit) with an almost constant load, whereas the load of the network is proportional to the duration of the call. Because of this, many failed call attempts are able to overload the control devices while free capacity is still available in the network. Repeated call attempts are not necessarily caused by errors in the telephone system. They can also be caused by e.g. a busy B-subscriber. This problem was treated for the first time by Fr. Johannsen in Busy


published in 1908 (Johannsen, 1908 [60]). Fig. 1.19 and Fig. 1.20 show some examples from measurements of subscriber behaviour. Studies of the subscribers' response to, for example, busy tone are of vital importance for the dimensioning of telephone systems. In fact, human factors (= subscriber behaviour) are a part of teletraffic theory which is of great interest.

During Busy Hour a = 10–16 % of the subscribers are busy using the line for incoming or outgoing calls. Therefore, we would expect that a % of the call attempts experience B-busy. This is, however, wrong, because the subscribers have different traffic levels. Some subscribers receive no incoming call attempts, whereas others receive more than the average. In fact, the most busy subscribers on the average receive the most call attempts. A-subscribers have an inclination to choose the most busy B-subscribers, and in practice we observe that the probability of B-busy is about 4·a if we take no measures. For residential subscribers it is difficult to improve the situation. But for large business subscribers having a PAX (= PABX) (Private Automatic eXchange) with a group number, a sufficient number of lines will eliminate B-busy. Therefore, in industrialized countries the total probability of B-busy becomes of the same order of size as a (Table 1.1). For D-countries the traffic is more focused towards individual numbers, and often the business subscribers do not benefit from group numbering; therefore we observe a high probability of B-busy (40–50 %).

At the Ordrup measurements approximately 4 % of the calls were repeated call attempts. If a subscriber experiences blocking or B-busy, there is a 70 % probability that the call is repeated within an hour. See Table 1.3, which covers 75.389 observed call intents in total.

Attempt no.   Success   Continue   Give up   p{success}   Persistence
     1         56.935     7.512     10.942      0.76          0.41
     2          3.252     2.378      1.882      0.43          0.56
     3            925       951        502      0.39          0.66
     4            293       476        182      0.31          0.72
     5            139       248         89      0.29          0.74
    >5            134         –        114
 Total         61.678               13.711

Table 1.3: An observed sequence of repeated call attempts (national calls, Ordrup measurements). The probability of success decreases with the number of call attempts, while the persistence increases. Here a repeated call attempt is a call repeated to the same B-subscriber within one hour.

A classical example of the importance of the subscribers' reaction was seen when the Valby gasworks (in Copenhagen) exploded in the mid-sixties. The subscribers in Copenhagen generated a lot of call attempts and occupied the controlling devices in the exchanges in the area of


Copenhagen. Then subscribers from Esbjerg (in the western part of Denmark) phoning to Copenhagen had to wait because the dialled numbers could not be transferred to Copenhagen immediately. Therefore the equipment in Esbjerg was kept busy by waiting, and subscribers making local calls in Esbjerg could not complete their call attempts. This is an example of how an overload situation spreads like a chain reaction throughout the network. The tighter a network is dimensioned, the more likely it is that a chain reaction will occur. An exchange should always be constructed so that it keeps working at full capacity during overload situations. In a modern exchange we have the possibility of giving priority to a group of subscribers in an emergency situation, e.g. doctors and police (preferential traffic). In computer systems similar conditions will influence the performance. For example, if it is difficult to get a free entry to a terminal system, the user will be disposed not to log off, but to keep the terminal, i.e. increase the service time. If a system works as a waiting-time system, then the mean waiting time will increase with the third order of the mean service time (Chap. 10). Under these conditions the system will be saturated very fast, i.e. be overloaded. In countries with an overloaded telecommunication network (e.g. developing countries) a big percentage of the call attempts will be repeated call attempts.
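Returning to Table 1.3, the two derived columns follow from the raw counts by simple bookkeeping: p{success} is the successful fraction of the attempts made, and the persistence is the retried fraction of the failed attempts. The sketch below recomputes them (counts are those of the table; the variable names are our own):

```python
# Observed counts per attempt number (Table 1.3, Ordrup measurements):
# at each attempt a call intent succeeds, continues (is retried), or gives up.
success = [56935, 3252, 925, 293, 139]
cont    = [ 7512, 2378, 951, 476, 248]
give_up = [10942, 1882, 502, 182,  89]

for n, (s, c, g) in enumerate(zip(success, cont, give_up), start=1):
    attempts = s + c + g              # call attempts observed at attempt n
    p_success = s / attempts          # fraction of the attempts that succeed
    persistence = c / (c + g)         # retried fraction of the failed attempts
    print(f"attempt {n}: p(success)={p_success:.2f}, "
          f"persistence={persistence:.2f}")
```

Note how the computed values reproduce the decreasing p{success} and increasing persistence of the table.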
(Figure 1.19 is a histogram on a logarithmic scale, based on n = 138 543 observations; the time axis runs from 0 to 180 seconds.)

Figure 1.19: Histogram for the time interval from occupation of register (dial tone) to B-answer for completed calls. The mean value is 13.60 s.

Example 1.10.2: Repeated call attempts
This is an example of a simple model of repeated call attempts. Let us introduce the following notation:

b = persistence ,  (1.10)
B = p{non-completion} .  (1.11)

The persistence b is the probability that an unsuccessful call attempt is repeated, and p{completion} = 1 − B is the probability that the B-subscriber (called party) answers. For one call intent we get the following probabilities:

Attempt no.   p{B-answer}         p{Continue}     p{Give up}
    0         (1−B)               1               B(1−b)
    1         (1−B)(Bb)           Bb              B(1−b)(Bb)
    2         (1−B)(Bb)^2         (Bb)^2          B(1−b)(Bb)^2
    3         (1−B)(Bb)^3         (Bb)^3          B(1−b)(Bb)^3
    4         (1−B)(Bb)^4         (Bb)^4          B(1−b)(Bb)^4
   ...        ...                 ...             ...
  Total       (1−B)/(1−Bb)        1/(1−Bb)        B(1−b)/(1−Bb)

Table 1.4: A single call intent results in a series of call attempts. The number of attempts per call intent is geometrically distributed.
p{completion} = (1 − B) / (1 − Bb) ,  (1.12)

p{non-completion} = B (1 − b) / (1 − Bb) ,  (1.13)

Number of call attempts per call intent = 1 / (1 − Bb) .  (1.14)

Let us assume the following mean holding times:

s_c = mean holding time of completed calls ,
s_n = 0 = mean holding time of non-completed calls .

Then we get the following relations between the traffic carried Y and the traffic offered A:

Y = A · (1 − B) / (1 − Bb) ,  (1.15)

A = Y · (1 − Bb) / (1 − B) .  (1.16)

This is similar to the result given in ITU-T Rec. E.502. □


In practice, the persistence b and the probability of completion 1 − B will depend on the number of times the call has been repeated (cf. Table 1.3). If the unsuccessful calls have a positive mean holding time, then the carried traffic may become larger than the offered traffic.
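The model of Example 1.10.2 is easy to evaluate numerically. The sketch below uses illustrative values B = 0.2 and b = 0.7 (not taken from any measurement) and checks that the completion and give-up probabilities of (1.12)–(1.13) sum to one:

```python
def retry_model(B, b):
    """Geometric model of repeated call attempts (Example 1.10.2).
    B = p{non-completion} of a single attempt, b = persistence."""
    p_complete = (1 - B) / (1 - B * b)        # (1.12)
    p_give_up = B * (1 - b) / (1 - B * b)     # (1.13)
    attempts = 1 / (1 - B * b)                # (1.14): attempts per intent
    return p_complete, p_give_up, attempts

pc, pg, n_att = retry_model(B=0.2, b=0.7)
assert abs(pc + pg - 1.0) < 1e-12   # each intent ends in success or give-up

# Carried traffic Y from offered traffic A via (1.15); A is illustrative:
A = 100.0
Y = A * (1 - 0.2) / (1 - 0.2 * 0.7)
```

With these values a call intent generates about 1.16 attempts on average, and the carried traffic is about 93 % of the offered traffic.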
(Figure 1.20 is a histogram based on n = 7 653 observations; the time axis runs from 0 to 300 seconds.)

Figure 1.20: Histogram for all call attempts repeated within 5 minutes, when the called party is busy.

1.11 Introduction to Grade-of-Service = GoS

The following section is based on (Veirø, 2001 [115]). A network operator must decide what services the network should deliver to the end user and the level of service quality the user should experience. This is true for any telecommunications network, whether it is circuit- or packet-switched, wired or wireless, optical or copper-based, and it is independent of the transmission technology applied. Further decisions to be made may include the type and layout of the network infrastructure for supporting the services, and the choice of techniques to be used for handling the information transport. These further decisions may be different, depending on whether the operator is already present in the market, or is starting service from a greenfield situation (i.e. a situation where there is no legacy network in place to consider). As for the Quality of Service (QoS) concept, it is defined in ITU-T Recommendation E.800 as: The collective effect of service performance, which determines the degree of satisfaction of a user of the service. The QoS consists of a set of parameters that pertain to the traffic


performance of the network, but in addition to this, the QoS also includes many other concepts. They can be summarized as:

- service support performance
- service operability performance
- serveability performance
- service security performance

The detailed definitions of these terms are given in Recommendation E.800. The better service quality an operator chooses to offer to the end user, the better is the chance to win customers and to keep current customers. But a better service quality also means that the network will become more expensive to install, and this normally also has a bearing on the price of the service. The choice of a particular service quality therefore depends on political decisions by the operator and will not be treated further here.

When the quality decision is in place the planning of the network proper can start. This includes the choice of a transport network technology and its topology, as well as reliability aspects in case one or more network elements malfunction. It is also at this stage that the routing strategy has to be determined. This is the point in time where it is necessary to consider the Grade of Service (GoS). This is defined in ITU-T Recommendation E.600 as: A number of traffic engineering variables to provide a measure of adequacy of a group of resources under specified conditions. These grade of service variables may be probability of loss, dial tone delay, etc. To this definition the recommendation furthermore supplies the following notes: The parameter values assigned for grade of service variables are called grade of service standards. The values of grade of service parameters achieved under actual conditions are called grade of service results.

The key point in the determination of the GoS standards is to apportion individual values to each network element in such a way that the target end-to-end QoS is obtained.

1.11.1 Comparison of GoS and QoS

It is not an easy task to find the GoS standards needed to support a certain QoS. This is due to the fact that the GoS and QoS concepts have different viewpoints. While the QoS views the situation from the customer's point of view, the GoS takes the network point of view. We illustrate this by the following example:


Example 1.11.1: Say we want to fix the end-to-end call blocking probability at 1 % in a telephone network. A customer will interpret this quantity to mean that he will be able to reach his destination in 99 out of 100 cases on average. Fixing this design target, the operator apportions a certain blocking probability to each of the network elements which a reference call may meet. In order to make sure that the target is met, the network has to be monitored. But this monitoring normally takes place all over the network, and it can only be ensured that the network on average can meet the target values. If we consider a particular access line its GoS target may well be exceeded, but the average for all access lines does indeed meet the target. □
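The apportioning task in this example can be sketched in code. Assuming (hypothetically) that a reference call traverses independent network elements with blocking probabilities B_i, the end-to-end blocking is 1 − ∏(1 − B_i), and the per-element values must keep this within the 1 % target; the stage values below are illustrative, not taken from any recommendation:

```python
def end_to_end_blocking(element_blocking):
    """End-to-end blocking probability for a reference call traversing
    independent network elements with the given blocking probabilities."""
    p_through = 1.0
    for B in element_blocking:
        p_through *= (1.0 - B)     # call survives this element
    return 1.0 - p_through

# Hypothetical apportionment over five network stages (illustrative values):
stages = [0.002, 0.001, 0.004, 0.001, 0.002]
B_e2e = end_to_end_blocking(stages)
assert B_e2e <= 0.01   # meets the 1 % end-to-end design target
```

For small B_i the end-to-end blocking is approximately the sum of the per-element values, which is why apportionment is often done additively.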

GoS pertains to parameters that can be verified through network performance (the ability of a network or network portion to provide the functions related to communications between users), and the parameters hold only on average for the network. Even if we restrict ourselves to the part of the QoS that is traffic related, the example illustrates that even if the GoS target is fulfilled, this need not be the case for the QoS.

1.11.2 Special features of QoS

Due to the different views taken by GoS and QoS, a solution to the problem has been proposed: the service level agreement (SLA). This is really a contract between a user and a network operator. In this contract it is defined what the parameters in question really mean. It is supposed to be done in such a way that it will be understood in the same manner by the customer and the network operator. Furthermore, the SLA defines what is to happen in case the terms of the contract are violated. Some operators have chosen to issue an SLA for all customer relationships they have (at least in principle), while others only do it for big customers who know what the terms in the SLA really mean.

1.11.3 Network performance

As mentioned above, network performance concerns the ability of a network or network portion to provide the functions related to communications between users. In order to establish how a certain network performs, it is necessary to perform measurements, and the measurements have to cover all the aspects of the performance parameters (i.e. trafficability, dependability, transmission and charging). Furthermore, the network performance aspects in the GoS concept pertain only to the factors related to trafficability performance in the QoS terminology. But in the QoS world network performance also includes the following concepts:

- dependability,
- transmission performance, and
- charging correctness.

It is not enough just to perform the measurements. It is also necessary to have an organization that can do the proper surveillance and take appropriate action when problems arise. As network complexity keeps growing, so does the number of parameters that need to be considered. This means that automated tools will be required to make it easier to get an overview of the most important parameters.

1.11.4 Reference configurations

In order to obtain an overview of the network under consideration, it is often useful to produce a so-called reference configuration. This consists of one or more simplified drawings of the path a call (or connection) can take in the network, including appropriate reference points where the interfaces between entities are defined. In some cases the reference points define an interface between two operators, and it is therefore important to watch carefully what happens at this point. From a GoS perspective the importance of the reference configuration is the partitioning of the GoS as described below. Consider a telephone network with terminals, subscriber switches and transit switches. In the example we ignore the signalling network. Suppose the call can be routed in one of three ways:

1. terminal – subscriber switch – terminal
This is drawn as a reference configuration shown in Fig. 1.21.

(The figure shows: terminal – S – terminal, with reference point A on each side.)

Figure 1.21: Reference configuration for case 1.

2. terminal – subscriber switch – transit switch – subscriber switch – terminal
This is drawn as a reference configuration shown in Fig. 1.22.

3. terminal – subscriber switch – transit switch – transit switch – subscriber switch – terminal
This is drawn as a reference configuration shown in Fig. 1.23.


(The figure shows: terminal – S – T – S – terminal, with reference points A and B.)

Figure 1.22: Reference configuration for case 2.

(The figure shows: terminal – S – T – T – S – terminal, with reference points A, B and C.)
Figure 1.23: Reference configuration for case 3.

Based on a given set of QoS requirements, a set of GoS parameters is selected and defined on an end-to-end basis within the network boundary, for each major service category provided by the network. The selected GoS parameters are specified in such a way that the GoS can be derived at well-defined reference points, i.e. traffic significant points. This allows the partitioning of end-to-end GoS objectives to obtain the GoS objectives for each network stage or component, on the basis of some well-defined reference connections. As defined in ITU-T Recommendation E.600, for traffic engineering purposes, a connection is an association of resources providing means for communication between two or more devices in, or attached to, a telecommunication network. There can be different types of connections, as the number and types of resources in a connection may vary. Therefore, the concept of a reference connection is used to identify representative cases of the different types of connections without involving the specifics of their actual realizations by different physical means. Typically, different network segments are involved in the path of a connection. For example, a connection may be local, national, or international. The purposes of reference connections are to clarify and specify traffic performance issues at various interfaces between different network domains. Each domain may consist of one or more service provider networks. Recommendation I.380/Y.1540 defines performance parameters for IP packet transfer; its companion Draft Recommendation Y.1541 specifies the corresponding allocations and performance objectives. Recommendation E.651 specifies reference connections for IP-access networks. Other reference connections are to be specified. From the QoS objectives, a set of end-to-end GoS parameters and their objectives for different reference connections are derived.
For example, end-to-end connection blocking probability and end-to-end packet transfer delay may be relevant GoS parameters. The GoS objectives should be specified with reference to traffic load conditions, such as normal and high load conditions. The end-to-end GoS objectives are then apportioned to the individual resource components of the reference connections for dimensioning purposes. In an operational network, to ensure that the GoS objectives are met, performance measurements and performance monitoring are required. In IP-based networks, performance allocation is usually done on a cloud, i.e. the set of routers and links under a single (or collaborative) jurisdictional responsibility, such as an Internet Service Provider (ISP). A cloud is connected to another cloud by a link, i.e. a gateway router in one cloud is connected via a link to a gateway router in another cloud. End-to-end communication between hosts is conducted on a path consisting of a sequence of clouds and interconnecting links. Such a sequence is referred to as a hypothetical reference path for performance allocation purposes.

Chapter 2

Time interval modeling


Time intervals are non-negative, and therefore they can be expressed by non-negative random variables. Time intervals of interest are for example service times, durations of congestion (blocking periods, busy periods), waiting times, holding times, CPU busy times, inter-arrival times, etc. We denote these time durations as life-times and their distribution functions as time distributions. In this chapter we review the basic theory of probability and statistics relevant to teletraffic theory and illustrate the theory by the (negative) exponential distribution and generalizations of it. In principle, we may use any distribution function with non-negative values to model a life-time. However, the exponential distribution has some unique characteristics which qualify it for both analytical and practical uses. The exponential distribution plays a key role among all life-time distributions. Its most fundamental characteristic is the Markov property, which means lack of memory or lack of age: the future is independent of the past. We can combine life-times by combining them in series (Sec. 2.3.1), in parallel (Sec. 2.3.2), or in a combination of the two (Sec. 2.3.3). In this way we get more parameters available for fitting the distribution to real observations. A hypo-exponential or steep distribution corresponds to a set of stochastically independent exponential distributions in series (Fig. 2.4), and a hyper-exponential or flat distribution corresponds to exponential distributions in parallel (Fig. 2.6). This structure corresponds naturally to the shaping of traffic processes in telecommunication and data networks. By combining steep and flat distributions we get Cox distributions, which can approximate any distribution with any degree of accuracy. By using a graphical approach, phase diagrams, we are able to derive decomposition properties of importance for later applications. We also mention a few other time distributions employed in teletraffic theory (Sec. 2.4), and finally we review some observations of real life-times in Sec. 2.5.


2.1 Distribution functions

A time interval can be described by a random variable T . This is characterized by a cumulative distribution function (cdf) F (t), which is the probability that the duration of a time interval is less than or equal to t: F (t) = p{T ≤ t}. In general, we assume that the derivative of F (t), the probability density function (pdf) f (t), exists:

dF (t) = f (t) dt = p{t < T ≤ t + dt} ,  t ≥ 0 .  (2.1)

As we only consider non-negative time intervals we have:

F (t) = 0 ,  t < 0 ,
F (t) = ∫_{0−}^{t} dF (u) = ∫_{0−}^{t} f (u) du ,  0 ≤ t < ∞ .  (2.2)

In (2.2) we integrate from 0− to keep record of a possible discontinuity at t = 0. When we consider waiting-time systems, there is often a positive probability of having waiting times equal to zero, i.e. F (0) > 0. On the other hand, when we look at inter-arrival times, we usually assume F (0) = 0 (Sec. 3.2.3). The probability density function is also called the frequency function. Sometimes it is easier to consider the complementary distribution function, also called the survival distribution function:

F^c (t) = 1 − F (t) .

Analytically, many calculations can be carried out for any time distribution.

2.1.1 Exponential distribution

This is the most fundamental distribution in teletraffic theory, where it is called the negative exponential distribution. It is characterized by a single parameter, the intensity or rate λ:

F (t) = 1 − e^{−λt} ,  λ > 0 ,  t ≥ 0 ,  (2.3)

f (t) = λ e^{−λt} ,  λ > 0 ,  t ≥ 0 .  (2.4)

The phase diagram of the exponential distribution is shown in Fig. 2.1. The density function is shown in Fig. 2.5 for k = 1.
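A minimal numerical sketch of (2.3)–(2.4), with λ = 0.5 chosen arbitrarily: the cdf and pdf, plus a finite-difference check that f is indeed the derivative of F :

```python
import math

lam = 0.5   # intensity (rate); arbitrary illustrative value

def F(t):
    """cdf of the exponential distribution, (2.3)."""
    return 1.0 - math.exp(-lam * t)

def f(t):
    """pdf of the exponential distribution, (2.4)."""
    return lam * math.exp(-lam * t)

# f should be the derivative of F; compare with a central difference:
t, h = 2.0, 1e-6
dF = (F(t + h) - F(t - h)) / (2 * h)
assert abs(dF - f(t)) < 1e-8
```

Note that F (0) = 0 and f (0) = λ, so the distribution starts at the origin with density equal to the rate.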

Figure 2.1: Phase diagram of an exponentially distributed time interval, shown as a box with the intensity λ. The box means that a customer arriving at the box is delayed an exponentially distributed time interval before leaving the box.

2.2 Characteristics of distributions

Time intervals are always non-negative, and therefore their distribution functions have some useful properties.

2.2.1 Moments

The i'th non-central moment, which usually is called the i'th moment, is defined by:

m_i = E{T^i} = ∫₀^∞ t^i f (t) dt ,  i = 1, 2, . . .  (2.5)

So far we assume that all moments exist. In general we always assume that at least the mean value exists. A distribution is uniquely defined by its moments. For life-time distributions we have the following relation, called Palm's identity:

m_i = ∫₀^∞ t^i f (t) dt = ∫₀^∞ i t^{i−1} {1 − F (t)} dt ,  i = 1, 2, . . .  (2.6)

It was first proved by (Palm, 1943 [92]) as follows:

∫_{t=0}^{∞} i t^{i−1} {1 − F (t)} dt = ∫_{t=0}^{∞} i t^{i−1} ( ∫_{x=t}^{∞} f (x) dx ) dt

  = ∫_{x=0}^{∞} ( ∫_{t=0}^{x} i t^{i−1} dt ) f (x) dx

  = ∫_{x=0}^{∞} x^i f (x) dx

  = m_i .

The order of integration can be inverted because the integrand is non-negative. Thus we have proved (2.6). In particular we find the first two moments:

m₁ = ∫₀^∞ t f (t) dt = ∫₀^∞ {1 − F (t)} dt = E{T} ,  (2.7)

m₂ = ∫₀^∞ t² f (t) dt = ∫₀^∞ 2 t {1 − F (t)} dt .  (2.8)
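Palm's identity (2.6) is easy to verify numerically. The sketch below uses the exponential distribution with λ = 1, where m_i = i! is known, and evaluates both forms of the integral with a simple midpoint rule (truncating the tail at t = 40, which is negligible here):

```python
import math

lam = 1.0
f = lambda t: lam * math.exp(-lam * t)   # pdf of the exponential distribution
Fc = lambda t: math.exp(-lam * t)        # complementary cdf, 1 - F(t)

def integrate(g, a, b, n=50000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

for i in (1, 2, 3):
    lhs = integrate(lambda t: t**i * f(t), 0.0, 40.0)             # (2.5)
    rhs = integrate(lambda t: i * t**(i - 1) * Fc(t), 0.0, 40.0)  # (2.6)
    assert abs(lhs - math.factorial(i)) < 1e-3   # m_i = i! when lambda = 1
    assert abs(rhs - math.factorial(i)) < 1e-3
```

Both integrals agree with the closed-form moments, illustrating that the pdf form and the complementary-distribution form are interchangeable.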

The i'th central moment is defined as:

E{(T − m₁)^i} = ∫₀^∞ (t − m₁)^i f (t) dt .  (2.9)

In advanced teletraffic theory we also use cumulants, binomial moments, and factorial moments. They are uniquely related to the above moments, but have some advantages when dealing with special problems. For characterizing random variables we use the following parameters related to the first two moments:

Mean value or expected value. This is the first moment:

m₁ = E{T} .  (2.10)

Variance. This is the 2nd central moment:

σ² = E{(T − m₁)²} .

It is easy to show that:

σ² = m₂ − m₁²  or  m₂ = σ² + m₁² .  (2.11)

Standard deviation. This is the square root of the variance and thus equal to σ.

Coefficient of variation. This is a normalized measure of the irregularity (dispersion) of a distribution, defined as the ratio between the standard deviation and the mean value:

CV = σ / m₁ .  (2.12)

This quantity is dimensionless, and later we use it to characterize discrete distributions (state probabilities).

Palm's form factor ε is another measure of irregularity, defined as follows:

ε = m₂ / m₁² = 1 + (σ / m₁)² ≥ 1 .  (2.13)

The form factor, as well as CV = σ/m₁, is independent of the choice of time scale, and they will appear in many formulæ in the following. The larger the form factor, the more irregular is the time distribution. The form factor has its minimum value, equal to one, for constant time intervals (σ = 0). It is used to characterize continuous distributions, for example time intervals.

Median. Sometimes we also use the median to characterize a distribution. The median is the value of t for which F (t) = 0.5. Thus half the observations will be smaller than the median and half will be bigger. For a symmetric probability density function the mean equals the median. For the exponential distribution the median is 0.6931 times the mean value.

Percentiles. More generally, we characterize a distribution by percentiles (quantiles or fractiles): if p{T ≤ t_p} = p, then t_p is the 100·p % percentile. The median is the 50 % percentile.

When estimating parameters of a distribution from observations, we are usually satisfied by knowing the first two moments (m₁ and σ), as higher order moments require extremely many observations to obtain reliable estimates. Time distributions can also be characterized in other ways, for example by properties related to the traffic. We consider some important characteristics in the following sections.
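For the exponential distribution these characteristics take closed form. The sketch below (λ = 2 is an arbitrary choice) collects them and checks the relation between median and mean stated above:

```python
import math

lam = 2.0                       # arbitrary rate
m1 = 1.0 / lam                  # mean value
m2 = 2.0 / lam**2               # second moment
var = m2 - m1**2                # variance, (2.11)
cv = math.sqrt(var) / m1        # coefficient of variation, (2.12)
eps = m2 / m1**2                # Palm's form factor, (2.13)
median = math.log(2.0) / lam    # solves F(t) = 1/2

assert abs(cv - 1.0) < 1e-12             # CV = 1 for any exponential
assert abs(eps - 2.0) < 1e-12            # form factor equals 2
assert abs(median / m1 - 0.6931) < 1e-4  # median = 0.6931 * mean
```

The values CV = 1 and ε = 2 hold for every rate λ, which is why ε = 2 serves as the reference point separating steep and flat distributions later in the chapter.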

Example 2.2.1: Exponential distribution
The following integral is very useful:

∫ t e^{−λt} dt = − (e^{−λt} / λ²) · (λt + 1) .  (2.14)

For the exponential distribution (Sec. 2.1.1) we find:

m₁ = ∫₀^∞ t · λ e^{−λt} dt = 1/λ ,

m₂ = ∫₀^∞ t² · λ e^{−λt} dt = ∫₀^∞ 2 t e^{−λt} dt = 2/λ² ,

where the last equation is obtained by using (2.6). The gamma function is defined by:

Γ(n + 1) = ∫₀^∞ t^n e^{−t} dt = n!  (2.15)

If we replace t by λt, then we get the i'th moment (2.6) of the exponential distribution:

i'th moment:     m_i = i! / λ^i ,
Mean value:      m₁ = 1/λ ,
Second moment:   m₂ = 2/λ² ,
Variance:        σ² = 1/λ² ,
Form factor:     ε = 2 .  (2.16)

Example 2.2.2: Constant time interval
For a constant time interval of duration h we have: m_i = h^i .

2.2.2 Residual life-time

If an event b has occurred, i.e. p(b) > 0, then the probability that the event a also occurs is given by the conditional probability. This is denoted by p(a | b), the conditional probability of a, given b. Denoting the joint probability that both a and b take place by p(a ∩ b), we have:

p(a ∩ b) = p(b) · p(a | b) = p(a) · p(b | a) .

Under the assumption that p(b) > 0, we thus have:

p(a | b) = p(a ∩ b) / p(b) .  (2.17)

For time distributions we are interested in F (x + t | x), the distribution of the residual life-time t, given that a certain age x ≥ 0 has already been obtained. The random variable of the total life-time is T . Assuming p{T > x} > 0 and t ≥ 0 we get:

p{T > x + t | T > x} = p{(T > x + t) ∩ (T > x)} / p{T > x}
                     = p{T > x + t} / p{T > x}
                     = {1 − F (x + t)} / {1 − F (x)} ,

and thus:

F (x + t | x) = p{T ≤ x + t | T > x} = {F (x + t) − F (x)} / {1 − F (x)} ,  t ≥ 0 , x ≥ 0 ,  (2.18)

f (t + x | x) = f (x + t) / {1 − F (x)} .  (2.19)

Fig. 2.2 illustrates these calculations graphically. The mean value m_{1,r}(x) of the residual life-time can be written as (cf. (2.7)):

m_{1,r}(x) = (1 / {1 − F (x)}) · ∫_{t=0}^{∞} {1 − F (x + t)} dt ,  x ≥ 0 .  (2.20)

The death rate at time x, i.e. the probability that the considered life-time terminates within the interval (x, x + dx), under the condition that the age x has been achieved, is obtained from (2.18) by letting t = dx:

μ(x) dx = {F (x + dx) − F (x)} / {1 − F (x)}
        = dF (x) / {1 − F (x)}
        = f (x) dx / {1 − F (x)} .  (2.21)

The conditional density function μ(x) is also called the hazard function. If this function is given, then F (x) may be obtained as the solution to the following differential equation:

dF (x)/dx + μ(x) F (x) = μ(x) .  (2.22)

Assuming F (0) = 0 we get the solution:

F (t) = 1 − exp{ − ∫₀^t μ(u) du } ,  (2.23)

f (t) = μ(t) · exp{ − ∫₀^t μ(u) du } .  (2.24)

The death rate μ(t) is constant if and only if the life-time is exponentially distributed. This is a fundamental characteristic of the exponential distribution, which is called the Markovian property (lack of memory or age): the probability of terminating at time t is independent of the actual age t.
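Equation (2.23) can be illustrated numerically: given a hazard function μ(u), accumulate its integral and reconstruct F (t). With a constant hazard the result coincides with the exponential distribution, as stated above. (The value μ = 0.7 and the step count below are arbitrary choices in this sketch.)

```python
import math

mu_const = 0.7   # constant death rate (hazard); arbitrary illustrative value

def F_from_hazard(mu, t, n=10000):
    """Reconstruct F(t) from a hazard function mu(u) via (2.23):
    F(t) = 1 - exp(-integral of mu over [0, t]), using a midpoint rule."""
    h = t / n
    acc = sum(mu((k + 0.5) * h) for k in range(n)) * h
    return 1.0 - math.exp(-acc)

t = 3.0
F_rec = F_from_hazard(lambda u: mu_const, t)
F_exp = 1.0 - math.exp(-mu_const * t)   # exponential cdf with rate mu_const
assert abs(F_rec - F_exp) < 1e-9
```

A non-constant hazard (e.g. increasing with u) would reproduce a steeper-than-exponential distribution in the same way.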

(Figure 2.2 shows two density curves, f(t) and the conditional density f(t+3 | 3), for t from 0 to 14.)

Figure 2.2: The density function of the residual life-time conditioned by a given age x (2.19). The example is based on a Weibull distribution We(2,5) (2.98), where x = 3 and F (3) = 0.3023.

One would expect that the mean residual life-time m_{1,r}(x) decreases for increasing x, so that the expected residual life-time decreases when the age x increases. Often this is not the case. For an exponential distribution with form factor ε = 2 we have m_{1,r} = m₁. For steep distributions (1 ≤ ε < 2) we have m_{1,r} < m₁ (Sec. 2.3.1), whereas for flat distributions (2 < ε < ∞) we have m_{1,r} > m₁ (Sec. 2.3.2).
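The claim for flat distributions is easy to check by simulation. The sketch below samples a two-branch hyper-exponential distribution (branch rates chosen arbitrarily) and measures the mean residual life-time beyond an age x = 1; it comes out above the overall mean:

```python
import random

random.seed(1)

def hyper_exp():
    """Sample a two-branch hyper-exponential (flat) distribution:
    rate 2.0 with probability 1/2, rate 0.25 with probability 1/2."""
    lam = 2.0 if random.random() < 0.5 else 0.25
    return random.expovariate(lam)

samples = [hyper_exp() for _ in range(100000)]
m1 = sum(samples) / len(samples)              # overall mean, about 2.25

x = 1.0                                       # achieved age
residuals = [t - x for t in samples if t > x]
m1_res = sum(residuals) / len(residuals)      # mean residual life beyond x

assert m1_res > m1   # flat distribution: residual mean exceeds overall mean
```

Intuitively, surviving past age x makes it likely that the sample came from the slow branch, which inflates the remaining life-time.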

Example 2.2.3: Exponential distribution
We assume the duration of telephone calls is exponentially distributed. The distribution of the residual time is then independent of the actual duration of the conversation, and it is equal to the distribution of the total life-time (2.19):

f (t + x | x) = λ e^{−λ(t+x)} / e^{−λx} = λ e^{−λt} = f (t) .

If we remove the probability mass of the interval (0, x) from the density function and normalize the residual mass in (x, ∞) to unity, then the new density function becomes congruent with the original density function. The only continuous distribution function having this property is the exponential distribution, whereas the geometric distribution is the only discrete distribution having this property. Therefore, the mean value of the residual life-time is m_{1,r} = m₁, and the probability of observing a life-time in the interval (t, t + dt), given that it occurs after t, is given by (2.21):

p{t < T ≤ t + dt | T > t} = f (t) dt / {1 − F (t)} = λ dt .  (2.25)

Thus it depends only upon λ and dt, and it is independent of the actual age t. An example where this property is not valid is shown in Fig. 2.2 for the Weibull distribution (2.98) when k ≠ 1. For k = 1 the Weibull distribution becomes identical with the exponential distribution. □

Example 2.2.4: Waiting-time distribution
Let us consider a queueing system with an infinite queue where no customers are blocked. The waiting time distribution W_s(t) for a random customer usually has a positive probability mass (atom) at t = 0, because some of the customers are served immediately without any delay. We thus have W_s(0) > 0. The waiting time distribution W₊(t) for customers having positive waiting times then becomes (2.18):

W₊(t) = W (t | t > 0) = {W_s(t) − W_s(0)} / {1 − W_s(0)} ,  (2.26)

or, if we denote the probability of a positive waiting time by D = 1 − W_s(0) (probability of delay):

D · {1 − W₊(t)} = 1 − W_s(t) .

For the probability density function (pdf) we have (2.19):

D · w₊(t) = w_s(t) .  (2.27)

For mean values we get:

D · w = W ,  (2.28)

where the mean waiting time for all customers is denoted by W , and the mean waiting time for the delayed customers is denoted by w. These formulæ are valid for any queueing system with an infinite queue. □
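The relation D · w = W in (2.28) holds for any set of waiting times, as a quick sketch with made-up data shows:

```python
# Made-up waiting times for 8 customers; zeros are customers served at once.
waits = [0.0, 2.5, 0.0, 1.0, 0.0, 0.0, 4.5, 0.5]

W = sum(waits) / len(waits)            # mean wait over all customers
delayed = [t for t in waits if t > 0]
D = len(delayed) / len(waits)          # probability of delay, 1 - Ws(0)
w = sum(delayed) / len(delayed)        # mean wait of the delayed customers

assert abs(D * w - W) < 1e-12          # (2.28): D * w = W
```

The identity is pure bookkeeping: the total waiting time is contributed only by the delayed customers, so averaging it over all customers or over the delayed fraction differs exactly by the factor D.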


2.2.3 Load from holding times of duration less than x

So far we have attached the same importance to all life-times independently of their duration. The importance of a life-time is often proportional to its duration, for example when we consider the load of a queueing system, charging of CPU times, telephone conversations, etc. If we allocate to a life-time a weight factor proportional to its duration, then the average weight of all time intervals is equal to the mean value:

m₁ = ∫₀^∞ t f (t) dt ,  (2.29)

where f (t) dt is the probability of an observation within the interval (t, t + dt), and t is the weight of this observation. We are interested in calculating the proportion of the mean value which is due to contributions from life-times of duration less than x:

( ∫₀^x t f (t) dt ) / m₁ .  (2.30)

Often relatively few service times make up a relatively large proportion of the total load. From Fig. 2.3 we see that if the form factor is 5, then 75 % of the service times contribute only 30 % of the total load (Vilfredo Pareto's rule). This fact can be utilized to give priority to short tasks without delaying long tasks very much (Chap. 10).
Example 2.2.5: Exponential distribution
For exponentially distributed jobs with mean value m₁ = 1/λ we find the relative load from jobs of duration t ≤ x from (2.30), using (2.14):

( ∫₀^x t f (t) dt ) / m₁ = λ ∫₀^x t · λ e^{−λt} dt = 1 − e^{−λx} (λx + 1) .  (2.31)

This result is used later when we look at the shortest-job-first queueing discipline (Sec. 10.6.4). □
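Formula (2.31) can be checked by simulation (λ = 1 and x = 1 are arbitrary choices): the relative load from jobs shorter than x is the sum of their durations divided by the total duration of all jobs.

```python
import math
import random

random.seed(7)
lam, x = 1.0, 1.0   # arbitrary rate and duration threshold

jobs = [random.expovariate(lam) for _ in range(200000)]
rel_load = sum(t for t in jobs if t <= x) / sum(jobs)   # empirical (2.30)

analytic = 1.0 - math.exp(-lam * x) * (lam * x + 1.0)   # (2.31)
assert abs(rel_load - analytic) < 0.02
```

With these values about 63 % of the jobs are shorter than x, yet they carry only about 26 % of the load, illustrating Pareto's rule quantitatively.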

2.2.4

Forward recurrence time

The residual life-time from a random point of time is called the forward recurrence time. In this section we shall derive some formul of importance for applications. To formulate the

problem we consider an example. We wish to investigate the life-time distribution of cars, and we ask car-owners chosen at random about the age of their car. As the point of time is chosen at random, the probability of choosing a certain car is proportional to the total life-time of that car, and the distribution of the remaining residual life-time will be identical with the distribution of the already achieved life-time. By sampling in this way the probability of choosing a car is proportional to its life-time, i.e. we will preferentially choose cars with long life-times (length-biased sampling). The probability of choosing a car having a total life-time x is given by (cf. the derivation of (2.30)):

    x f(x) dx / m1 .

As we consider a random point of time, the remaining life-time will be uniformly distributed in (0, x]:

    f(t | x) = 1/x ,   0 < t ≤ x .

[Figure 2.3: Example of the relative traffic load from holding times shorter than a given value, given by the percentile of the holding time distribution (2.30). Here ε = 2 corresponds to an exponential distribution and ε = 5 corresponds to a Pareto distribution. We note that the 10% largest holding times contribute with 33%, respectively 47%, of the load (cf. customer averages and time averages in Chap. 3).]

The probability density function (pdf) of the remaining life-time at a random point of time becomes:

    v(t) = ∫_t^∞ (1/x) · (x f(x) / m1) dx ,

    v(t) = {1 − F(t)} / m1 ,                                   (2.32)

where F(t) is the distribution function of the total life-time and m1 its mean value. By applying the identity (2.5), we note that the ith moment of v(t) is given by the (i+1)th moment of f(t):

    mi,v = ∫_0^∞ t^i v(t) dt
         = (1/m1) ∫_0^∞ t^i {1 − F(t)} dt
         = (1/(i+1)) · (1/m1) ∫_0^∞ (i+1) t^i {1 − F(t)} dt ,

    mi,v = (1/(i+1)) · (1/m1) · m_{i+1,f} .                    (2.33)

In particular, we obtain the mean value:

    m1,v = (m1/2) · ε ,                                        (2.34)

where m1 is the mean value and ε the form factor of the life-time distribution considered. These formulae are also valid for discrete time distributions.
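Relation (2.34) can be illustrated by Monte Carlo: sample an interval with probability proportional to its length, then a uniform point inside it. A sketch assuming Erlang-3 life-times (m1 = 3, ε = 4/3, so the mean residual life-time should be ε·m1/2 = 2):

```python
import random

# Length-biased sampling check of (2.34): mean forward recurrence time
# equals eps * m1 / 2 for the life-time distribution.
random.seed(2)
k, lam, n = 3, 1.0, 100_000           # Erlang-3 life-times (example choice)
lifetimes = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

m1 = sum(lifetimes) / n
m2 = sum(t * t for t in lifetimes) / n
eps = m2 / (m1 * m1)                  # form factor

# choose intervals length-biased, then a uniform residual inside each
chosen = random.choices(lifetimes, weights=lifetimes, k=n)
m1_v = sum(x * random.random() for x in chosen) / n

print(m1_v, eps * m1 / 2)
assert abs(m1_v - eps * m1 / 2) < 0.05
```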

Example 2.2.6: Exponential distribution
For the exponential distribution, using mi = i!/λ^i, we get:

    mi,v = (1/(i+1)) · (1/m1) · (i+1)!/λ^{i+1} = i!/λ^i = mi .

In particular, we have m1,v = m1: the mean remaining life-time from a random point of time equals the mean value of the life-time distribution itself, because the exponential distribution has no memory. Furthermore, the mean value of the life-time already spent is also m1, as we choose a random point of time. Thus the mean value of the total life-time becomes 2 m1. □

2.2.5 Distribution of the jth largest of k random variables

Let us assume that the k random variables {T1, T2, ..., Tk} are independent and identically distributed with distribution function F(t). The distribution of the jth largest variable is then given by:

    p{jth largest ≤ t} = Σ_{i=0}^{j−1} C(k,i) {1 − F(t)}^i F(t)^{k−i}

                       = 1 − Σ_{i=j}^{k} C(k,i) {1 − F(t)}^i F(t)^{k−i} ,    (2.35)

as at most j − 1 variables may be larger than t (but they may eventually all be less than t). Here C(k,i) denotes the binomial coefficient, and the right-hand side is obtained using the binomial theorem:

    (a + b)^n = Σ_{i=0}^{n} C(n,i) a^i b^{n−i} .               (2.36)

The smallest one (or kth largest, j = k) has the distribution function:

    Fmin(t) = 1 − {1 − F(t)}^k ,                               (2.37)

and the largest one (j = 1) has the distribution function:

    Fmax(t) = F(t)^k .                                         (2.38)

If the random variables have individual distribution functions Fi(t), we get an expression more complex than (2.35). For the smallest and the largest we get:

    Fmin(t) = 1 − Π_{i=1}^{k} {1 − Fi(t)} ,                    (2.39)

    Fmax(t) = Π_{i=1}^{k} Fi(t) .                              (2.40)
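Formulae (2.37)–(2.38) are easy to check empirically. A sketch with k i.i.d. uniform(0,1) variables, for which F(t) = t:

```python
import random

# Empirical check of F_min (2.37) and F_max (2.38) for k iid uniforms.
random.seed(3)
k, t, n = 4, 0.3, 100_000
hits_min = hits_max = 0
for _ in range(n):
    sample = [random.random() for _ in range(k)]
    hits_min += min(sample) <= t
    hits_max += max(sample) <= t

print(hits_min / n, 1 - (1 - t) ** k)   # P{min <= t} = 1 - (1-t)^k
print(hits_max / n, t ** k)             # P{max <= t} = t^k
assert abs(hits_min / n - (1 - (1 - t) ** k)) < 0.01
assert abs(hits_max / n - t ** k) < 0.01
```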

Example 2.2.7: Minimum of N exponentially distributed random variables
We assume that two random variables T1 and T2 are mutually independent and exponentially distributed with intensities λ1 and λ2, respectively. A new random variable T is defined as:

    T = min{T1, T2} .

The distribution function of T is (2.37):

    p{T ≤ t} = 1 − e^{−(λ1+λ2) t} .                            (2.41)

Thus this distribution function is also an exponential distribution, with intensity (λ1 + λ2). Under the assumption that the first (smallest) event happens within the time interval (t, t + dt), the probability that it is the random variable T1 which is realized first (i.e. takes place in this interval while the other takes place later) is given by:

    p{T1 < T2 | t} = p{t < T1 ≤ t + dt} · p{T2 > t} / p{t < T ≤ t + dt}

                   = (λ1 e^{−λ1 t} dt · e^{−λ2 t}) / ((λ1 + λ2) e^{−(λ1+λ2) t} dt)

                   = λ1 / (λ1 + λ2) ,                          (2.42)

i.e. independent of t. These results are easily generalized to N variables and make up the basic principle of the simulation technique called the roulette method, a Monte Carlo simulation methodology. □
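Both properties of Example 2.2.7 — the rate of the minimum and the time-independent winning probability — can be confirmed by simulation, which is exactly what the roulette method exploits. A sketch with arbitrary rates:

```python
import random

# Check (2.41) and (2.42): min(T1, T2) is exponential with rate l1 + l2,
# and T1 is the smaller one with probability l1 / (l1 + l2).
random.seed(4)
l1, l2, n = 1.0, 3.0, 200_000          # arbitrary example intensities
t_sum = wins = 0
for _ in range(n):
    t1, t2 = random.expovariate(l1), random.expovariate(l2)
    t_sum += min(t1, t2)
    wins += t1 < t2

print(t_sum / n, 1 / (l1 + l2))        # mean of the minimum
print(wins / n, l1 / (l1 + l2))        # P{T1 < T2}
assert abs(t_sum / n - 1 / (l1 + l2)) < 0.01
assert abs(wins / n - l1 / (l1 + l2)) < 0.01
```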

2.3 Combination of random variables

Combining exponentially distributed time intervals in series, we get a class of distributions called Erlang distributions (Sec. 2.3.1). Combining them in parallel, we obtain hyper-exponential distributions (Sec. 2.3.2). Combining exponential distributions both in series and in parallel, possibly with feedback, we obtain phase-type distributions, which form a very general class of distributions. One important subclass of the phase-type distributions is the class of Cox distributions (Sec. 2.3.3). We note that an arbitrary distribution can be approximated by a Cox distribution, which can be used in analytical models in a relatively simple way.

2.3.1 Random variables in series

Linking k independent time intervals in series corresponds to the addition of k independent random variables, i.e. to convolution of the random variables.
If we denote the mean value and the variance of the ith time interval by m1,i and σi², respectively, then the sum of the random variables has the following mean value and variance:

    m1 = Σ_{i=1}^{k} m1,i ,                                    (2.43)

    σ² = Σ_{i=1}^{k} σi² .                                     (2.44)

In general, we should add the so-called cumulants, and the first three cumulants are identical with the first three central moments. The density function f(t) of the sum is obtained by convolution:

    f(t) = f1(t) ⊛ f2(t) ⊛ ··· ⊛ fk(t) ,

where ⊛ is the convolution operator:

    f12(t) = f1(t) ⊛ f2(t) = ∫_0^t f1(x) f2(t − x) dx .        (2.45)

Example 2.3.1: Non-homogeneous Erlang-2 distribution
We consider two exponentially distributed independent time intervals T1 and T2 with intensities λ1 and λ2 ≠ λ1, respectively. The sum T12 = T1 + T2 is a random variable, and its probability density function is obtained by convolution:

    f12(t) dt = p(t < T12 ≤ t + dt) ,

    f12(t) = ∫_0^t f1(x) f2(t − x) dx

           = ∫_0^t λ1 e^{−λ1 x} λ2 e^{−λ2 (t−x)} dx

           = λ1 λ2 e^{−λ2 t} ∫_0^t e^{−(λ1−λ2) x} dx

           = (λ1 λ2 / (λ1 − λ2)) e^{−λ2 t} ∫_0^t (λ1 − λ2) e^{−(λ1−λ2) x} dx

           = (λ1 λ2 / (λ1 − λ2)) e^{−λ2 t} {1 − e^{−(λ1−λ2) t}}

           = (λ1 λ2 / (λ1 − λ2)) {e^{−λ2 t} − e^{−λ1 t}} ,   λ1 ≠ λ2 .

For the case λ1 = λ2 we get an Erlang-2 distribution, which is considered in the following. □
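The resulting density can be checked numerically: it must integrate to one and have mean 1/λ1 + 1/λ2 (the sum of the two phase means). A sketch using a simple Riemann sum:

```python
import math

# Numerical sanity check of the convolution result in Example 2.3.1.
l1, l2 = 2.0, 0.5                      # arbitrary example intensities, l1 != l2

def f12(t):
    return l1 * l2 / (l1 - l2) * (math.exp(-l2 * t) - math.exp(-l1 * t))

dt, T = 1e-4, 60.0                     # step size and truncation point
area = mean = 0.0
t = 0.0
while t < T:
    area += f12(t) * dt
    mean += t * f12(t) * dt
    t += dt

print(area, mean)
assert abs(area - 1.0) < 1e-3
assert abs(mean - (1 / l1 + 1 / l2)) < 1e-3
```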

Hypo-exponential or steep distributions
Steep distributions are also called hypo-exponential distributions or generalized Erlang distributions. They have a form factor in the interval 1 < ε ≤ 2. A distribution of this type is obtained by convolving k exponential distributions (Fig. 2.4).

[Figure 2.4: By combining k exponential distributions in series we get a steep distribution with form factor ε ≤ 2. If all k distributions are identical (λi = λ), then we get an Erlang-k distribution.]

Erlang-k distributions
We now consider the case where all k exponential distributions are identical. The distribution obtained, fk(t), is called the Erlang-k distribution, as it was widely used by A.K. Erlang. For k = 1 we of course get the exponential distribution. The distribution fk(t), k > 0, is obtained by convolving fk−1(t) and f1(t). If we assume that the expression (2.46) is valid for fk−1(t), then we have by convolution:
    fk(t) = ∫_0^t fk−1(t − x) f1(x) dx

          = ∫_0^t ( {λ(t−x)}^{k−2} / (k−2)! ) λ e^{−λ(t−x)} · λ e^{−λx} dx

          = ( λ^k e^{−λt} / (k−2)! ) ∫_0^t (t − x)^{k−2} dx ,

    fk(t) = ( (λt)^{k−1} / (k−1)! ) λ e^{−λt} ,   λ > 0 ,  t > 0 ,  k = 1, 2, ... .   (2.46)

As the expression is valid for k = 1, we have by induction shown that it is valid for any k. The Erlang-k distribution is, from a statistical point of view, a special gamma distribution. The cdf (cumulative distribution function) is obtained by repeated partial integration, or in a simple way as shown later (3.21):

    Fk(t) = 1 − Σ_{j=0}^{k−1} ( (λt)^j / j! ) e^{−λt} .        (2.47)

The following moments can be found by using (2.43) and (2.44):

    m1 = k/λ ,                                                 (2.48)

    σ² = k/λ² ,                                                (2.49)

    ε = m2/m1² = 1 + 1/k .                                     (2.50)

[Figure 2.5: Erlang-k distributions with mean value equal to one (density functions). The case k = 1 corresponds to an exponential distribution.]

The ith non-central moment is:

    mi = ( (i + k − 1)! / (k − 1)! ) · (1/λ^i) .               (2.51)

In particular, we have:

    m2 = k (k + 1) / λ² .

The mean residual life-time m1,r(x) will for x ≥ 0 be less than the mean value:

    m1,r(x) ≤ m1 ,   x ≥ 0 .                                   (2.52)

Using this distribution we have two parameters (λ, k) available to be estimated from observations. The mean value is often kept fixed. To study the influence of the parameter k, we normalize all Erlang-k distributions to the same mean value as the Erlang-1 distribution, i.e.

the exponential distribution with mean value m1 = 1/λ, by replacing t by k t or λ by k λ:

    fk(t) dt = ( (kλt)^{k−1} / (k−1)! ) e^{−kλt} kλ dt ,       (2.53)

    m1 = 1/λ ,                                                 (2.54)

    σ² = 1/(k λ²) ,                                            (2.55)

    ε = 1 + 1/k .                                              (2.56)

Notice that the form factor is independent of the time scale. The density function (2.53) is illustrated in Fig. 2.5 for different values of k with λ = 1. The case k = 1 corresponds to the exponential distribution. When k → ∞ we get a constant time interval (ε = 1). By solving fk'(t) = 0 we find the maximum of the density at:

    t = (k − 1)/(k λ) .                                        (2.57)

Steep distributions are named so because their distribution functions increase more quickly from 0 to 1 than the exponential distribution does.
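The Erlang-k moments (2.48)–(2.51) can be confirmed by simulating a sum of k exponential phases. A sketch with arbitrary k and λ:

```python
import random

# Simulate Erlang-k as a sum of k exponential phases with common rate lam
# and compare with the moments (2.48), (2.50) and (2.51).
random.seed(5)
k, lam, n = 4, 2.0, 200_000
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

m1 = sum(samples) / n
m2 = sum(t * t for t in samples) / n

print(m1, k / lam)                     # (2.48)
print(m2, k * (k + 1) / lam**2)        # second moment, cf. (2.51)
print(m2 / m1**2, 1 + 1 / k)           # form factor (2.50)
assert abs(m1 - k / lam) < 0.01
assert abs(m2 / m1**2 - (1 + 1 / k)) < 0.01
```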

2.3.2 Random variables in parallel

We combine k independent time intervals (random variables) by choosing the ith time interval with probability (weight factor) pi, where Σ_{i=1}^{k} pi = 1.

The resulting weighted sum of random variables is said to have a compound distribution. The jth (non-central) moment is obtained by weighting the jth (non-central) moments of the individual random variables:

    mj = Σ_i pi mj,i ,                                         (2.58)

where mj,i is the jth (non-central) moment of the distribution of the ith interval. The mean value becomes:

    m1 = Σ_i pi m1,i .                                         (2.59)

The second moment is:

    m2 = Σ_i pi m2,i ,

[Figure 2.6: By combining k exponential distributions in parallel, choosing branch number i with probability pi, we get a hyper-exponential distribution, which is a flat distribution (ε ≥ 2).]

and from this we get the variance:

    σ² = m2 − m1² = Σ_i pi (σi² + m1,i²) − m1² ,               (2.60)

where σi² is the variance of the ith distribution.

The distribution function is as follows:

    F(t) = Σ_i pi Fi(t) .                                      (2.61)

A similar formula is valid for the density function:

    f(t) = Σ_i pi fi(t) .
Hyper-exponential or flat distributions
The general distribution function is in this case a weighted sum of exponential distributions (a compound distribution) with form factor ε ≥ 2:

    F(t) = ∫_0^∞ (1 − e^{−λt}) dW(λ) ,   λ > 0 ,  t ≥ 0 ,      (2.62)

where the weight function W(λ) may be discrete or continuous (Stieltjes integral). This distribution class corresponds to a parallel combination of exponential distributions (Fig. 2.6). The density function is called completely monotone due to the alternating signs of its derivatives (Palm, 1957 [95]):

    (−1)^ν f^(ν)(t) ≥ 0 .                                      (2.63)

The mean residual life-time m1,r(x) is for all x ≥ 0 larger than the mean value:

    m1,r(x) ≥ m1 ,   x ≥ 0 .                                   (2.64)

Hyper-exponential distribution
In this case W(λ) is discrete. Suppose we have k exponential distributions with intensities λ1, λ2, ..., λk, and that W(λ) has the positive increments p1, p2, ..., pk, where

    Σ_{i=1}^{k} pi = 1 .                                       (2.65)

For all other values W(λ) is constant. In this case (2.62) becomes:

    F(t) = 1 − Σ_{i=1}^{k} pi e^{−λi t} ,   t ≥ 0 .            (2.66)

The mean value and form factor are obtained from (2.59) and (2.60) (σi = m1,i = 1/λi):

    m1 = Σ_{i=1}^{k} pi/λi ,                                   (2.67)

    ε = 2 ( Σ_{i=1}^{k} pi/λi² ) / ( Σ_{i=1}^{k} pi/λi )² ≥ 2 .   (2.68)

If k = 1, or if all λi are equal, then we get an exponential distribution. The distribution is called flat because its distribution function increases more slowly from 0 to 1 than the exponential distribution does. In practice it is difficult to estimate more than one or two parameters (typically mean and variance) from real observations. The most common case is n = 2 (p1 = p, p2 = 1 − p):

    F(t) = 1 − p e^{−λ1 t} − (1 − p) e^{−λ2 t} .               (2.69)

Statistical problems arise even when we have to estimate three parameters, so for practical applications we usually choose λi = 2 λ pi and thus reduce the number of parameters to two:

    F(t) = 1 − p e^{−2λp t} − (1 − p) e^{−2λ(1−p) t} .         (2.70)

The mean value and form factor (assuming 0 < p < 1) become:

    m1 = 1/λ ,

    ε = 1 / (2 p (1 − p)) ≥ 2 .                                (2.71)

For this choice of parameters the two branches have the same contribution to the mean value. Fig. 2.7 illustrates an example.
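The balanced parameterization (2.70) can be verified by simulation: whatever p is chosen, the mean stays 1/λ and the form factor becomes 1/(2p(1−p)). A sketch:

```python
import random

# Balanced two-phase hyper-exponential (2.70): check m1 and eps (2.71).
random.seed(7)
lam, p, n = 1.0, 0.2, 400_000           # arbitrary example parameters
samples = [random.expovariate(2 * lam * p if random.random() < p
                              else 2 * lam * (1 - p)) for _ in range(n)]

m1 = sum(samples) / n
m2 = sum(t * t for t in samples) / n
eps = m2 / m1**2

print(m1, 1 / lam)
print(eps, 1 / (2 * p * (1 - p)))
assert abs(m1 - 1 / lam) < 0.02
assert abs(eps - 1 / (2 * p * (1 - p))) < 0.2
```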

[Figure 2.7: Probability density function of holding times observed on lines in a local exchange during busy hours (57,055 observations, mean value 171.85 s, form factor 3.30).]

Pareto distribution and Palm's normal forms
W(λ) can also be a continuous distribution; this case was considered by Conny Palm (1943 [92]). In the most important case W(λ) is chosen to be gamma-distributed with form factor ε = 1 + λ0/λ, corresponding to μ = 1/λ0 and k = λ/λ0 in (2.53). We then get:

    dW(x) = ( (x/λ0)^{λ/λ0 − 1} / Γ(λ/λ0) ) e^{−x/λ0} dx/λ0 ,  (2.72)

and from (2.62) it can be shown that:

    F(t) = 1 − (1 + λ0 t)^{−(1 + λ/λ0)} .                      (2.73)

This distribution is called the Pareto distribution. With the above choice of parameters the mean value and form factor of F(t) become:

    m1 = 1/λ ,   ε = 2λ/(λ − λ0) ,   0 < λ0 < λ .              (2.74)

Note that the variance does not exist for λ ≤ λ0, and the distribution is then called heavy-tailed (Sec. 2.4). This model is called Palm's first normal form, and it has only two parameters (λ, λ0). As a special case, letting λ0 → 0, the gamma distribution (2.72) degenerates to a constant and (2.73) becomes an exponential distribution.

By weighting once more, using a gamma distribution with an additional parameter μ, the result is a time distribution with three parameters, which is called Palm's second normal form:

    F(t) = 1 − ( 1/(1 + λ0 t) ) { 1 + (μ/λ0) ln(1 + λ0 t) }^{−(1 + λ/μ)} ,   λ0 > 0 ,  μ > 0 ,  t ≥ 0 .   (2.75)

The Pareto distribution (2.73) is obtained from (2.75) by letting μ → 0 or λ0 → 0. If both λ0 and μ tend to zero, we get the exponential distribution. We return to the normal forms in Sec. 3.6.
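The Pareto form (2.73) can be sampled by inverse transform, and the sample mean should approach m1 = 1/λ when λ0 < λ, cf. (2.74). A sketch (the tail exponent α = 1 + λ/λ0 and the parameter values are assumptions of this example):

```python
import random

# Inverse-transform sampling of F(t) = 1 - (1 + l0*t)**(-(1 + l/l0))
# and a check of the mean m1 = 1/l from (2.74), valid for l0 < l.
random.seed(8)
l, l0, n = 2.0, 0.5, 1_000_000          # arbitrary example parameters
alpha = 1 + l / l0                      # tail exponent of the Pareto form

def draw():
    u = random.random()
    return ((1 - u) ** (-1 / alpha) - 1) / l0   # solves F(t) = u for t

m1 = sum(draw() for _ in range(n)) / n
print(m1, 1 / l)
assert abs(m1 - 1 / l) < 0.02
```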

2.3.3 Random variables in series and parallel

By combining exponential random variables both in series and in parallel we get an almost completely general class of distributions. By weak convergence it can be shown that in this way we can approximate any distribution function with any degree of accuracy. For the derivations it is useful first to consider the concept of a stochastic sum (random sum).

Stochastic sum
By a stochastic sum we understand the sum of a random number of random variables (Feller, 1950 [32]). Let us consider a trunk group without congestion, where the arrival process and the holding times are stochastically independent. If we consider a fixed time interval t, then the number of arrivals is a random variable N. In the following, N is characterized by:

    N:  density function p(i) = p{N = i} ,  i = 0, 1, 2, ... ,
        mean value m1,n ,
        variance σn² .                                         (2.76)

Arriving call number i has the holding time Ti. All Ti have the same distribution, and each arrival (request) contributes a certain number of time units (the holding time), which is a random variable characterized by:

    T:  density function f(t) = p(t < T ≤ t + dt) ,  t ≥ 0 ,
        mean value m1,t ,
        variance σt² .                                         (2.77)

The total traffic volume generated by all arrivals (requests) arriving within the considered time interval is then itself a random variable:

    ST = T1 + T2 + ··· + TN .                                  (2.78)

In the following we assume that Ti and N are stochastically independent, which is fulfilled when the congestion is zero. The derivations below are valid for both discrete and continuous random variables (summation is replaced by integration or vice versa). The stochastic sum becomes a combination of random variables in series and parallel, as shown in Fig. 2.8 and dealt with in Sec. 2.3. For

[Figure 2.8: A stochastic sum may be interpreted as a series/parallel combination of random variables.]

a given branch i we find (Fig. 2.8):

    m1,i = i · m1,t ,                                          (2.79)

    σi² = i · σt² ,                                            (2.80)

    m2,i = i σt² + (i m1,t)² .                                 (2.81)

By summation over all possible values (branches) i we get:

    m1,s = Σ_{i=1}^{∞} p(i) m1,i = Σ_{i=1}^{∞} p(i) · i · m1,t ,

    m1,s = m1,t m1,n ,                                         (2.82)

    m2,s = Σ_{i=1}^{∞} p(i) m2,i = Σ_{i=1}^{∞} p(i) { i σt² + (i m1,t)² } ,

    m2,s = m1,n σt² + m1,t² m2,n ,                             (2.83)

    σs² = m1,n σt² + m1,t² m2,n − (m1,t m1,n)² ,

    σs² = m1,n σt² + m1,t² σn² .                               (2.84)

We notice that there are two contributions to the total variance: one term because the number of calls is a random variable (σn²), and one term because the duration of the calls is a random variable (σt²).
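The moment formulae (2.82) and (2.84) can be illustrated with a Poisson-distributed number of calls (for which σn² = m1,n) and exponential holding times. A sketch:

```python
import math
import random

# Monte Carlo check of (2.82) and (2.84) for a stochastic sum.
random.seed(9)
mean_n, mu, runs = 5.0, 2.0, 100_000    # E[N] = 5, holding-time mean 1/mu

def poisson(mean):
    # Knuth's method: count uniforms until their product drops below e^-mean
    limit, k, prod = math.exp(-mean), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

sums = []
for _ in range(runs):
    sums.append(sum(random.expovariate(mu) for _ in range(poisson(mean_n))))

m1_s = sum(sums) / runs
var_s = sum(s * s for s in sums) / runs - m1_s**2

m1_th = mean_n / mu                                  # (2.82): m1,n * m1,t
var_th = mean_n / mu**2 + (1 / mu) ** 2 * mean_n     # (2.84), Poisson: var_n = mean_n

print(m1_s, m1_th)
print(var_s, var_th)
assert abs(m1_s - m1_th) < 0.05
assert abs(var_s - var_th) < 0.1
```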
Example 2.3.2: Special case 1: N = n = constant (m1,n = n, σn² = 0)

    m1,s = n · m1,t ,   σs² = n · σt² .                        (2.85)

This corresponds to counting the number of calls at the same time as we measure the traffic volume, so that we can estimate the mean holding time. □

Example 2.3.3: Special case 2: T = t = constant (m1,t = t, σt² = 0)

    m1,s = m1,n · t ,   σs² = t² · σn² .                       (2.86)

If we change the time scale from 1 to m1,t, then the mean value has to be multiplied by m1,t and the variance by m1,t². The mean value m1,t = 1 corresponds to counting the number of calls, and the variance/mean ratio becomes m1,t times bigger. □

Example 2.3.4: Stochastic sum
As a non-teletraffic example, N may denote the number of rain showers during one month and Ti the precipitation due to the ith shower; ST is then a random variable describing the total precipitation during the month. Likewise, N may for a given time interval denote the number of accidents registered by an insurance company and Ti the compensation for the ith accident; ST is then the total amount paid by the company for the period considered. □

The exponential distribution is the most important time distribution within teletraffic theory. This time distribution is dealt with in Sec. 2.1.1.

Cox distributions

[Figure 2.9: A Cox distribution is a generalized Erlang distribution having exponential distributions in both parallel and series. The phase diagram is equivalent to Fig. 2.10.]

[Figure 2.10: The phase diagram of a Cox distribution, cf. Fig. 2.9.]

By combining the steep and flat distributions we obtain a general class of distributions (phase-type distributions) which can be described by exponential phases in both series and parallel (e.g. a k × k matrix). To analyze a model with this kind of distributions, we can apply the theory of Markov processes, for which we have powerful tools such as the phase method. In the more general case we can allow for loops back between the phases.

We shall only consider Cox distributions, as shown in Fig. 2.9 (Cox, 1955 [18]). These also appear under the name of Branching Erlang distributions. The mean value and variance of this Cox distribution (Fig. 2.10) are found from the formulae in Sec. 2.3 for random variables in series and parallel, as shown in Fig. 2.9:

    m1 = Σ_{i=1}^{k} qi (1 − pi) Σ_{j=1}^{i} 1/λj ,            (2.87)

where

    qi = p0 · p1 · p2 ··· p_{i−1} .                            (2.88)

The term qi (1 − pi) is the probability of jumping out after the ith phase. It can be shown that the mean value can be expressed in the simple form:

    m1 = Σ_{i=1}^{k} qi/λi = Σ_{i=1}^{k} m1,i ,                (2.89)

where m1,i = qi/λi is the mean value related to the ith phase. The second moment becomes:

    m2 = Σ_{i=1}^{k} qi (1 − pi) m2,i

       = Σ_{i=1}^{k} qi (1 − pi) { Σ_{j=1}^{i} 1/λj² + ( Σ_{j=1}^{i} 1/λj )² } ,   (2.90)

where m2,i, obtained from (2.11), is the second moment of the first i phases in series (their variance plus their squared mean). It can be shown that this can be written as:

    m2 = 2 Σ_{i=1}^{k} (qi/λi) Σ_{j=1}^{i} 1/λj .              (2.91)

From this we get the variance (2.11): σ² = m2 − m1².

The addition of two Cox-distributed random variables yields another Cox-distributed variable, i.e. this class is closed under the operation of addition. The distribution function of a Cox distribution can be written as a sum of exponential functions:

    1 − F(t) = Σ_{i=1}^{k} ci e^{−λi t} ,   where  0 ≤ Σ_{i=1}^{k} ci ≤ 1  and  −∞ < ci < +∞ .   (2.92)
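The mean-value form (2.89) and the second-moment form (2.91) can be verified by simulating the phase diagram directly. A sketch with an arbitrary 3-phase example (p0 = 1, i.e. no initial bypass):

```python
import random

# Simulate a Cox distribution and compare with (2.89) and (2.91).
random.seed(10)
lams = [5.0, 3.0, 1.0]        # phase intensities (ordered, cf. Theorem 2.2)
ps = [1.0, 0.6, 0.4]          # p0, p1, p2: probability of entering next phase

qs, q = [], 1.0               # q_i = p0*p1*...*p_{i-1}, cf. (2.88)
for p in ps:
    q *= p
    qs.append(q)

m1_th = sum(q / l for q, l in zip(qs, lams))                              # (2.89)
m2_th = 2 * sum(qs[i] / lams[i] * sum(1 / lams[j] for j in range(i + 1))
                for i in range(len(lams)))                                # (2.91)

def draw():
    t, i = 0.0, 0
    while i < len(lams) and random.random() < ps[i]:
        t += random.expovariate(lams[i])
        i += 1
    return t

n = 300_000
samples = [draw() for _ in range(n)]
m1 = sum(samples) / n
m2 = sum(t * t for t in samples) / n

print(m1, m1_th)
print(m2, m2_th)
assert abs(m1 - m1_th) < 0.01
assert abs(m2 - m2_th) < 0.05
```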

Polynomial trial
The following properties are of importance for later applications. If we consider a point of time chosen at random within a Cox-distributed time interval, then this point is within phase i with probability:

    αi = m1,i / m1 ,   i = 1, 2, ..., k .                      (2.93)

If we repeat this experiment y times (independently), then the probability that phase i is observed yi times is given by the multinomial distribution (= polynomial distribution):

    p{y1, y2, ..., yk | y} = C(y; y1, y2, ..., yk) α1^{y1} α2^{y2} ··· αk^{yk} ,   (2.94)

where Σ_{i=1}^{k} yi = y and

    C(y; y1, y2, ..., yk) = y! / (y1! y2! ··· yk!)             (2.95)

is called the multinomial coefficient. By the lack of memory of the exponential distributions (phases) we have full information about the residual life-time when we know the number of the actual phase. By the multinomial theorem we have, summing over all possible states:

    (α1 + α2 + ··· + αk)^y = 1 = Σ_{Σ yi = y} C(y; y1, ..., yk) α1^{y1} α2^{y2} ··· αk^{yk} .   (2.96)

The multinomial theorem is also valid when Σ_i αi ≠ 1. It is a generalization of the binomial theorem (2.36).

Decomposition principles Phasediagrams are a useful tool for analyzing Cox distributions. The following is a fundamental characteristic of the exponential distribution (Iversen & Nielsen, 1985 [46]): Theorem 2.1 An exponential distribution with intensity can be decomposed into a twophase Cox distribution, where the rst phase has an intensity > and the second phase intensity (Fig. 2.11). According to Theorem 2.1 a hyperexponential distribution with phases is equivalent to a Cox distribution with the same number of phases. The case = 2 is shown in Fig. 2.13. We have another property of Cox distributions (Iversen & Nielsen, 1985 [46]): Theorem 2.2 The phases in any Cox distribution can be ordered such as i i+1 . Theorem 2.1 shows that an exponential distribution is equivalent to a homogeneous Cox distribution (homogeneous: same intensities in all phases) with intensity m and an innite number of phases (Fig. 2.11). We notice that the branching probabilities are constant. Fig. 2.12 corresponds to a weighted sum of Erlangk distributions where the weighting factors are geometrically distributed.

2.3. COMBINATION OF RANDOM VARIABLES

69

.. ..................... ................... ... .. ..

.. ..................... .............. ..... .. .. .. .

.. ..................... .............. ..... .. . .. .. .

.. . .. ............................................. ............................................. . . . .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. ... ... .. ... . .. .. . .. .. . .. . . . . .......................................................................................... . . .................................................................... ................... . . .. . .. ...

Figure 2.11: An exponential distribution with rate is equivalent to the shown Cox2 distribution (Theorem 2.1).


Figure 2.12: An exponential distribution with rate λ is by successive decomposition transformed into a compound distribution of homogeneous Erlang-k distributions with rate μ > λ, where the weighting factors follow a geometric distribution (quotient p = λ/μ).


Figure 2.13: A hyper-exponential distribution (Fig. 2.6) with two phases (λ1 > λ2, p2 = 1 − p1) can be transformed into a Cox-2 distribution.


CHAPTER 2. TIME INTERVAL MODELING

By using phase diagrams it is easy to see that any exponential time interval (μ) can be decomposed into phase-type distributions (λi), where λi ≥ μ. Referring to Fig. 2.14 we notice that the rate out of the macro-state (dashed box) is independent of the micro-state. When the number of phases k is finite and there is no feedback, the final phase must have rate μ.

Figure 2.14: This phase-type distribution is equivalent to a single exponential when pi λi = μ. Thus λi ≥ μ, as 0 < pi ≤ 1.
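The equivalence stated in the caption can be illustrated by simulation. The following Python sketch is our own (the exit probabilities pi are arbitrary choices): it walks through a phase diagram with λi = μ/pi, so that pi·λi = μ in every phase, and checks that the total time has the mean 1/μ and form factor 2 of an exponential distribution.

```python
import random

def phase_type_sample(mu, p, rng):
    """Walk through the phases of Fig. 2.14: phase i has rate
    lambda_i = mu / p_i (so that p_i * lambda_i = mu), and the
    process exits after phase i with probability p_i."""
    total = 0.0
    for p_i in p:
        total += rng.expovariate(mu / p_i)   # sojourn time in phase i
        if rng.random() < p_i:               # exit with probability p_i
            return total
    return total

rng = random.Random(1)
mu = 2.0
exit_probs = [0.3, 0.5, 1.0]                 # arbitrary; p_k = 1 ends the chain
samples = [phase_type_sample(mu, exit_probs, rng) for _ in range(200_000)]
m1 = sum(samples) / len(samples)
m2 = sum(x * x for x in samples) / len(samples)
print("mean:", m1, " theory 1/mu =", 1 / mu)
print("form factor:", m2 / m1 ** 2, " theory (exponential): 2")
```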

Importance of Cox distribution

Cox distributions have attracted a lot of attention during recent years. They are of great importance due to the following properties:

a. A Cox distribution can be analyzed using the method of phases.

b. One can approximate an arbitrary distribution arbitrarily well with a Cox distribution. If a property is valid for a Cox distribution, then it is valid for any distribution of practical interest.

By using Cox distributions we can with elementary methods obtain results which previously required very advanced mathematics. In connection with practical applications of the theory, we use these methods to estimate the parameters of a Cox distribution. In general there are 2k parameters in an unsolved statistical problem. Normally, we may choose a special Cox distribution (e.g. Erlang-k or hyper-exponential distribution) and approximate the first moments. By numerical simulation on computers using the Roulette method, we automatically obtain the observations of the time intervals as a Cox distribution with the same intensities in all phases.
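The method of phases also makes a Cox distribution easy to simulate. The sketch below is our own (parameter values are arbitrary): a Cox-2 variate is generated phase by phase, and the sample mean is compared with the first moment m1 = 1/λ1 + p/λ2.

```python
import random

def cox2_sample(lam1, lam2, p, rng):
    """Cox-2 variate: an Exp(lam1) phase, then with probability p
    a second Exp(lam2) phase."""
    t = rng.expovariate(lam1)
    if rng.random() < p:
        t += rng.expovariate(lam2)
    return t

rng = random.Random(7)
lam1, lam2, p = 4.0, 1.5, 0.4        # arbitrary parameters
n = 100_000
mean = sum(cox2_sample(lam1, lam2, p, rng) for _ in range(n)) / n
print("sample mean:", mean, " theory m1 = 1/lam1 + p/lam2 =", 1 / lam1 + p / lam2)
```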


2.4 Other time distributions

In principle, every distribution with non-negative values may be used as a time distribution to describe the time intervals. For distributions which are widely applied in queueing theory, we have the following abbreviated notations (cf. Sec. 10.1):

M : Exponential distribution (Markov),
Ek : Erlang-k distribution,
Hn : Hyper-exponential distribution of order n,
D : Constant (Deterministic),
Cox : Cox distribution,
G : General = arbitrary distribution.

Gamma distribution

If we suppose the parameter k in the Erlang-k distribution (2.46) takes non-negative real values, then we obtain the gamma distribution:

f(t) = (λ (λt)^(k−1) / Γ(k)) · e^(−λt) ,  λ > 0, t ≥ 0.  (2.97)

The mean value and variance are given in (2.48) and (2.49).
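The mean k/λ and variance k/λ² can be checked numerically for a non-integer k. The sketch below is ours (parameter values are arbitrary); it uses the standard-library gamma sampler, which takes the shape k and the scale 1/λ.

```python
import random

rng = random.Random(42)
k, lam = 2.5, 2.0                    # arbitrary shape and rate
n = 200_000
xs = [rng.gammavariate(k, 1 / lam) for _ in range(n)]   # scale = 1/lam
m1 = sum(xs) / n
var = sum((x - m1) ** 2 for x in xs) / n
print("mean:", m1, " theory k/lam =", k / lam)
print("variance:", var, " theory k/lam^2 =", k / lam ** 2)
```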

Weibull distribution

A distribution also known in teletraffic theory is the Weibull distribution We(k, λ):

F(t) = 1 − e^(−(λt)^k) ,  t ≥ 0, k > 0, λ > 0.  (2.98)

This distribution has a time-dependent death intensity (2.21):

μ(t) = F′(t) / (1 − F(t)) = e^(−(λt)^k) · λ k (λt)^(k−1) / e^(−(λt)^k)

     = λ k (λt)^(k−1) .  (2.99)

The distribution has its origin in reliability theory. For k = 1 we get the exponential distribution.

Heavy-tailed distributions


To describe data with large variations we often use heavy-tailed distributions. A distribution is heavy-tailed in the strict sense if the tail of the distribution function behaves as a power law, i.e. as

1 − F(t) ~ t^(−α) ,  0 < α ≤ 2 .

The Pareto distribution (2.73) is heavy-tailed in the strict sense. Sometimes distributions with a tail heavier than that of the exponential distribution are also classified as heavy-tailed. Examples are hyper-exponential, Weibull, and log-normal distributions. Another class of distributions is the sub-exponential distributions. These subjects are dealt with in the literature.

Later, we will deal with a set of discrete distributions which also describe life-times, such as the geometric distribution, Pascal distribution, binomial distribution, Westerberg distribution, etc.

In practice, the parameters of distributions are not always stationary. The service (holding) times can be physically correlated with the state of the system. In man-machine systems the service time changes because of busyness (decrease) or tiredness (increase). In the same way, electromechanical systems work more slowly during periods of high load because the voltage decreases.
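The power-law tail can be seen directly in simulated data. The sketch below is ours (the tail exponent α is an arbitrary choice); it samples a Pareto distribution with 1 − F(t) = t^(−α), t ≥ 1, using the standard-library sampler, and compares the empirical tail probability with the power law.

```python
import random

rng = random.Random(3)
alpha, n = 1.5, 100_000              # tail exponent, 0 < alpha <= 2
# paretovariate(alpha) samples with P(X > t) = t**(-alpha) for t >= 1
xs = [rng.paretovariate(alpha) for _ in range(n)]
tail = {t: sum(1 for x in xs if x > t) / n for t in (10.0, 30.0)}
for t in sorted(tail):
    print(t, " empirical:", tail[t], " power law:", t ** -alpha)
```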

2.5 Observations of life-time distribution

Fig. 2.7 shows an example of observed holding times from a local telephone exchange. The holding time consists of both signalling time and, if the call is answered, conversation time. Fig. 3.4 shows observed inter-arrival times of incoming calls to a transit telephone exchange during one hour. A particular outcome of a random variable is called a random variate. Thus observations of holding times are variates of the random variable we want to model.

From its very beginning, teletraffic theory has been characterized by a strong interaction between theory and practice, and there have been excellent possibilities to carry out measurements. Erlang (1920, [12]) reports a measurement where 2461 conversation times were recorded in a telephone exchange in Copenhagen in 1916. Palm (1943 [92]) analyzed the field of traffic measurements, both theoretically and practically, and implemented extensive measurements in Sweden. By the use of computer technology a large amount of data can be collected. The first stored-program-controlled measurement by a mini-computer is described in (Iversen, 1973 [40]). The importance of using discrete values of time when observing values is dealt with in Chapter 13. Bolotin (1994, [7]) has measured and modelled telecommunication holding times.

Numerous measurements on computer systems have been carried out. Whereas in telephone systems we seldom have a form factor greater than 6, we observe form factors greater than 100 in data traffic. This is for example the case for data transmission, where we send either a few characters or a large quantity of data. More recent extensive measurements have been performed and modeled using self-similar traffic models (Jerkins et al., 1999 [59]). These subjects are dealt with in more advanced chapters. For more advanced modelling, Laplace transforms and Z-transforms are widely used.
Updated: 2010-02-17


Chapter 3 Arrival Processes


Arrival processes, such as telephone calls arriving at a switching system or messages arriving at a server, are mathematically described as stochastic point processes. For a point process, we have to be able to distinguish two arrivals from each other. Information concerning the single arrival (e.g. service time, number of customers) is ignored. Such information can only be used to determine whether an arrival belongs to the process or not.

The mathematical theory of point processes was founded and developed by the Swede Conny Palm during the 1940s and has been widely applied in many fields. It was mathematically refined by Khintchine ([71], 1968).

The Poisson process is the most important point process. Later we will realize that its role among point processes is as fundamental as the role of the Normal distribution among statistical distributions. By the central limit theorem we obtain the Normal distribution when adding random variables. In a similar way we obtain the Poisson process when superposing stochastic point processes. Most other applied point processes are generalizations or modifications of the Poisson process. This process gives a surprisingly good description of many real-life processes, because it is the most random process. The more complex a process is, the better it will in general be modeled by a Poisson process.

Due to its great importance in practice, we shall study the Poisson process in detail in this chapter. First (Sec. 3.5) we base our study on a physical model, with main emphasis upon the probability distributions associated with the process, and then we consider some important properties of the Poisson process (Sec. 3.6). Finally, in Sec. 3.7 we consider the interrupted Poisson process and the batched Poisson process as examples of generalizations.


[Plot: accumulated number of calls (0 - 120) versus time (0 - 60 s).]

Figure 3.1: The call arrival process at the incoming lines of a transit exchange.

3.1 Description of point processes

In the following we only consider simple point processes, i.e. we exclude multiple arrivals, as for example twin arrivals. For telephone calls this may be realized by choosing a sufficiently detailed time scale. Consider arrival times where the i-th call arrives at time Ti:

0 = T0 < T1 < T2 < . . . < Ti < Ti+1 < . . . .  (3.1)

The first observation takes place at time T0 = 0. The number of calls in the half-open interval [0, t[ is denoted by Nt. Here Nt is a random variable with continuous time parameter and discrete state space. When t increases, Nt never decreases. The time distance between two successive arrivals is:

Xi = Ti − Ti−1 ,  i = 1, 2, . . . .  (3.2)

This is called the inter-arrival time, and the distribution of this interval is called the inter-arrival time distribution. Corresponding to the two random variables Nt and Xi, a point process can be characterized in two ways:


1. Number representation Nt: the time interval t is kept constant, and we observe the random variable Nt for the number of calls in t.

2. Interval representation Ti: the number of arriving calls n is kept constant, and we observe the random variable Ti for the time interval until there have been n arrivals (especially T1 = X1).

The fundamental relationship between the two representations is given by the following simple relation:

Nt < n  if and only if  Tn = X1 + X2 + . . . + Xn ≥ t ,  n = 1, 2, . . .  (3.3)

This is expressed by Feller-Jensen's identity:

p{Nt < n} = p{Tn ≥ t} ,  n = 1, 2, . . .  (3.4)

Analysis of point processes can be based on both of these representations. In principle they are equivalent. Interval representation corresponds to the usual time series analysis. If we for example let n = 1, we obtain call averages, i.e. statistics on a per-call basis. Number representation has no parallel in time series analysis. The statistics we obtain are averaged over time and we get time averages, i.e. statistics on a per-time-unit basis (cf. the difference between call congestion and time congestion). The statistics of interest when studying point processes can be classified according to the two representations.

3.1.1 Basic properties of number representation

There are three properties which are of interest:

1. The total number of arrivals in the interval [t1, t2[ is equal to Nt2 − Nt1. The average number of calls in the same interval is called the renewal function H:

H(t1, t2) = E{Nt2 − Nt1} .  (3.5)

2. The density of arriving calls at time t (time average) is:

λt = lim_{Δt→0} (Nt+Δt − Nt)/Δt = N′t .  (3.6)

We assume that λt exists and is finite. We may interpret λt as the intensity by which arrivals occur at time t (cf. Sec. 2.2.2). For simple or ordinary point processes, we have:

p{Nt+Δt − Nt ≥ 2} = o(Δt) ,  (3.7)
p{Nt+Δt − Nt = 1} = λt Δt + o(Δt) ,  (3.8)
p{Nt+Δt − Nt = 0} = 1 − λt Δt + o(Δt) ,  (3.9)

where by definition:

lim_{Δt→0} o(Δt)/Δt = 0 .  (3.10)


3. Index of Dispersion for Counts, IDC. To describe second-order properties of the number representation we use the index of dispersion for counts, IDC. This describes the variations of the arrival process during a time interval t and is defined as:

IDC = Var{Nt} / E{Nt} .  (3.11)

By dividing the time interval t into x intervals of duration t/x and observing the number of events during these intervals, we obtain an estimate of IDC(t). For the Poisson process IDC becomes equal to one. IDC is equal to the peakedness, which we later introduce to characterize the number of busy channels in a traffic process (4.7).
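This estimation procedure can be sketched in Python for a simulated Poisson process (our own sketch; the rate, window length, and number of windows are arbitrary choices): arrivals are generated as exponential gaps, events are counted per window, and IDC comes out close to one.

```python
import random

rng = random.Random(5)
lam, window, n_windows = 3.0, 1.0, 20_000   # arbitrary choices
T_end = n_windows * window

counts = [0] * n_windows
t = 0.0
while True:
    t += rng.expovariate(lam)        # exponential gaps => Poisson process
    if t >= T_end:
        break
    counts[int(t // window)] += 1    # count events per window

mean = sum(counts) / n_windows
var = sum((c - mean) ** 2 for c in counts) / n_windows
idc = var / mean
print("IDC:", idc, " (theory for the Poisson process: 1)")
```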

3.1.2 Basic properties of interval representation

Also here we have three properties of interest:

4. The probability density function f(t) of the time intervals Xi (3.2) (and, by convolving the distribution with itself i − 1 times, the distribution of the time until the i-th arrival):

Fi(t) = p{Xi ≤ t} ,  (3.12)
E{Xi} = m1,i .  (3.13)

The mean value is a call average. A renewal process is a point process where consecutive inter-arrival times are stochastically independent of each other and have the same distribution, i.e. m1,i = m1 (IID = Identically and Independently Distributed).

5. The distribution function V(t) of the time interval from a random point of time (epoch) until the first arrival occurs. The mean value of V(t) is a time average, which is calculated per time unit.

6. Index of Dispersion for Intervals, IDI. To describe second-order properties of the interval representation we use the index of dispersion for intervals, IDI. This is defined as:

IDI = Var{Xi} / E{Xi}² = ε − 1 ,  (3.14)

where Xi is the inter-arrival time. For the Poisson process, which has exponentially distributed inter-arrival times, IDI becomes equal to one. IDI is equal to Palm's form factor minus one (2.13). In general, IDI is more difficult to obtain from observations than IDC, and more sensitive to the accuracy of measurements and the smoothing of the traffic process. Digital technology is more suitable for observation of IDC, whereas it complicates the observation of IDI (Chap. 13).


Which of the two representations to use in practice depends on the actual case. This can be illustrated by the following examples.
Example 3.1.1: Measuring principles
Measurements of teletraffic performance are carried out by one of the two following basic principles:
1. Passive measurements. The measuring equipment records at regular time intervals the number of arrivals since the last recording. This corresponds to the scanning method, which is suitable for computers. This corresponds to the number representation, where the time interval is fixed.
2. Active measurements. The measuring equipment records an event at the instant it takes place. We keep the number of events fixed and observe the measuring interval. Examples are recording instruments. This corresponds to the interval representation, where we obtain statistics for each single call. □

Example 3.1.2: Test calls
Investigation of the traffic quality. In practice this is done in two ways:
1. The traffic quality is estimated by collecting statistics on the outcome of test calls made to specific (dummy) subscribers. The calls are generated during the busy hour independently of the actual traffic. The test equipment records the number of blocked calls etc. The obtained statistics correspond to time averages of the performance measure. Unfortunately, this method increases the offered load on the system. Theoretically, the obtained performance measures will differ from the correct values.
2. The test equipment collects data from calls number N, 2N, 3N, . . ., where for example N = 1000. The traffic process is unchanged, and the performance statistics is a call average. □

Example 3.1.3: Call statistics
A subscriber evaluates the quality by the fraction of calls which are blocked, i.e. a call average. The operator evaluates the quality by the proportion of time when all trunks are busy, i.e. a time average. The two types of average values (time/call) are often mixed up, resulting in apparently conflicting statements. □

Example 3.1.4: Called party busy (B-busy)
At a telephone exchange typically 10% of the subscribers are busy, but 20% of the call attempts are blocked because the called party is busy (B-busy). This phenomenon can be explained by the fact that half of the subscribers are passive (i.e. make no call attempts and receive no calls), whereas 20% of the remaining subscribers are busy. G. Lind (1976 [82]) analyzed the problem under the assumption that each subscriber on the average has the same number of incoming and outgoing calls. If the mean value and form factor of the distribution of traffic per subscriber are b and ε, respectively, then the probability that a call attempt gets B-busy is ε · b. □
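The numbers of the example can be reproduced by direct computation. The sketch below is our own construction under Lind's assumption that incoming calls are directed to subscriber i with probability proportional to her traffic ai, so that p(B-busy) = Σ ai² / Σ ai = ε · b.

```python
# Half of the subscribers are passive, the rest carry 0.2 erlang each:
traffic = [0.0] * 50 + [0.2] * 50

b = sum(traffic) / len(traffic)                    # mean traffic per subscriber
m2 = sum(a * a for a in traffic) / len(traffic)    # second moment
eps = m2 / b ** 2                                  # form factor

# Calls are directed to subscriber i w.p. a_i / sum(a) and find her busy
# w.p. a_i, so p(B-busy) = sum(a_i^2) / sum(a_i) = eps * b:
p_b_busy = sum(a * a for a in traffic) / sum(traffic)
print("b =", b, " eps =", eps, " p(B-busy) =", p_b_busy, " eps*b =", eps * b)
```

With this population, b = 0.1 and ε = 2, so ε · b = 0.2: 10% of the subscribers are busy, yet 20% of the call attempts are blocked, as in the example.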


3.2 Characteristics of point processes

Above we have discussed a very general structure for point processes. For specific applications we have to introduce further properties. Below we only consider the number representation, but we could do the same based on the interval representation.

3.2.1 Stationarity (Time homogeneity)

Regardless of the position on the time axis, the probability distributions describing the point process are independent of the instant of time. The following definition is useful in practice:

Definition: For an arbitrary t2 > 0 and every k ≥ 0, the probability that there are k arrivals in [t1, t1 + t2[ is independent of t1, i.e. for all t, k we have:

p{Nt1+t2 − Nt1 = k} = p{Nt1+t2+t − Nt1+t = k} .  (3.15)

There are many other definitions of stationarity, some stronger, some weaker. Stationarity can also be defined by the interval representation by requiring all Xi to be independent and identically distributed (IID). A weaker definition is that all first- and second-order moments (e.g. the mean value and variance) of a point process must be invariant with respect to time shifts. Erlang introduced the concept of statistical equilibrium, which requires that the derivatives of the process with respect to time are zero.

3.2.2 Independence

This property can be expressed as the requirement that the future evolution of the process only depends upon the actual state.

Definition: The probability that k events (k integer, k ≥ 0) take place in [t1, t1 + t2[ is independent of events before time t1:

p{Nt2 − Nt1 = k | Nt1 − Nt0 = n} = p{Nt2 − Nt1 = k} .  (3.16)

If this holds for all t, then the process is a Markov process: the future evolution only depends on the present state, but is independent of how this has been obtained. This is the lack-of-memory property. If this property only holds for certain time points (e.g. arrival times), these points are called equilibrium points or regeneration points. The process then has a limited memory, and we only need to keep record of the past back to the latest regeneration point.



Example 3.2.1: Equilibrium points = regeneration points
Examples of point processes with equilibrium points:


a) The Poisson process is (as we shall see below) memoryless, and all points of the time axis are equilibrium points.
b) A scanning process, where scans occur in a regular cycle, has limited memory. The latest scanning instant has full information about the scanning process, and therefore all scanning points are equilibrium points.
c) If we superpose the above-mentioned Poisson process and scanning process (for instance by investigating the arrival processes in a computer system), the only equilibrium points in the compound process are the scanning instants.
d) Consider a queueing system with a Poisson arrival process, constant service time, and a single server. The number of queueing positions can be finite or infinite. Let a point process be defined by the time instants when service starts. All time intervals when the system is idle will be equilibrium points. During periods where the system is busy, the time points for acceptance of new calls for service depend on the instant when the first call of the busy period started service. □

3.2.3 Simplicity or ordinarity

We have already mentioned (3.7) that we exclude processes with multiple arrivals.

Definition: A point process is called simple or ordinary if the probability that there is more than one event at a given point is zero:

p{Nt+Δt − Nt ≥ 2} = o(Δt) .  (3.17)

With the interval representation, the inter-arrival time distribution must not have a probability mass (atom) at zero, i.e. the distribution is continuous at zero (2.2):

F(0+) = 0 .  (3.18)

Example 3.2.2: Multiple events
The time points of traffic accidents form a simple process. The number of damaged cars or dead people will be a non-simple point process with multiple events. □


3.3 Little's theorem

This is the only general result that is valid for all queueing systems. It was first published by Little (1961 [84]). The proof below, based on the theory of stochastic processes, is due to (Eilon, 1969 [25]).

We consider a queueing system where customers arrive according to a stochastic process. Customers enter the system at a random time and wait to get service; after being served they leave the system. In Fig. 3.2, both the arrival and the departure processes are considered as stochastic processes with the cumulated number of customers as ordinate. We consider a time interval T and assume that the system is in statistical equilibrium at the initial time t = 0. We use the following notation (Fig. 3.2):

N(T) = number of arrivals in the period T.
A(T) = total service time of all customers in the period T
     = the shaded area between the two curves
     = the carried traffic volume.
λ(T) = N(T)/T = average call intensity in the period T.
W(T) = A(T)/N(T) = mean holding time in the system per call in the period T.
L(T) = A(T)/T = average number of calls in the system in the period T.

We have the important relation among these variables:

L(T) = A(T)/T = (A(T)/N(T)) · (N(T)/T) = λ(T) · W(T) .  (3.19)

If the limits

λ = lim_{T→∞} λ(T)  and  W = lim_{T→∞} W(T)

exist, then the limiting value of L(T) also exists, and it becomes:

L = λ · W  (Little's theorem).  (3.20)

This simple formula is valid for all general queueing systems. The proof has been refined over the years. We shall use this formula in Chaps. 9-12.
Example 3.3.1: Little's formula
If we only consider the waiting positions, the formula shows: the mean queue length is equal to the call intensity multiplied by the mean waiting time.


If we only consider the servers, the formula shows: the carried traffic is equal to the arrival intensity multiplied by the mean service time (A = λ · s = λ/μ). This corresponds to the definition of offered traffic in Sec. 1.7. □
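Relation (3.19) holds identically for any finite trace. The sketch below is our own toy example (arbitrary arrival epochs and service times, an infinite-server view so that every customer is served immediately and all departures fall inside the observation period): it computes L(T), λ(T), and W(T) and confirms L(T) = λ(T) · W(T).

```python
# Toy trace: arrival epochs and service times (infinite-server view, so each
# customer occupies the system exactly for its service time).
arrivals = [0.0, 0.4, 1.1, 2.5, 3.0, 4.2, 5.6, 6.1, 7.9, 8.4]
services = [1.2, 0.3, 2.0, 0.7, 1.5, 0.9, 1.1, 0.4, 0.8, 1.0]
T = 12.0                             # observation period; all departures < T

N = len(arrivals)
A = sum(services)                    # carried traffic volume, the area A(T)
lam = N / T                          # average call intensity lambda(T)
W = A / N                            # mean holding time per call W(T)
L = A / T                            # average number of calls in system L(T)
print("L =", L, "  lambda*W =", lam * W)
```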

[Plot: cumulated number of events (0 - 9) versus time, with the arrival process as the upper curve and the departure process as the lower curve.]
Figure 3.2: A queueing system with arrival and departure of customers. The vertical distance between the two curves is equal to the actual number of customers being served. The customers in general do not depart in the same order as they arrive, so the horizontal distance between the curves does not describe the actual time in the system of a customer.

3.4 Characteristics of the Poisson process

The fundamental properties of the Poisson process were defined in Sec. 3.2:
a. stationary,
b. independent at all time instants (epochs), and
c. simple.


Here (b) and (c) are fundamental properties, whereas (a) can be relaxed: we may allow a Poisson process to have a time-dependent intensity. From the above properties we may derive other properties that are sufficient for defining the Poisson process. The two most important ones are:

Number representation: The number of events within a time interval of fixed length is Poisson distributed. Therefore, the process is named the Poisson process.

Interval representation: The time distance Xi (3.2) between consecutive events is exponentially distributed.

In this case, using (2.46) and (2.47), Feller-Jensen's identity (3.4) shows the fundamental relationship between the cumulated Poisson distribution and the Erlang distribution (Sec. 3.5.2):

Σ_{j=0}^{n−1} ((λt)^j / j!) · e^(−λt) = ∫_{x=t}^{∞} (λ (λx)^(n−1) / (n−1)!) · e^(−λx) dx = 1 − F(t) .  (3.21)

This formula can also be obtained by repeated partial integration.
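The identity (3.21) is easy to verify numerically. The following sketch is ours (the values of λ, t, and n, the truncation point, and the step count are arbitrary choices): it compares the cumulated Poisson distribution with a trapezoidal approximation of the Erlang tail integral.

```python
import math

lam, t, n = 1.5, 2.0, 4              # arbitrary rate, time, and count

# Left side of (3.21): cumulated Poisson distribution.
poisson_sum = math.exp(-lam * t) * sum(
    (lam * t) ** j / math.factorial(j) for j in range(n))

# Right side: tail integral of the Erlang-n density, trapezoidal rule.
def erlang_density(x):
    return lam * (lam * x) ** (n - 1) / math.factorial(n - 1) * math.exp(-lam * x)

upper, steps = 60.0, 200_000         # 60 stands in for "infinity" here
h = (upper - t) / steps
integral = h * (0.5 * (erlang_density(t) + erlang_density(upper))
                + sum(erlang_density(t + i * h) for i in range(1, steps)))
print(poisson_sum, integral)
```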

3.5 Distributions of the Poisson process

In this section we consider the Poisson process in a dynamical and physical way (Fry, 1928 [35]) & (Jensen, 1954 [12]). The derivations are based on a simple physical model and focus upon the probability distributions associated with the Poisson process.

The physical model is as follows: events (arrivals) are placed at random on the real axis in such a way that every event is placed independently of all other events. So we put the events uniformly and independently on the real axis. The average density is chosen as λ events (arrivals) per time unit. If we consider the axis as a time axis, then on the average we shall have λ arrivals per time unit. The probability that a given arrival pattern occurs within a time interval is independent of the location of the interval on the time axis.

Figure 3.3: When deriving the Poisson process, we consider arrivals within two non-overlapping time intervals of duration t1 and t2, respectively.

Let p(ν, t) denote the probability that ν events occur within a time interval of duration t. The mathematical formulation of the above model is as follows:


1. Independence: Let t1 and t2 be two non-overlapping intervals (Fig. 3.3). Because of the independence assumption we have:

p(0, t1) · p(0, t2) = p(0, t1 + t2) .  (3.22)

2. We notice that (3.22) implies that the event no arrivals within an interval of length 0 has probability one:

p(0, 0) = 1 .  (3.23)

3. The mean value of the time interval between two successive arrivals is 1/λ (2.7):

∫_0^∞ p(0, t) dt = 1/λ ,  0 < 1/λ < ∞ .  (3.24)

Here p(0, t) is the probability that there are no arrivals within the time interval (0, t), which is identical to the probability that the time until the first event is larger than t (the complementary distribution function). The mean value (3.24) is obtained directly from (2.7). Formula (3.24) can also be interpreted as the area under the curve p(0, t), which is a non-increasing function decreasing from 1 to 0.

4. We also notice that (3.24) implies that the probability of no arrivals within a time interval of length ∞ is zero, as an arrival eventually takes place:

p(0, ∞) = 0 .  (3.25)

3.5.1 Exponential distribution

The fundamental step in the following derivation of the Poisson distribution is to derive p(0, t), the probability of no arrivals within a time interval of length t, i.e. the probability that the first arrival appears later than t. We will show that 1 − p(0, t) = F(t) is an exponential distribution (cf. Sec. 2.1.1). From (3.22) we have:

ln p(0, t1) + ln p(0, t2) = ln p(0, t1 + t2) .  (3.26)

Letting ln p(0, t) = f(t), (3.26) can be written as:

f(t1) + f(t2) = f(t1 + t2) .  (3.27)

By differentiation with respect to e.g. t2 we have:

f′(t2) = f′(t1 + t2) .


From this we notice that f′(t) must be a constant, and therefore:

f(t) = a + b · t .  (3.28)

By inserting (3.28) into (3.27) we obtain a = 0. Therefore p(0, t) has the form:

p(0, t) = e^(b t) .

From (3.24) we obtain b:

∫_0^∞ p(0, t) dt = ∫_0^∞ e^(b t) dt = −1/b = 1/λ ,

or b = −λ. Thus on the basis of items (1) and (3) above we have shown that:

p(0, t) = e^(−λt) .  (3.29)

If we consider p(0, t) as the probability that the next event arrives later than t, then the time until the next arrival is exponentially distributed (Sec. 2.1.1):

1 − p(0, t) = F(t) = 1 − e^(−λt) ,  λ > 0, t ≥ 0 ,  (3.30)
F′(t) = f(t) = λ e^(−λt) ,  λ > 0, t ≥ 0 .  (3.31)

We have the following mean value and variance:

m1 = 1/λ ,  σ² = 1/λ² .  (3.32)

The probability that the next arrival appears within the interval (t, t + dt) may be written as:

f(t) dt = λ e^(−λt) dt = λ · p(0, t) dt ,  (3.33)

i.e. the probability that an arrival appears within the interval (t, t + dt) is equal to λ dt, independent of t and proportional to dt (2.24). Because λ is independent of the actual age t, the exponential distribution has no memory (cf. Secs. 2.1.1 & 2.2.2). The process has no age. The parameter λ is called the intensity or rate of both the exponential distribution and of the related Poisson process, and it corresponds to the intensity in (3.6). The exponential distribution is in general a very good model of call inter-arrival times when the traffic is generated by human beings (Fig. 3.4).
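The lack-of-memory property states that p{X > s + t | X > s} = p{X > t}, which follows from e^(−λ(s+t)) / e^(−λs) = e^(−λt). A direct numerical check (our sketch, with an arbitrary rate λ):

```python
import math

lam = 0.7                            # arbitrary rate

def survival(t):
    """p{X > t} = e^(-lam*t) for the exponential distribution."""
    return math.exp(-lam * t)

for s in (0.5, 2.0, 10.0):
    for t in (0.1, 1.0, 3.0):
        cond = survival(s + t) / survival(s)   # p{X > s+t | X > s}
        assert abs(cond - survival(t)) < 1e-12
print("memoryless: p{X > s+t | X > s} = p{X > t} for all tested s, t")
```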

[Histogram, logarithmic scale: number of observations (1 - 2000) versus inter-arrival time [scan = 0.2 s]; 5916 observations, curve = theory.]

Figure 3.4: Inter-arrival time distribution of calls at a transit exchange. The theoretical values are based on the assumption of exponentially distributed inter-arrival times. Due to the measuring principle (scanning method) the continuous exponential distribution is transformed into a discrete Westerberg distribution (13.14) (χ²-test = 18.86 with 19 degrees of freedom, percentile = 53).

3.5.2  Erlang-k distribution

From the above we notice that the time until exactly k arrivals have appeared is a sum of k IID (independently and identically distributed) exponentially distributed random variables. The distribution of this sum is an Erlang-k distribution (Sec. 2.3.1) and the density is given by (2.46):

    fₖ(t) dt = (λ t)^{k−1}/(k−1)! · λ e^{−λ t} dt ,    λ > 0 ,  t ≥ 0 ,  k = 1, 2, . . .      (3.34)


The mean value, the variance, and the form factor are obtained from (2.48)–(2.52) and (3.32):

    m₁ = k/λ ,    σ² = k/λ² ,    ε = 1 + 1/k .                      (3.35)

Example 3.5.1: Call statistics from an SPC-system (cf. Example 3.1.2)
Let calls arrive to a stored program-controlled telephone exchange (SPC-system) according to a Poisson process. The exchange automatically collects full information about every 1000th call. The inter-arrival times between two registrations will then be Erlang-1000 distributed and have the form factor ε = 1.001, i.e. the registrations will take place very regularly. □
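The form factor ε = m₂/m₁² = 1 + 1/k can be checked by simulating Erlang-k variates as sums of k exponentials. A sketch (plain Python; k and λ are illustrative choices, small k keeps the deviation from 1 visible):

```python
import random

def erlang_form_factor(k, lam, n=100_000, seed=2):
    """Simulate n Erlang-k variates as sums of k exponential
    variables and estimate the form factor eps = E[T^2]/E[T]^2."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n):
        t = sum(rng.expovariate(lam) for _ in range(k))
        s1 += t
        s2 += t * t
    m1 = s1 / n
    return (s2 / n) / (m1 * m1)

eps = erlang_form_factor(k=5, lam=2.0)
# theory (3.35): eps = 1 + 1/5 = 1.2
```

For k = 1000 the same computation reproduces the value ε = 1.001 of Example 3.5.1.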

Figure 3.5: Number of Internet dial-up calls per second (900 observations, λ = 6.39 calls/s). The theoretical values are based on the assumption of a Poisson distribution. A statistical test accepts the hypothesis of a Poisson distribution.


3.5.3  Poisson distribution

We shall now show that the number of arrivals in an interval of fixed length t is Poisson distributed with mean value λt. When we know the above-mentioned exponential distribution and the Erlang distribution, the derivation of the Poisson distribution is only a matter of applying simple combinatorics. The proof can be carried through by induction. We want to derive p(i, t), the probability of i arrivals within a time interval t. Let us assume that:

    p(i−1, t) = (λ t)^{i−1}/(i−1)! · e^{−λ t} ,    λ > 0 ,  i = 1, 2, . . .

This is correct for i = 1 (3.29). The interval (0, t) is divided into three non-overlapping intervals (0, t₁), (t₁, t₁ + dt₁) and (t₁ + dt₁, t). From the earlier independence assumption we know that events within an interval are independent of events in the other intervals, because the intervals are non-overlapping. By choosing t₁ so that the last arrival within (0, t) appears in (t₁, t₁ + dt₁), the probability p(i, t) is obtained by integrating over all possible values of t₁ as a product of the following three independent probabilities:

a) The probability that (i−1) arrivals occur within the time interval (0, t₁):

    p(i−1, t₁) = (λ t₁)^{i−1}/(i−1)! · e^{−λ t₁} ,    0 ≤ t₁ ≤ t .

b) The probability that there is just one arrival within the time interval from t₁ to t₁ + dt₁:

    λ dt₁ .

c) The probability that no arrivals occur from t₁ + dt₁ to t:

    e^{−λ (t − t₁)} .

The product of the first two probabilities is the probability that the i'th arrival appears in (t₁, t₁ + dt₁), i.e. the Erlang distribution from the previous section. By integration we have:
    p(i, t) = ∫₀ᵗ (λ t₁)^{i−1}/(i−1)! · λ e^{−λ t₁} · e^{−λ (t − t₁)} dt₁

            = λ^i e^{−λ t}/(i−1)! · ∫₀ᵗ t₁^{i−1} dt₁ ,

    p(i, t) = (λ t)^i / i! · e^{−λ t} ,    i = 0, 1, . . . ,  λ > 0 .       (3.36)


This is the Poisson distribution which we thus have obtained from (3.29) by induction. The mean value and variance are:

    m₁ = λ t ,                                                      (3.37)

    σ² = λ t .                                                      (3.38)
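The counting result (3.36)–(3.38) can be verified empirically: counts of a simulated Poisson process over a fixed interval should have mean ≈ variance ≈ λt. A sketch (plain Python; λ and t are illustrative choices):

```python
import random

def poisson_counts(lam, t, n=50_000, seed=3):
    """Count arrivals in (0, t) for n replications of a Poisson
    process built from exponential interarrival times."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        clock, i = rng.expovariate(lam), 0
        while clock <= t:           # each arrival at or before t is counted
            clock += rng.expovariate(lam)
            i += 1
        counts.append(i)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return mean, var

mean, var = poisson_counts(lam=4.0, t=2.0)
# both should be close to lam*t = 8
```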

The Poisson distribution is in general a very good model for the number of calls in a telecommunication system (Fig. 3.5) or jobs in a computer system.

Figure 3.6: The carried traffic in a slotted Aloha system (curves: Ideal, Slotted Aloha, Simple Aloha; carried traffic versus offered traffic) has a maximum throughput twice the maximum throughput of the simple Aloha system (Example 3.5.2). The simple Aloha protocol is dealt with in Example 4.2.1.

Example 3.5.2: Slotted Aloha Satellite System
Let us consider a digital satellite communication system with constant packet length h. The satellite is in a geostationary position about 36,000 km above equator, so the round-trip delay is about 280 ms. The time axis is divided into slots of fixed duration corresponding to the packet length h. The individual terminal (earth station) transmits packets so that they are synchronized with the time slots. All packets generated during a time slot are transmitted in the next time slot. The transmission of a packet is only correct if it is the only packet being transmitted in a time slot. If more packets are transmitted simultaneously, we have a collision and all packets are lost and must be retransmitted. All earth stations receive all packets and can thus decide whether a packet is transmitted correctly. Due to the time delay, the earth stations transmit packets independently. If the total arrival process is a Poisson process (rate λ), then we get a Poisson distributed number of

packets in each time slot:

    p(i) = (λ h)^i / i! · e^{−λ h} .                                (3.39)

The probability of a correct transmission is:

    p(1) = λ h · e^{−λ h} .                                         (3.40)
This corresponds to the proportion of the time axis which is utilized effectively. This function, which is shown in Fig. 3.6, has an optimum when the derivative with respect to λh is zero:

    p′(1) = e^{−λ h} (1 − λ h) = 0 ,    λ h = 1 .                   (3.41)

Inserting this value in (3.40) we get:

    max{p(1)} = e^{−1} = 0.3679 .                                   (3.42)

We thus have a maximum utilization of the channel equal to 0.3679, when on the average we transmit one packet per time slot. A similar result holds when there is a limited number of terminals and the number of packets per time slot is Binomially distributed. □
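Equations (3.40)–(3.42) are easy to evaluate directly. A sketch (plain Python) computing the per-slot throughput G·e^{−G}, where G = λh is the offered traffic per slot, and confirming that the maximum over a grid is attained at G = 1:

```python
import math

def slotted_aloha_throughput(offered):
    """Per-slot success probability p(1) = G * exp(-G) from (3.40),
    where G = lam*h is the offered traffic per slot."""
    return offered * math.exp(-offered)

peak = slotted_aloha_throughput(1.0)          # = 1/e = 0.3679... (3.42)
grid = [slotted_aloha_throughput(g / 100) for g in range(1, 301)]
# the maximum over G in (0, 3] is attained at G = 1
```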

3.5.4  Static derivation of the distributions of the Poisson process

As it is known from statistics, these distributions can also be derived from the Binomial process by letting the number of trials n (e.g. throws of a die) increase to infinity and at the same time letting the probability of success in a single trial p converge to zero in such a way that the average number n·p is constant. This approach is static and does not stress the fundamental properties of the Poisson process, which has a dynamic independent existence. But it shows the relationship between the two processes as illustrated in Table 3.1. The exponential distribution is the only continuous distribution with lack of memory, and the geometric distribution is the only discrete distribution with lack of memory. For example, the next outcome of a throw of a die is independent of the previous outcome. The distributions of the two processes are shown in Table 3.1.

BINOMIAL PROCESS (discrete time; probability of success p, 0 < p < 1)
POISSON PROCESS (continuous time; intensity of success λ > 0)

Number of attempts since the previous success, or since a random attempt, to get a success:
GEOMETRIC DISTRIBUTION
    p(n) = p (1−p)^{n−1} ,   n = 1, 2, . . .
    m₁ = 1/p ,   σ² = (1−p)/p²
Interval between two successes, or from a random point until the next success:
EXPONENTIAL DISTRIBUTION
    f(t) = λ e^{−λ t} ,   t ≥ 0
    m₁ = 1/λ ,   σ² = 1/λ²

Number of attempts to get k successes:
PASCAL = NEGATIVE BINOMIAL DISTRIBUTION
    p(n | k) = C(n−1, k−1) p^k (1−p)^{n−k} ,   n ≥ k
    m₁ = k/p ,   σ² = k(1−p)/p²
Time interval until the k'th success:
ERLANG-K DISTRIBUTION
    fₖ(t) = λ (λ t)^{k−1}/(k−1)! · e^{−λ t} ,   t ≥ 0
    m₁ = k/λ ,   σ² = k/λ²

Number of successes in n attempts:
BINOMIAL DISTRIBUTION
    p(x | n) = C(n, x) p^x (1−p)^{n−x} ,   x = 0, 1, . . . , n
    m₁ = p n ,   σ² = p n (1−p)
Number of successes in a time interval t:
POISSON DISTRIBUTION
    f(x, t) = (λ t)^x / x! · e^{−λ t} ,   t ≥ 0
    m₁ = λ t ,   σ² = λ t

Table 3.1: Correspondence between the distributions of the Binomial process and the Poisson process. A success corresponds to an event or an arrival in a point process. Mean value = m₁, variance = σ². For the geometric distribution we may start with a zero class; the mean value is then reduced by one whereas the variance is the same.

3.6  Properties of the Poisson process

In this section we shall show some fundamental properties of the Poisson process. From the physical model in Sec. 3.5 we have seen that the Poisson process is the most random point process that may be found (maximum disorder process). It yields a good description of physical processes when many different factors are behind the total process. In a Poisson process events occur at random during time and therefore call averages and time averages are identical. This is the so-called PASTA property: Poisson Arrivals See Time Averages.

3.6.1  Palm's theorem (Superposition theorem)

The fundamental properties of the Poisson process among all other point processes were first discussed by the Swede Conny Palm. He showed that the exponential distribution plays the same role for stochastic point processes (e.g. inter-arrival time distributions), where point processes are superposed, as the Normal distribution does when stochastic variables are added (the central limit theorem).

Figure 3.7: By superposition of N independent point processes we obtain under certain assumptions a process which locally is a Poisson process.

Theorem 3.1 (Palm's theorem): By superposition of many independent point processes the resulting total process will locally be a Poisson process.

The term "locally" means that we consider time intervals which are so short that each process contributes at most one event during such an interval. This is a natural requirement since no process may dominate the total process (similar conditions are assumed for the central limit theorem). The theorem is valid only for simple point processes. If we consider a random point of time in a certain process, then the time until the next arrival is given by (2.32).


We superpose N processes into one total process. By appropriate choice of the time unit the mean distance between arrivals in the total process is kept constant, independent of N. The time from a random point of time to the next event in the total process is then given by (2.32):

    p{T ≤ t} = 1 − ∏_{i=1}^{N} { 1 − Vᵢ(t/N) } .                    (3.43)

If all sub-processes are identical, we get:

    p{T ≤ t} = 1 − { 1 − V(t/N) }^N .                               (3.44)

From (2.32) and (3.18) we find (letting m₁ = 1):

    lim_{t→0} v(t) = 1 ,

and thus:

    V(t) = ∫₀ᵗ 1 dt = t .                                           (3.45)

Therefore, we get from (3.44) by letting the number of sub-processes increase to infinity:

    p{T ≤ t} = lim_{N→∞} { 1 − (1 − t/N)^N } = 1 − e^{−t} ,         (3.46)

which is the exponential distribution. We have thus shown that by superposition of N identical processes we locally get a Poisson process. In a similar way we may superpose non-identical processes and locally obtain a Poisson process.
Example 3.6.1: Life-time of a route in an ad-hoc network
A route in a network consists of a number of links connecting the end-points of the route (Chap. 8). In an ad-hoc network links exist for a limited time period. The life-time of a route is therefore the time until the first link is disconnected. From Palm's theorem we see that the life-time of the route tends to be exponentially distributed. □

Corollary to Palm's theorem (Poisson superposition theorem): By superposition of N independent Poisson processes we obtain a Poisson process. This is the only case in which we obtain an exact Poisson process. It can be proven (1) by remembering that the smallest of N exponential distributions is itself an exponential distribution (Example 2.2.7) (interval representation), or (2) by observing that the sum of N Poisson distributions is a Poisson distribution (number representation).
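The corollary can be illustrated numerically via the interval representation: the time to the first event of the superposed process is the minimum of independent exponential variables, which is again exponential with the summed rate. A sketch (plain Python; the rates are arbitrary choices):

```python
import random

def superposed_first_arrival(rates, n=100_000, seed=5):
    """The time to the first event in a superposition of independent
    Poisson processes is the minimum of exponentials, i.e. again
    exponential with rate sum(rates).  Returns the simulated mean
    and the exact mean 1/sum(rates)."""
    rng = random.Random(seed)
    total = sum(rates)
    mean = sum(min(rng.expovariate(r) for r in rates)
               for _ in range(n)) / n
    return mean, 1.0 / total

mean, exact = superposed_first_arrival([0.5, 1.0, 2.5])
# mean should be close to 1/(0.5 + 1.0 + 2.5) = 0.25
```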


3.6.2  Raikov's theorem (Decomposition theorem)

A similar theorem, the decomposition theorem, is valid when we split a point process into sub-processes in a random way. If there are N times fewer events in a sub-process, then it is natural to reduce the time axis by a factor N.

Theorem 3.2 (Raikov's theorem): By a random decomposition of a point process into sub-processes, the individual sub-process converges to a Poisson process, when the probability that an event belongs to the sub-process tends to zero.

This is also indicated by the following general result. If we generate a sub-process by random splitting of a point process, choosing an event with probability pᵢ, {i = 1, 2, . . . , N}, then the sub-process has the form factor εᵢ:

    εᵢ = 2 + pᵢ (ε − 2) ,                                           (3.47)

where ε is the form factor of the original process. When pᵢ approaches zero the form factor approaches 2, as for the exponential distribution. The result is only exact when the original process is a Poisson process:

Corollary to Raikov's theorem (Poisson splitting theorem): By splitting a Poisson process into N sub-processes, each sub-process will be an independent Poisson process. This can be shown both by interval representation and by number representation.

In addition to superposition and decomposition (merge and split, or join and fork), we can make another operation on a point process, namely translation (displacement) of the individual events. When this translation for every event is a random variable, independent of all other events, an arbitrary point process will converge to a Poisson process.

As concerns point processes occurring in real life, we may, according to the above, expect that they are Poisson processes when a sufficiently large number of independent conditions for having an event are fulfilled. This is why the Poisson process, for example, is a good description of the arrival process to a local exchange, which usually is generated by many independent local subscribers.
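The Poisson splitting corollary can be checked by thinning a simulated Poisson process with probability p; the resulting counts should again have mean = variance = p·λ·t. A sketch (plain Python; λ, p, and t are illustrative choices):

```python
import random

def split_poisson(lam, p, t, n=40_000, seed=6):
    """Randomly route each event of a Poisson process to a
    sub-process with probability p; the sub-process counts should
    again have mean = variance = p*lam*t (Poisson splitting)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        c, clock = 0, rng.expovariate(lam)
        while clock <= t:              # walk through events in (0, t]
            if rng.random() < p:       # keep this event with probability p
                c += 1
            clock += rng.expovariate(lam)
        counts.append(c)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / n
    return mean, var

mean, var = split_poisson(lam=5.0, p=0.3, t=2.0)
# both should be close to p*lam*t = 3.0
```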

3.6.3  Uniform distribution – a conditional property

In Sec. 3.5 we have seen that a uniform distribution in a very large interval corresponds to a Poisson process. The inverse property is also valid (proof left out):


Theorem 3.3: If for a Poisson process we have n arrivals within an interval of duration t, then these arrivals are uniformly distributed within this interval.

The length of this interval can itself be a random variable if it is independent of the Poisson process. This is for example the case in traffic measurements with variable measuring intervals (Chap. 13). This can be shown both from the Poisson distribution (number representation) and from the exponential distribution (interval representation).
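Theorem 3.3 can be checked by pooling the arrival epochs of many simulated Poisson processes on (0, t); the pooled epochs should behave like uniform variates on (0, t). A sketch (plain Python; λ and t are illustrative choices):

```python
import random

def arrival_epochs(lam, t, n=20_000, seed=7):
    """Collect all arrival epochs of n Poisson processes on (0, t);
    by Theorem 3.3 they should look uniform on (0, t)."""
    rng = random.Random(seed)
    epochs = []
    for _ in range(n):
        clock = rng.expovariate(lam)
        while clock <= t:
            epochs.append(clock)
            clock += rng.expovariate(lam)
    mean = sum(epochs) / len(epochs)
    below_half = sum(1 for x in epochs if x < t / 2) / len(epochs)
    return mean, below_half

mean, below_half = arrival_epochs(lam=3.0, t=4.0)
# mean should be close to t/2 = 2.0, and about half the epochs
# should fall in (0, t/2)
```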

3.7  Generalization of the stationary Poisson process

The Poisson process has been generalized in many ways. In this section we only consider the interrupted Poisson process, but further generalizations are MMPP (Markov Modulated Poisson Processes) and MAP (Markov Arrival Processes).

3.7.1  Interrupted Poisson process (IPP)

Due to its lack of memory the Poisson process is very easy to apply. In some cases, however, the Poisson process is not flexible enough to describe a real arrival process, as it has only one parameter. Kuczura (1973 [78]) proposed a generalization which has been widely used. The idea of the generalization comes from the overflow problem (Fig. 3.8 & Sec. 6.4). Customers arriving at the system will first try to be served by a primary system with limited capacity (n servers). If the primary system is busy, then the arriving customers will be served by the overflow system. Arriving customers are routed to the overflow system only when the primary system is busy. During the busy periods customers arrive at the overflow system according to the Poisson process with intensity λ. During the non-busy periods no calls arrive to the overflow system, i.e. the arrival intensity is zero. Thus we can consider the arrival process to the overflow system as a Poisson process which is either on or off (Fig. 3.9). As a simplified model to describe these on (off) intervals, Kuczura used exponentially distributed time intervals with intensities γ and ω. He showed that this corresponds to hyper-exponentially distributed inter-arrival times to the overflow link, which are illustrated by a phase diagram in Fig. 3.10. It can be shown that the parameters are related as follows:

    λ = p λ₁ + (1−p) λ₂ ,

    λ γ = λ₁ λ₂ ,                                                   (3.48)

    λ + γ + ω = λ₁ + λ₂ .

Because a hyper-exponential distribution with two phases can be transformed into a Cox-2 distribution (Sec. 2.3.3), the IPP arrival process is a Cox-2 arrival process as shown in


Figure 3.8: Overflow system with Poisson arrival process (intensity λ). Normally, calls arrive to the primary group. During periods when all n trunks in the primary group are busy, all calls are offered to the overflow group.



Figure 3.9: Illustration of the interrupted Poisson process (IPP) (cf. Fig. (3.8)). The position of the switch is controlled by a two-state Markov process.
Figure 3.10: The interrupted Poisson process is equivalent to a hyper-exponential arrival process (3.48).

Fig. 2.13. We have three parameters available, whereas the Poisson process has only one parameter. This makes it more flexible for modelling empirical data.
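The relations (3.48) determine (p, λ₁, λ₂) from (λ, γ, ω): λ₁ and λ₂ are the roots of x² − (λ+γ+ω)x + λγ = 0, and p follows from the mixing condition. A sketch (plain Python; the numeric parameter values are arbitrary illustrations):

```python
import math

def ipp_to_h2(lam, gamma, omega):
    """Convert IPP parameters to the equivalent hyper-exponential
    (H2) inter-arrival parameters (p, lam1, lam2) via (3.48):

        lam1 + lam2 = lam + gamma + omega
        lam1 * lam2 = lam * gamma
        lam         = p*lam1 + (1 - p)*lam2

    lam1, lam2 are roots of x^2 - (lam+gamma+omega)x + lam*gamma = 0;
    the discriminant is >= (lam - gamma)^2 >= 0, so both roots are real.
    """
    s = lam + gamma + omega
    d = math.sqrt(s * s - 4.0 * lam * gamma)
    lam1 = (s + d) / 2.0
    lam2 = (s - d) / 2.0
    p = (lam - lam2) / (lam1 - lam2)
    return p, lam1, lam2

p, lam1, lam2 = ipp_to_h2(lam=2.0, gamma=0.5, omega=1.5)
# substituting back into the three relations (3.48) must reproduce
# lam, gamma and omega
```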

3.7.2  Batched Poisson process

We consider an arrival process where events occur according to a Poisson process with rate λ. At each event a batch of calls (packets, jobs) arrives simultaneously. The distribution of the batch size is in the general case a discrete distribution p(i), (i = 1, 2, . . .). The batch size is at least one. In the Poisson arrival process the batch size is always one. We choose the simplest case, where the distribution is a geometric distribution (Tab. 3.1):

    p(i) = p (1−p)^{i−1} ,   i = 1, 2, . . . ,                      (3.49)

    m₁ = 1/p ,                                                      (3.50)

    σ² = (1−p)/p² .                                                 (3.51)

The number of events during a time interval t then becomes a stochastic sum (Sec. 2.3.3), where N (2.76) is a Poisson distribution with mean value and variance λt, and T (2.77) is the geometric distribution given above. The mean value of the number of events during a time interval t is (2.82):

    m_{1,s} = λ t · (1/p) ,                                         (3.52)

and the variance is (2.84):

    σ_s² = λ t · { (1/p)² + (1−p)/p² } = λ t · (2−p)/p² .           (3.53)

The index of dispersion of counts (3.11) becomes:

    IDC = σ_s² / m_{1,s} = (2−p)/p .                                (3.54)

For p = 1 the geometric distribution always takes the value one and we get a Poisson process. For p < 1 the process is more bursty than the Poisson process.
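The burstiness result (3.54) can be verified by simulating the batched Poisson process and estimating the IDC of the counts; for geometric batches it should approach (2−p)/p. A sketch (plain Python; λ, p, and t are illustrative choices):

```python
import random

def batched_poisson_idc(lam, p, t, n=30_000, seed=9):
    """Events arrive as a Poisson process (rate lam); each event
    carries a geometrically distributed batch (parameter p).
    Estimate the index of dispersion of counts, which should be
    (2 - p)/p by (3.54)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        total, clock = 0, rng.expovariate(lam)
        while clock <= t:
            batch = 1                      # geometric batch on {1, 2, ...}
            while rng.random() >= p:
                batch += 1
            total += batch
            clock += rng.expovariate(lam)
        totals.append(total)
    mean = sum(totals) / n
    var = sum((x - mean) ** 2 for x in totals) / n
    return var / mean

idc = batched_poisson_idc(lam=2.0, p=0.5, t=5.0)
# theory (3.54): (2 - 0.5)/0.5 = 3.0
```

Setting p = 1 forces every batch to size one and the estimate returns to the Poisson value IDC = 1.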


Chapter 4

Erlang's loss system and B-formula
In this and the following chapters we consider the classical teletraffic theory developed by Erlang (Denmark), Engset (Norway) and Fry & Molina (USA). It has successfully been applied for more than 80 years. In this chapter we consider the fundamental Erlang-B formula. In Sec. 4.1 we specify the assumptions for the model. Sec. 4.2 deals with infinite capacity, which results in a Poisson distributed number of busy channels. In Sec. 4.3 we consider a limited number of channels and obtain the truncated Poisson distribution and Erlang's B-formula. Sec. 4.4 describes a standard procedure for dealing with state transition diagrams (STD), which are the key to classical teletraffic theory. We also derive an accurate recursive formula for numerical evaluation of Erlang's B-formula (Sec. 4.5). In Sec. 4.6 properties of Erlang's B-formula are studied: we consider a non-integral number of channels, insensitivity, derivatives, inverse formulae, and approximations. Sec. 4.7 considers the Blocked Calls Held model, which is useful for many applications. Finally, in Sec. 4.8 we study the basic principles of dimensioning, where we balance Grade-of-Service (GoS) against costs of the system.

4.1  Introduction

Erlang's B-formula is based on the following model, described by the three elements structure, strategy, and traffic (Fig. 1.1):

a. Structure: We consider a system of n identical channels (servers, trunks, slots) working in parallel. This is called a homogeneous group.

b. Strategy: A call arriving at the system is accepted for service if at least one channel is idle. Every call needs one and only one channel. We say the group has full accessibility. Often the term full availability is used, but this terminology will only be used in connection with reliability and dependability. If all channels are busy the system

is congested and call attempts are blocked. A blocked (= rejected, lost) call attempt disappears without any after-effect, as it may be accepted by an alternative route. This strategy is the most important one and has been applied with success for many years. This is called Erlang's loss model or the Blocked Calls Cleared = BCC model. Usually, we assume that the service time is independent of both the arrival process and other service times.

Within a fully accessible group we may look for an idle channel in different ways:

Random hunting: we choose a random channel among the idle channels. On average every channel will carry the same traffic.

Ordered hunting: the channels are numbered 1, 2, . . . , n, and we search for an idle channel in this order, always starting with channel one (ordered hunting with homing). This is also called sequential hunting. A channel will on the average carry more traffic than the following channels.

Cyclic hunting: this is similar to ordered hunting, but without homing. We continue hunting for an idle channel starting from the position where we ended last time. Also in this case every channel will on the average carry the same traffic.

The hunting takes place momentarily. If all channels are busy a call attempt is blocked. The blocking probability is independent of the hunting mode.

c. Traffic: In the following we assume that the arrival process is a Poisson process with rate λ, and that the service times are exponentially distributed with intensity μ (corresponding to a mean value 1/μ). This type of traffic is called Pure Chance Traffic type One, PCT-I. The traffic process then becomes a pure birth and death process, a simple Markov process which is easy to deal with mathematically.

Definition of offered traffic: We define the offered traffic as the traffic carried when the number of channels is infinite (1.2). In Erlang's loss model with Poisson arrival process this definition of offered traffic is equivalent to the average number of call attempts per mean holding time:

    A = λ · (1/μ) = λ/μ .                                           (4.1)

Scenarios: We consider two cases:

1. n = ∞: Poisson distribution (Sec. 4.2),
2. n < ∞: truncated Poisson distribution (Sec. 4.3).

Insensitivity: We shall later see that this model is insensitive to the holding time distribution, i.e. only the mean holding time is of importance for the state probabilities. The type of distribution has no importance for the state probabilities.


Performance measures: The most important grade-of-service measures for loss systems are time congestion E, call congestion B, and traffic (load) congestion C, as described in Sec. 1.9. They are identical for Erlang's loss model because of the Poisson arrival process (PASTA property: Poisson Arrivals See Time Averages).

4.2  Poisson distribution

We assume the arrival process is a Poisson process and that the holding times are exponentially distributed, i.e. we consider PCT-I traffic. The number of channels is assumed to be infinite, so we never observe congestion (blocking).

4.2.1  State transition diagram

Figure 4.1: The Poisson distribution. State transition diagram for a system with infinitely many channels, Poisson arrival process (λ), and exponentially distributed holding times (μ).

We define the state of the system, [ i ], as the number of busy channels i (i = 0, 1, 2, . . .). In Fig. 4.1 all states of the system are shown as circles, and the rates by which the traffic process changes from one state to another state are shown upon the arcs of arrows between the states. As the process is simple (Sec. 3.2.3), we only have transitions to neighboring states. If we assume the system is in statistical equilibrium, then the system will be in state [ i ] the proportion of time p(i), where p(i) is the probability of observing the system in state [ i ] at a random point of time, i.e. a time average. When the process is in state [ i ] it will jump to state [ i+1 ] λ times per time unit and to state [ i−1 ] iμ times per time unit. Of course, the process will leave state [ i ] at the moment there is a state transition. When i channels are busy, each channel will terminate calls with rate μ, so that the total service rate is iμ (Palm's theorem 3.1). The future development of the traffic process only depends upon the present state, not upon how the process came to this state (the Markov property).

The equations describing the states of the system under the assumption of statistical equilibrium can be set up in two ways, which both are based on the principle of global balance:

a. Node equations: In statistical equilibrium the number of transitions per time unit into state [ i ] equals

the number of transitions out of state [ i ]. The equilibrium state probability p(i) denotes the proportion of time (total time per time unit) the process spends in state [ i ]. The average number of jumps from state [ 0 ] to state [ 1 ] is λ p(0) per time unit, and the average number of jumps from state [ 1 ] to state [ 0 ] is μ p(1) per time unit. Thus we have for state i = 0:

    λ p(0) = μ p(1) ,    i = 0 .                                    (4.2)

For state i > 0 we get the following equilibrium or balance equation:

    λ p(i−1) + (i+1) μ p(i+1) = (λ + iμ) p(i) ,    i > 0 .    (4.3)

Node equations are always applicable, also for state transition diagrams in more dimensions, which we will consider in later chapters.

b. Cut equations
In many cases we may exploit a simple structure of the state transition diagram. If for example we put a fictitious cut between the states [ i−1 ] and [ i ] (corresponding to a global cut around the states [ 0 ], [ 1 ], . . . , [ i−1 ]), then in statistical equilibrium the traffic process changes from state [ i−1 ] to [ i ] the same number of times as it changes from state [ i ] to [ i−1 ]. In statistical equilibrium we thus have per time unit:

    λ p(i−1) = iμ p(i) ,    i = 1, 2, . . . .    (4.4)

Cut equations are easy to apply for one-dimensional state transition diagrams, whereas node equations are applicable for any diagram.

As the system always will be in some state, we have the normalization restriction:

    Σ_{i=0}^{∞} p(i) = 1 ,    p(i) ≥ 0 .    (4.5)

We notice that node equations (4.3) involve three state probabilities, whereas cut equations (4.4) only involve two. Therefore, it is easier to solve the cut equations. A loss system will always be able to enter statistical equilibrium because we have a limited number of states. We do not specify the mathematical conditions for statistical equilibrium in this chapter.


4.2.2 Derivation of state probabilities

For one-dimensional state transition diagrams the application of cut equations is the most appropriate approach. From Fig. 4.1 we get the following balance equations:

    λ p(0) = μ p(1) ,
    λ p(1) = 2μ p(2) ,
      ...
    λ p(i−2) = (i−1) μ p(i−1) ,
    λ p(i−1) = iμ p(i) ,
    λ p(i) = (i+1) μ p(i+1) ,
      ...

Expressing all state probabilities by p(0) and introducing the offered traffic A = λ/μ we get:

    p(1) = A p(0) ,
    p(2) = (A/2) p(1) = (A²/2) p(0) ,
      ...
    p(i−1) = (A/(i−1)) p(i−2) = (A^{i−1}/(i−1)!) p(0) ,
    p(i) = (A/i) p(i−1) = (A^{i}/i!) p(0) ,
    p(i+1) = (A/(i+1)) p(i) = (A^{i+1}/(i+1)!) p(0) ,
      ...

The normalization constraint (4.5) implies:

    1 = Σ_{j=0}^{∞} p(j)
      = p(0) · { 1 + A + A²/2! + · · · + A^{i}/i! + · · · }
      = p(0) · e^{A} ,

    p(0) = e^{−A} .

Thus the state probabilities become Poisson distributed:

    p(i) = (A^{i}/i!) · e^{−A} ,    i = 0, 1, 2, . . . .    (4.6)

The number of busy channels at a random point of time is thus Poisson distributed with both mean value (3.37) and variance (3.38) equal to the offered traffic A. We have earlier shown that the number of calls in a fixed time interval also is Poisson distributed (3.36). Thus the Poisson distribution is valid both in time and in space. We would, of course, obtain the same solution by using node equations.
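As a quick numerical check of (4.6), the following Python sketch (the helper name is our own, not from the book) computes the state probabilities and verifies that mean and variance of the number of busy channels both equal the offered traffic A:

```python
from math import exp, factorial

def poisson_state_probs(A, i_max):
    # p(i) = (A^i / i!) * e^(-A), eq. (4.6); truncated at i_max, which
    # must be large enough that the neglected tail is negligible.
    return [A**i / factorial(i) * exp(-A) for i in range(i_max + 1)]

p = poisson_state_probs(2.0, 100)               # offered traffic A = 2 erlang
mean = sum(i * pi for i, pi in enumerate(p))    # should equal A, cf. (3.37)
var = sum(i * i * pi for i, pi in enumerate(p)) - mean**2   # should equal A, cf. (3.38)
```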

4.2.3 Traffic characteristics of the Poisson distribution

From a dimensioning point of view, the system with unlimited capacity is of little interest in practice. The traffic characteristics of this system become:

    Time congestion:      E = 0 ,
    Call congestion:      B = 0 ,
    Carried traffic:      Y = Σ_{i=1}^{∞} i p(i) = A ,
    Lost traffic:         Aℓ = A − Y = 0 ,
    Traffic congestion:   C = 0 .

Only ordered hunting makes sense in this case, and the traffic carried by the i'th channel is later given in (4.14).

Peakedness Z is defined as the ratio between variance and mean value of the distribution of state probabilities (cf. IDC, Index of Dispersion of Counts (3.11)). For the Poisson distribution we find (3.37) & (3.38):

    Z = σ²/m₁ = A/A = 1 .    (4.7)

The peakedness has dimension [number of channels] and is different from the coefficient of variation, which has no dimension (2.12).

Duration of state [ i ]: In state [ i ] the process has the total intensity (λ + iμ) away from the state. Therefore, the time until the first transition (state transition to either [ i+1 ] or [ i−1 ]) is exponentially distributed (Sec. 2.2.7):

    f_i(t) = (λ + iμ) · e^{−(λ + iμ) t} ,    t ≥ 0 .


Example 4.2.1: Simple Aloha protocol
In example 3.5.2 we considered the slotted Aloha protocol, where the time axis was divided into time slots. We now consider the same protocol in continuous time. We assume that packets arrive according to a Poisson process and that they are of constant length h. The system corresponds to the traffic case resulting in a Poisson distribution, which can be shown to be valid also for constant holding times. The state probabilities are Poisson distributed (4.6) with A = λh. A packet is only transmitted correctly if: (a) the system is in state [ 0 ] at the arrival time, and (b) no other packets arrive during the service time h. We find:

    p_correct = p(0) · e^{−λh} = e^{−2A} .

The traffic transmitted correctly thus becomes:

    A_correct = A · p_correct = A · e^{−2A} .

This is the proportion of the time axis which is utilized efficiently. It has an optimum for λh = A = 1/2, where the derivative with respect to A equals zero:

    ∂A_correct/∂A = e^{−2A} (1 − 2A) = 0 ,    max{A_correct} = 1/(2e) = 0.1839 .    (4.8)

We thus obtain a maximum utilization equal to 0.1839 when we offer 0.5 erlang. This is half the value we obtained for a slotted system by synchronizing the satellite transmitters. The models are compared in Fig. 3.6. □
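The optimum is easy to verify numerically; a minimal Python sketch (the function name is our own):

```python
from math import exp

def aloha_throughput(A):
    # Traffic transmitted correctly on the unslotted Aloha channel:
    # A_correct = A * e^(-2A).
    return A * exp(-2.0 * A)

peak = aloha_throughput(0.5)   # the optimum at A = 1/2 erlang: 1/(2e) = 0.1839
```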

4.3 Truncated Poisson distribution

We still assume Pure Chance Traffic Type I (PCT-I) as in Sec. 4.2. The number of channels is now limited so that n is finite. The number of states becomes n+1, and the state transition diagram is shown in Fig. 4.2.
Figure 4.2: The truncated Poisson distribution. State transition diagram for a system with a limited number of channels (n), Poisson arrival process (λ), and exponential service times (μ).


4.3.1 State probabilities

We get cut equations similar to those for the Poisson case, but the state space is limited to {0, 1, . . . , n}, and the normalization condition (4.5) now becomes:

    p(0) = { Σ_{j=0}^{n} A^{j}/j! }^{−1} .

We get the so-called truncated Poisson distribution:

    p(i) = (A^{i}/i!) / ( Σ_{j=0}^{n} A^{j}/j! ) ,    0 ≤ i ≤ n .    (4.9)

The name truncated means cut off and is due to the fact that the solution may be interpreted as a conditional Poisson distribution p(i | i ≤ n). This is seen by multiplying both numerator and denominator by e^{−A}. It is not a trivial fact that we are allowed to truncate the Poisson distribution so that the relative ratios between the state probabilities are unchanged.
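For moderate n, (4.9) can be evaluated directly; a minimal Python sketch (the function name is our own):

```python
from math import factorial

def truncated_poisson(A, n):
    # Truncated Poisson distribution, eq. (4.9): relative values A^i/i!
    # normalized over the finite state space {0, 1, ..., n}.
    q = [A**i / factorial(i) for i in range(n + 1)]
    Q = sum(q)
    return [qi / Q for qi in q]

p = truncated_poisson(2.0, 6)
E = p[6]   # time congestion, i.e. Erlang's B-formula E_6(2)
```

For large n or A, the recursive evaluation in Sec. 4.5 should be preferred, as it avoids overflow in A^n and n!.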

4.3.2 Traffic characteristics of Erlang's B-formula

Knowing the state probabilities, we are able to find all performance measures defined by state probabilities.

Time congestion: The probability that all n channels are busy at a random point of time is equal to the proportion of time all channels are busy (time average). This is obtained from (4.9) for i = n:

    E_n(A) = p(n) = (A^{n}/n!) / ( 1 + A + A²/2! + · · · + A^{n}/n! ) .    (4.10)

This is Erlang's famous B-formula (1917, [12]). It is denoted by E_n(A) = E_{1,n}(A), where index one refers to the alternative name Erlang's first formula.

Call congestion: The probability that a random call attempt will be lost is equal to the proportion of call attempts blocked. If we consider one time unit, we find by summation over all possible states:

    B_n(A) = λ p(n) / ( Σ_{ν=0}^{n} λ p(ν) ) = p(n) = E_n(A) .    (4.11)


The denominator is the average number of call attempts per time unit, and the numerator is the average number of blocked calls per time unit.

Carried traffic: If we use the cut equation between states [ i−1 ] and [ i ] we get:

    Y_n(A) = Σ_{i=1}^{n} i p(i) = Σ_{i=1}^{n} (λ/μ) p(i−1) = A {1 − p(n)} ,

    Y_n(A) = A {1 − E_n(A)} ,    (4.12)

where A is the offered traffic. The carried traffic will be less than both A and n.

Lost traffic:

    Aℓ = A − Y_n(A) = A · E_n(A) ,    0 ≤ Aℓ < A .

Traffic congestion:

    C_n(A) = (A − Y)/A = E_n(A) ,    0 ≤ Y < n .

We thus have E = B = C because the arrival intensity is independent of the state. This is called the PASTA property, Poisson Arrivals See Time Averages, which is valid for all systems with Poisson arrival processes. In all other cases at least two of the three congestion measures will be different. Erlang's B-formula is shown graphically in Fig. 4.3 for some selected values of the parameters.

Traffic carried by the i'th channel (utilization y_i of channel i):

1. Random hunting and cyclic hunting: In this case all channels on the average carry the same traffic. The total carried traffic is independent of the hunting strategy and we find the utilization:

    y_i = y = Y/n = A {1 − E_n(A)} / n .    (4.13)

This function is shown in Fig. 4.4. We observe that for a given congestion E we obtain the highest utilization for large channel groups (economy of scale).

2. Ordered hunting = sequential hunting: The traffic carried by channel i is the difference between the traffic lost from i−1 channels and the traffic lost from i channels:

    y_i = A {E_{i−1}(A) − E_i(A)} .    (4.14)

It should be noticed that the traffic carried by channel i is independent of the number of channels after i in the hunting order. Thus channels after channel i have no influence upon the traffic carried by channel i. There is no feed-back. As the total carried traffic is independent of the hunting mode we have:

    Y = Σ_{i=1}^{n} y_i .
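The per-channel traffics (4.13)–(4.14) are easily tabulated by building up E_i(A) with the recursion derived later in (4.29); a Python sketch (the function name is our own):

```python
def sequential_channel_traffic(A, n):
    # y_i = A * (E_{i-1}(A) - E_i(A)), eq. (4.14): traffic carried by
    # channel i under ordered (sequential) hunting.
    y, E_prev = [], 1.0                      # E_0(A) = 1
    for i in range(1, n + 1):
        E = A * E_prev / (i + A * E_prev)    # recursion (4.29)
        y.append(A * (E_prev - E))
        E_prev = E
    return y

y = sequential_channel_traffic(2.0, 6)
Y = sum(y)   # total carried traffic, Y = A * (1 - E_n(A)), eq. (4.12)
```

Note how the first channels carry most of the traffic, while the later ones mainly absorb peaks.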

Improvement function: This denotes the increase in carried traffic when the number of channels is increased by one from n to n+1:

    F_n(A) = Y_{n+1} − Y_n    (4.15)
           = A {1 − E_{n+1}} − A {1 − E_n} ,

    F_n(A) = A {E_n(A) − E_{n+1}(A)} .    (4.16)

We have that 0 ≤ F_n(A) < 1, as one channel at most can carry one erlang. The improvement function F_n(A) is tabulated in Moe's Principle (Arne Jensen, 1950 [58]) and shown in Fig. 4.5. In Sec. 4.8.2 we consider the application of this principle for optimal economic dimensioning.

Peakedness: This is defined as the ratio between the variance and the mean value of the distribution of the number of busy channels, cf. IDC (3.11). For the truncated Poisson distribution it can be shown that:

    Z = Z_n(A) = σ²/m = 1 − A {E_{n−1}(A) − E_n(A)} = 1 − y_n ,    (4.17)

where we have used (4.14). The dimension is [channels]. In a group with ordered hunting we may thus estimate the peakedness from observation of the traffic carried by the last channel.

Duration of state [ i ]: The total intensity for leaving state [ i ] is equal to (λ + iμ), and therefore the duration of the time in state [ i ] (sojourn time) is exponentially distributed with probability density function (pdf):

    f_i(t) = (λ + iμ) · e^{−(λ + iμ) t} ,    0 ≤ i < n ,
    f_n(t) = (nμ) · e^{−(nμ) t} ,            i = n .    (4.18)

The fundamental assumption for the validity of Erlang's B-formula is the Poisson arrival process. According to Palm's theorem this is fulfilled in ordinary telephone systems with many independent subscribers. As the state probabilities are independent of the holding time distribution, the model is very robust. The combined arrival process and service time process are described by a single parameter A. This explains the wide application of the B-formula both in the past and today.
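The improvement function and the peakedness only require the sequence E_x(A); a Python sketch (helper names are our own):

```python
def erlang_b_list(A, n_max):
    # E_x(A) for x = 0, 1, ..., n_max, built up by the recursion (4.29).
    E, out = 1.0, [1.0]
    for x in range(1, n_max + 1):
        E = A * E / (x + A * E)
        out.append(E)
    return out

E = erlang_b_list(2.0, 7)
F6 = 2.0 * (E[6] - E[7])        # improvement function F_6(2), eq. (4.16)
Z6 = 1.0 - 2.0 * (E[5] - E[6])  # peakedness Z_6(2) = 1 - y_6, eq. (4.17)
```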


Figure 4.3: Blocking probability E_n(A) as a function of the offered traffic A for various values of the number of channels n (4.9).


Figure 4.4: The average utilization per channel y (4.13) as a function of the number of channels n for given values of the congestion E.


Figure 4.5: Improvement function F_n(A) (4.16) of Erlang's B-formula. By sequential hunting F_n(A) equals the traffic y_{n+1} carried on channel number (n+1).


4.4 General procedure for state transition diagrams

The most important tool in teletraffic theory is formulation and solution of models by means of state transition diagrams. From the previous sections we identify the following standard procedure for dealing with state transition diagrams. It consists of a number of steps and is formulated in general terms. The procedure is also applicable for multi-dimensional state transition diagrams, which we consider later. We always go through the following steps:

a. Construction of the state transition diagram.
   - Define the states of the system in a unique way,
   - Draw the states as circles,
   - Consider the states one at a time and draw all possible arrows for transitions away from the state due to:
     (a) the arrival process (new arrival or phase shift in the arrival process),
     (b) the departure (service) process (service time termination or phase shift).
   In this way we obtain the complete state transition diagram.

b. Set up the equations describing the system in equilibrium. If the conditions for statistical equilibrium are fulfilled, the steady state equations can be obtained from:
   - node equations (general),
   - cut equations.

c. Solve the balance equations assuming statistical equilibrium.
   - Express all state probabilities by for example the probability of state [ 0 ], p(0).
   - Find p(0) by normalization.

d. Calculate the performance measures expressed by the state probabilities.

For small values of n we let the non-normalized value of the state probability q(0) equal to one, and then calculate the relative values q(i), (i = 1, 2, . . .). By normalizing we then find:

    p(i) = q(i)/Q_n ,    i = 0, 1, . . . , n ,    (4.19)

where

    Q_n = Σ_{ν=0}^{n} q(ν) .    (4.20)

The time congestion becomes:

    p(n) = q(n)/Q_n = 1 − Q_{n−1}/Q_n .    (4.21)

For large values of n we should use the procedure described below.


4.4.1 Recursion formula

If q(i) becomes very large (e.g. 10^{10}), then we may as an intermediate normalization multiply all q(i) by the same constant (e.g. 10^{−10}) as we know that all probabilities are within the interval [0, 1]. In this way we avoid numerical problems. If q(i) becomes very small, then we may truncate the state space as the density function of p(i) often will be bell-shaped (unimodal) and therefore has a maximum. In many cases we are theoretically able to control the error introduced by truncating the state space (Stepanov, 1989 [109]).

We may normalize the state probabilities after each step, which implies more calculations, but ensures a higher accuracy. Let the normalized state probabilities for a system with x−1 channels be given by:

    P_{x−1} = {p_{x−1}(0), p_{x−1}(1), . . . , p_{x−1}(x−2), p_{x−1}(x−1)} ,    x = 1, 2, . . . ,    (4.22)

where index (x−1) indicates that we consider state probabilities for a system with (x−1) channels. Let us assume we have the following recursion formula for obtaining q_x(x) from r previous state probabilities (often r = 1):

    q_x(x) = f( p_{x−1}(x−1), p_{x−1}(x−2), . . . , p_{x−1}(x−r) ) ,    x = 1, 2, . . . ,    (4.23)

where q_x(x) will be a relative (non-normalized) state probability. We know the normalized state probabilities for (x−1) channels (4.22), and we want to find the normalized state probabilities for a system with x channels. The relative values of the state probabilities do not change when we increase the number of channels by one, so we get:

    q_x(i) = { p_{x−1}(i) ,           i = 0, 1, 2, . . . , x−1 ,
             { q_x(x) from (4.23) ,   i = x .    (4.24)

The new normalization constant becomes:

    Q_x = Σ_{i=0}^{x} q_x(i) = 1 + q_x(x) ,

because in the previous step we normalized the state probabilities ranging from 0 to x−1 so they add to one. We thus get:

    p_x(i) = { p_{x−1}(i) / (1 + q_x(x)) ,   i = 0, 1, 2, . . . , x−1 ,
             { q_x(x) / (1 + q_x(x)) ,       i = x .    (4.25)

The initial value for the recursion is given by p_0(0) = 1. The recursion algorithm thus starts with this value, and we find the state probabilities of a system with one channel more by (4.24) and (4.25). The recursion is numerically very stable because in (4.25) we divide by a number greater than one.
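For PCT-I traffic the cut equation gives q_x(x) = (A/x) · p_{x−1}(x−1), and the stepwise-normalized recursion (4.24)–(4.25) then reads, as a Python sketch (the function name is our own):

```python
def normalized_recursion(A, n):
    # State probabilities of Erlang's loss system, adding one channel at
    # a time and renormalizing after each step, eqs. (4.24)-(4.25).
    p = [1.0]                            # initial value p_0(0) = 1
    for x in range(1, n + 1):
        qx = (A / x) * p[-1]             # cut equation: q_x(x) = (A/x) p_{x-1}(x-1)
        p = [pi / (1.0 + qx) for pi in p]
        p.append(qx / (1.0 + qx))        # the new state x, eq. (4.25)
    return p

p = normalized_recursion(2.0, 6)         # p[6] = E_6(2), cf. Example 4.5.1
```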


Example 4.4.1: Calculating probabilities of the Poisson distribution
We may calculate the Poisson distribution (4.6) by the above approach by starting with class zero and stopping at a state i where for example q(i) < 10^{−10}. If we want to calculate the Poisson distribution for very large mean values m₁ = A = λ/μ, then we may start with class m by letting q(m) = 1, where m is equal to the integral part of (m₁ + 1). The relative values of q(i) for both decreasing values (i = m−1, m−2, . . . , 0) and for increasing values (i = m+1, m+2, . . .) will then be decreasing, and we may stop when for example q(i) < 10^{−10} for increasing, respectively decreasing, values (or when i = 0). We normalize the state probabilities in each step. In this way we avoid calculating many classes with state probability less than 10^{−10}, and we also avoid problems with underflow and overflow. □
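The start-near-the-mode idea of the example can be sketched as follows; for brevity this version normalizes once at the end instead of in each step, and starts at the integral part of m₁:

```python
def poisson_large_mean(m1, eps=1e-10):
    # Poisson probabilities for a (possibly large) mean m1: start near the
    # mode with relative value 1, work downwards and upwards until the
    # relative values fall below eps, then normalize.  No factorials or
    # powers are formed, so there is no overflow or underflow.
    m = int(m1)
    q = {m: 1.0}
    i, qi = m, 1.0
    while i > 0 and qi >= eps:        # downwards: q(i-1) = q(i) * i / m1
        qi *= i / m1
        i -= 1
        q[i] = qi
    i, qi = m, 1.0
    while qi >= eps:                  # upwards: q(i+1) = q(i) * m1 / (i+1)
        qi *= m1 / (i + 1)
        i += 1
        q[i] = qi
    Q = sum(q.values())
    return {k: v / Q for k, v in q.items()}

probs = poisson_large_mean(1000.0)
```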

Above we calculate all state probabilities. To calculate the time congestion for a loss system we need only store the latest state probability. Let us consider a system with a simple birth and death traffic process with arrival rate λ_i and departure rate μ_i in state i. Then q_x(x) only depends on the previous state probability. By using the cut equation we get the following recursion formula:

    q_x(x) = (λ_{x-1} / (x·μ)) · p_{x-1}(x−1) = (λ_{x-1} / (x·μ)) · E_{x-1} .        (4.26)

The time congestion for x channels is E_x = p_x(x). Inserting (4.26) into (4.25) we get a simple recursive formula for the time congestion:

    E_x = q_x(x) / (1 + q_x(x)) = λ_{x-1}·E_{x-1} / (x·μ + λ_{x-1}·E_{x-1}) ,   E_0 = 1 .        (4.27)

Introducing the inverse time congestion probability I_x = E_x^{−1} we get:

    I_x = 1 + (x·μ / λ_{x-1}) · I_{x-1} ,   I_0 = 1 .        (4.28)

This is a general recursion formula for calculating the time congestion of all systems with state-dependent arrival rates λ_i and homogeneous servers.
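The recursion (4.28) can be sketched as follows (illustrative Python, not from the book; `arrival` is a user-supplied function giving the state-dependent arrival rate λ_i):

```python
def time_congestion(arrival, mu, n):
    # Inverse recursion (4.28): I_x = 1 + (x*mu / arrival(x-1)) * I_{x-1}, I_0 = 1.
    I = 1.0
    for x in range(1, n + 1):
        I = 1.0 + x * mu / arrival(x - 1) * I
    return 1.0 / I                             # time congestion E_n

# A constant arrival rate reduces to Erlang's B-formula: E_6(2) = 4/331.
E = time_congestion(lambda i: 2.0, 1.0, 6)
```

Passing for example `arrival = lambda i: (S - i) * gamma` (a hypothetical source model with S sources) would give the time congestion of a state-dependent arrival process with the same code.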

4.5 Evaluation of Erlang's B-formula

For numerical calculations the formula (4.10) is not very appropriate, since both n! and A^n increase quickly so that overflow in the computer will occur. If we apply (4.27), then we get the recursion formula:

    E_x(A) = A·E_{x-1}(A) / (x + A·E_{x-1}(A)) ,   E_0(A) = 1 .        (4.29)

From a manual calculation point of view, the inverse linear form (4.28) may be simpler:

    I_x(A) = 1 + (x/A) · I_{x-1}(A) ,   I_0(A) = 1 ,        (4.30)

where I_n(A) = 1/E_n(A). This recursion formula is exact, and even for large values of (n, A) there are no round-off errors. It is the basic formula for numerous tables of the Erlang B-formula, i.a. the classical table (Palm, 1947 [94]). For very large values of n there are more efficient algorithms. Notice that a recursive formula which is accurate for increasing index usually is inaccurate for decreasing index, and vice versa.

Example 4.5.1: Erlang's loss system
We consider an Erlang-B loss system with n = 6 channels, arrival rate λ = 2 calls per time unit, and departure rate μ = 1 departure per time unit, so that the offered traffic is A = 2 erlang. If we denote the non-normalized relative state probabilities by q(i), we get by setting up the state transition diagram the values shown in the following table:

      i    λ(i)   μ(i)     q(i)     p(i)   i·p(i)  λ(i)·p(i)
      0      2      0    1.0000   0.1360   0.0000   0.2719
      1      2      1    2.0000   0.2719   0.2719   0.5438
      2      2      2    2.0000   0.2719   0.5438   0.5438
      3      2      3    1.3333   0.1813   0.5438   0.3625
      4      2      4    0.6667   0.0906   0.3625   0.1813
      5      2      5    0.2667   0.0363   0.1813   0.0725
      6      2      6    0.0889   0.0121   0.0725   0.0242
    Total                7.3556   1.0000   1.9758   2.0000

We obtain the following blocking probabilities:

Time congestion:     E_6(2) = p(6) = 0.0121 .

Traffic congestion:  C_6(2) = (A − Y)/A = (2 − 1.9758)/2 = 0.0121 .

Call congestion:     B_6(2) = λ(6)·p(6) / Σ_{i=0}^{6} λ(i)·p(i) = 0.0242/2.0000 = 0.0121 .

We notice that E = B = C due to the PASTA property.


By applying the recursion formula (4.29) we of course obtain the same results:

    E_0(2) = 1 ,

    E_1(2) = (2·1) / (1 + 2·1) = 2/3 ,

    E_2(2) = (2·(2/3)) / (2 + 2·(2/3)) = 2/5 ,

    E_3(2) = (2·(2/5)) / (3 + 2·(2/5)) = 4/19 ,

    E_4(2) = (2·(4/19)) / (4 + 2·(4/19)) = 2/21 ,

    E_5(2) = (2·(2/21)) / (5 + 2·(2/21)) = 4/109 ,

    E_6(2) = (2·(4/109)) / (6 + 2·(4/109)) = 4/331 = 0.0121 .   □
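The hand computation above can be reproduced with exact rational arithmetic (an illustrative sketch; the use of Python's `fractions` module is my own choice):

```python
from fractions import Fraction

def erlang_b(A, n):
    # Recursion (4.29): E_x = A*E_{x-1} / (x + A*E_{x-1}), E_0 = 1.
    E = Fraction(1)
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

values = [erlang_b(Fraction(2), n) for n in range(7)]
# reproduces 1, 2/3, 2/5, 4/19, 2/21, 4/109, 4/331
```

With `Fraction` the intermediate results are exactly the fractions of the worked example; with floats the same recursion is numerically stable, as noted above.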

Example 4.5.2: Recursion formula for Erlang-B
The recursion formulæ (4.29) and (4.30) are numerically very stable. For larger values of the number of channels n, the initial value E_0(A) in (4.30) has only minor influence. For example, for A = 20 erlang and n = 10 channels we find with 6 decimals accuracy the same blocking probability independent of whether we start the iteration with the correct value E_0(A) = 1 or the erroneous value E_0(A) = 0. If we choose n = 20 channels, then the first eight decimals are the same. Errors are eliminated when we iterate with increasing n. On the other hand, the recursion formula becomes inaccurate if we iterate with decreasing n, because errors then accumulate. In general, if a recursion formula is accurate in one direction, then it will be inaccurate in the opposite direction. □

Example 4.5.3: Calculation of E_x(A) for large x
By recursive application of (4.30) we find the inverse blocking probability of the B-formula:

    I_x(A) = 1 + x/A + x(x−1)/A² + ... + x(x−1)···(x−j+1)/A^j + ... + x!/A^x

           = Σ_{j=0}^{x}  x! / ((x−j)! · A^j) .

For small values of the number of channels n we include all terms and get the exact value. For large values of n and A this formula can be applied for fast calculation of the B-formula, because we may truncate the sum when the terms of the summation become very small. This corresponds to using the general recursion formulæ (4.24) and (4.25) for calculating state probabilities (or inverse state probabilities) for decreasing x, starting with state n. We get the next term by multiplying the previous term by (x − j)/A and then normalizing. At some stage (x − j) < A and the terms start decreasing. We may truncate the summation after k + 1 terms when, for example, the last term is less than 10^{−10}. This can be done not only for I_x(A), but also for E_x(A). The truncation level depends on the required accuracy. In this way we avoid calculating many lower states and can control the accuracy. In Example 4.5.2 we were unable to control the accuracy. □
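A sketch of the truncated-sum evaluation, compared against the exact recursion (4.30) (illustrative code; function names and the stopping threshold are my own):

```python
def inverse_erlang_sum(x, A, eps=1e-10):
    # Truncated sum  I_x(A) = sum_{j=0}^{x} x! / ((x-j)! * A**j).
    term, total = 1.0, 1.0                     # j = 0 term
    for j in range(1, x + 1):
        term *= (x - j + 1) / A                # next term from the previous one
        total += term
        if (x - j) < A and term < eps * total:
            break                              # remaining terms are negligible
    return total

def inverse_erlang_rec(x, A):
    # Exact recursion (4.30) for comparison.
    I = 1.0
    for k in range(1, x + 1):
        I = 1.0 + k / A * I
    return I

I_sum = inverse_erlang_sum(100, 80.0)
I_rec = inverse_erlang_rec(100, 80.0)
```

For x = 100 channels and A = 80 erlang the truncated sum agrees with the recursion to better than the chosen threshold.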

4.6 Properties of Erlang's B-formula

4.6.1 Non-integral number of channels

For practical applications of Erlang's B-formula (e.g. Sec. 6.4) we need to generalize Erlang's B-formula to non-integral values of the number of channels x. We define Erlang's extended B-formula by:

    E_x(A) = A^x·e^{−A} / ∫_A^∞ t^x·e^{−t} dt        (4.31)

           = A^x·e^{−A} / Γ(x + 1, A) ,              (4.32)

where x and A are real numbers and A > 0. The incomplete gamma function is defined as:

    Γ(x, A) = ∫_A^∞ t^{x−1}·e^{−t} dt ,              (4.33)

where A is a non-negative real number and x is a real number, including negative values. The number of channels may be any positive or negative number; the recursion formula (4.29) is still valid. In Chap. 6 we shall see how we need to work with a negative and non-integral number of channels when evaluating overflow systems.


For integral values of x, which we denote by n, this can be rewritten as:

    ∫_A^∞ t^n·e^{−t} dt = ∫_0^∞ (t + A)^n·e^{−(t+A)} dt

                        = e^{−A} · Σ_{j=0}^{n} C(n, j)·A^j · ∫_0^∞ t^{n−j}·e^{−t} dt

                        = e^{−A} · Σ_{j=0}^{n} [n! / (j!·(n−j)!)] · A^j · (n−j)!

                        = n!·e^{−A} · Σ_{j=0}^{n} A^j/j! ,

which inserted in (4.31) yields:

    E_n(A) = A^n·e^{−A} / (n!·e^{−A} · Σ_{j=0}^{n} A^j/j!) = (A^n/n!) / (Σ_{j=0}^{n} A^j/j!) ,   q.e.d.

The recursion formula (4.29) will still be valid, as we have:

    Γ(x + 1, A) = A^x·e^{−A} + x·Γ(x, A) .        (4.34)
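As a numerical cross-check of (4.31) and (4.34), one can evaluate the incomplete gamma integral by simple quadrature and compare with the recursion (4.29) (an illustrative sketch; the cutoff and step count are my own assumptions, adequate for the small values used here):

```python
import math

def upper_gamma(x, A, cutoff=60.0, steps=200000):
    # Trapezoidal approximation of  Gamma(x+1, A) = int_A^inf t**x * exp(-t) dt.
    # The tail beyond A + cutoff is negligible for these parameter values.
    h = cutoff / steps
    f = lambda t: t ** x * math.exp(-t)
    s = 0.5 * (f(A) + f(A + cutoff))
    s += sum(f(A + i * h) for i in range(1, steps))
    return s * h

A, n = 3.0, 5
E_integral = A ** n * math.exp(-A) / upper_gamma(n, A)   # eq. (4.31)

E_rec = 1.0                                              # recursion (4.29)
for x in range(1, n + 1):
    E_rec = A * E_rec / (x + A * E_rec)
```

Both evaluations give E_5(3) ≈ 0.110, and the quadrature also satisfies the recurrence (4.34) within the integration error.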

Example 4.6.1: Erlang-B for non-integral number of channels
The recursion formula for Erlang-B (4.29) is valid for a non-integral number of channels. To calculate E_x(A) for any real value of x, we need to find the initial value E_{x}(A) for 0 < {x} < 1, where {x} is the fractional part of x. If we want to calculate E_x(A) for a large non-integral number of channels, then we will get the correct blocking probability by using the initial value E_{x}(A) = 1. For smaller values of x we may use an approximation given in Sec. 4.6.7. To get the exact blocking probability, we have to evaluate the incomplete gamma function in (4.32). □
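A sketch of this procedure (illustrative; `erlang_b_ext` is my own name, and the initial value E = 1 at the fractional part is the approximation of this example, so the result is only accurate for large x):

```python
def erlang_b_ext(A, x):
    # Recursion (4.29) started at the fractional part of x with the rough
    # initial value E_{frac}(A) = 1 (cf. Example 4.6.1); good for large x.
    ch = x - int(x)                            # fractional part {x}
    E = 1.0
    while ch < x - 1e-9:
        ch += 1.0
        E = A * E / (ch + A * E)
    return E

E_25  = erlang_b_ext(20.0, 25)                 # integral x: initial value is exact
E_255 = erlang_b_ext(20.0, 25.5)
E_26  = erlang_b_ext(20.0, 26)
```

For large x the error of the crude initial value is washed out by the recursion, and the value for x = 25.5 falls between the neighbouring integral values, as it should for a function decreasing in x.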

4.6.2 Insensitivity

We have the following definition of insensitivity:

Insensitivity: A system is insensitive to the holding time distribution if the state probabilities of the system only depend on the mean value of the holding time.


It can be shown that Erlang's B-formula, which above is derived under the assumption of exponentially distributed holding times, is valid for arbitrary holding time distributions (holding time = service time). The state probabilities for both the Poisson distribution (4.6) and the truncated Poisson distribution (4.9) only depend on the holding time distribution through its mean value, which is included in the offered traffic A. It can be shown that all classical loss systems with full accessibility are insensitive to the holding time distribution.

4.6.3 Derivatives of Erlang-B formula and convexity

The Erlang-B formula is a function of the offered traffic A and the number of channels n, which in general may be a real non-negative number. In some cases, when we want to optimise systems, we need the partial derivatives of the Erlang-B formula.

4.6.4 Derivative of Erlang-B formula with respect to A

Erlang's B-formula is given by (4.10):

    E_n(A) = (A^n/n!) / (1 + A + A²/2! + ... + A^n/n!) = (A^n/n!) / Q_n ,

where n, A > 0 are non-negative real numbers, and Q_n denotes the denominator (normalizing constant). We find the derivative with respect to A:

    ∂E_n(A)/∂A = (A^{n−1}/(n−1)!) / Q_n − (A^n/n!) · (∂Q_n/∂A) / Q_n² ,        (4.35)

where

    ∂Q_n/∂A = 1 + A/1! + ... + A^{n−1}/(n−1)! = Q_{n−1} .

Thus Q_{n−1} is the normalizing constant of a system with n − 1 channels. From the recursion formula for Erlang-B (4.29) we have:

    E_{x−1} = x / (A · {E_x(A)^{−1} − 1}) .

From (4.35) we then get:

    ∂E_n(A)/∂A = (n/A)·E_n(A) − E_n(A)·{1 − E_n(A)}

               = {n/A − 1 + E_n(A)} · E_n(A) .        (4.36)

In a similar way we may obtain higher order derivatives.
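A quick numerical sanity check of (4.36) against a central finite difference (illustrative sketch, not from the book):

```python
def erlang_b(A, n):
    # Recursion (4.29).
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def dE_dA(A, n):
    # Closed form (4.36): dE/dA = (n/A - 1 + E) * E.
    E = erlang_b(A, n)
    return (n / A - 1.0 + E) * E

A, n, h = 10.0, 12, 1e-5
numeric = (erlang_b(A + h, n) - erlang_b(A - h, n)) / (2.0 * h)
```

The closed form and the finite difference agree to many decimals, which is a useful guard when (4.36) is used inside an optimisation loop.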

4.6.5 Derivative of Erlang-B formula with respect to n

It can be shown that:

    ∂E_n(A)/∂n = −E_n(A)² · A · ∫_0^∞ exp(−A·x)·(1 + x)^n·ln(1 + x) dx .        (4.37)

Esteves & Craveirinha & Cardoso (1995 [30]) present a numerical algorithm for the evaluation of (4.37). In a way similar to (4.29) for the Erlang-B formula, there is a recursive formula to calculate the derivative of order k of the Erlang-B formula for x channels from the value at x − 1 channels. Let the derivative of order k (with respect to the number of channels) of the inverse probability I_x(A) be denoted by I_k(A, x); we then have:

    I_k(A, x) = (x/A)·I_k(A, x−1) + (k/A)·I_{k−1}(A, x−1) ,   k = 1, 2, 3, ... ,        (4.38)

where I_0(A, x) = I_x(A) is given by (4.30).

It can be shown that the Erlang-B formula is convex for n > 1, as this is equivalent to the following requirement:

    E_{n−1}(A) − E_n(A) > E_n(A) − E_{n+1}(A) .        (4.39)

If we multiply both sides by A, we observe that this corresponds to y_n > y_{n+1} (4.14) & (4.16), which intuitively is obvious. The first explicit proof of this was given by Messerli (1972 [88]) for integral values of n. Jagers & van Doorn (1986 [55]) show that the Erlang B-formula is convex for all real positive values of the number of trunks. This property is e.g. exploited in Moe's principle (Sec. 4.8.2).

Example 4.6.2: Call admission control with moving window
Erlang's B-formula is valid for arbitrary service times. We may therefore assume that the holding time is equal to a constant h and consider a system with n channels. At an arbitrary instant t all calls accepted during the interval (t − h, t) are still being served. We can at most have n calls being served simultaneously, and therefore we may at most accept n calls during (t − h, t). This is valid for any instant. Thus the system accepts at most n calls in any moving window of length h. This mechanism can be applied for control of cell arrival processes in ATM systems, i.e. for CAC (connection acceptance control). The mechanism works for any arrival process. For a Poisson arrival process we can calculate the cell loss probability by Erlang's B-formula. □
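The convexity requirement (4.39) is easy to verify numerically (illustrative sketch; function names are my own):

```python
def erlang_b(A, n):
    # Recursion (4.29).
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def second_difference_positive(A, n_max):
    # Check (4.39): E_{n-1} - E_n > E_n - E_{n+1} for n = 1, ..., n_max.
    E = [erlang_b(A, n) for n in range(n_max + 2)]
    return all(E[n - 1] - E[n] > E[n] - E[n + 1] for n in range(1, n_max + 1))
```

Running this for a light and a heavy load both confirms the strictly decreasing second differences exploited by Moe's principle.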

4.6.6 Inverse Erlang-B formulæ

The inverse formulæ, i.e. n as a function of (A, E) and A as a function of (n, E), may be obtained by means of Newton-Raphson iteration (Szybicki, 1967 [112]). From a given initial guess x_0 we calculate a sequence which converges to a fixed point satisfying f(x) = 0:

    x_{k+1} = x_k − f(x_k) / f′(x_k) .        (4.40)

The following functions should be used:

    A(E, n):  f(A) = A·{E_n(A) − E} ,   A_0 = n / (1 − E) ,

    x(E, A):  f(x) = E_x(A) − E .

The initial value of the number of channels x is chosen so that n_0 − 1 < x ≤ n_0, where E_{n_0}(A) ≤ E < E_{n_0−1}(A).

Figs. 4.3 and 4.4 show E_n(A) for various values of the parameters. The derivatives of Erlang's B-formula are given in Sec. 4.2.2. The numerical problems are also dealt with by Farmer & Kaufman (1978 [31]) and Jagerman (1984 [54]).
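A sketch of the Newton-Raphson computation of A(E, n), using the derivative (4.36) (illustrative code, not from the book; function names are my own):

```python
def erlang_b(A, n):
    # Recursion (4.29).
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def offered_traffic(E_target, n, tol=1e-12):
    # Newton-Raphson (4.40) on f(A) = E_n(A) - E_target with the
    # derivative (4.36); initial guess A_0 = n / (1 - E_target).
    A = n / (1.0 - E_target)
    for _ in range(100):
        E = erlang_b(A, n)
        step = (E - E_target) / ((n / A - 1.0 + E) * E)
        A -= step
        if abs(step) < tol * A:
            break
    return A

A20 = offered_traffic(0.01, 20)    # Example 4.6.3: A = 12.0306 erlang
```

For n = 20 channels and E = 1% the iteration converges in a handful of steps to the value quoted in Example 4.6.3.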

Example 4.6.3: Traffic carried by the last channel
In electro-mechanical telephone systems with rotating selectors, sequential hunting with homing was often applied, and the quality of service could be monitored by measuring the carried traffic on the last channel (switch) (Brockmeyer, 1957 [10]). As mentioned above, the improvement function F_n(A) is equal to the additional traffic carried y_{n+1} when adding an extra channel (n + 1) for fixed offered traffic A. We also define the marginal channel capacity a* as the additional traffic carried (in the total system) by adding one channel and keeping the blocking probability E fixed. For n = 20 channels and E = 1% we find A = 12.0306 erlang. The above parameters then become: y_n = 0.0817 [erlang], y_{n+1} = 0.0685 [erlang], a* = 0.8072 [erlang] and y = 0.5955 erlang. □


4.6.7 Approximations for Erlang-B formula

In the literature various approximations for E_x(A), 0 ≤ x < 1, are published. Yngve Rapp (1964 [100]) applies a parabola:

    E_x(A) ≈ C_0 − C_1·x + C_2·x² ,   where        (4.41)

    C_0 = 1 ,   C_1 = (A + 2) / {(1 + A)² + A} ,   C_2 = 1 / [(1 + A)·{(1 + A)² + A}] .

Another approximation is published by Szybicki (1967 [112]). Approximations which are not based on the recursion formula, but calculate E_x(A) directly, are developed by Størmer (1963 [110]) and Mejlbro (1994 [87]). The most accurate values are obtained by using a continued-fraction expansion of the incomplete gamma function (Lévy-Soussan, 1968 [81]) or by calculating the incomplete gamma function by numerical integration. The extended B-formula can also be defined for negative values of the number of trunks.

4.7 Fry-Molina's Blocked Calls Held model

In Fry-Molina's BCH (Blocked Calls Held) model (Fry, 1928 [35]), (Molina, 1922 [89], 1927 [90]) a call attempt, which finds all channels busy, will continue to demand service during a time interval which is equal to the service time it would have obtained if it had been accepted. If a channel becomes idle during this time interval, the call attempt will occupy the channel and keep it busy during the remaining time interval. This model was applied in North America until the sixties, because it was observed to agree better with real traffic observations than Erlang's Blocked Calls Cleared model. The explanation is perhaps that the USA for many years was dominated by step-by-step systems, where a blocked call attempt often will be repeated (Lost Calls Held). When applying alternative routing, a call attempt which is blocked on the direct route will in general be carried on an alternative route, and therefore there will be no repeated call attempt on the direct route (Lost Calls Cleared). The model was already developed by Engset in 1915 in a report unknown for many years (Engset, 1915 [27]). By the introduction of intelligent digital systems with re-arrangement

the model has again become of current interest to the modelling of e.g. mobile communication systems and service-integrated broadband systems.

Figure 4.6: The carried traffic as a function of the offered traffic for Erlang's LCC model (curve B), Fry-Molina's BCH model (curve M) and Erlang's waiting time system (curve C, Chap. 9). With Fry-Molina's model, which corresponds to rearrangement (call packing), we can increase the utilisation as compared with Erlang's B-formula.

Fry-Molina's BCH model is based upon the non-truncated state-dependent Poisson arrival processes, e.g. BPP traffic (Binomial distribution (5.4), Poisson distribution (4.6), and Pascal distribution (5.65)). If we denote the relative state probabilities by q(i) (i = 0, 1, 2, ...), then we find the absolute state probabilities by a normalization:

    p(i) = q(i) / Q(∞) ,    Q(∞) = Σ_{i=0}^{∞} q(i) .        (4.42)

For Fry-Molina's BCH model we get the following state probabilities p_m(i):

    p_m(i) = p(i) ,               0 ≤ i < n ,
                                                   (4.43)
    p_m(n) = Σ_{j=n}^{∞} p(j) ,   i = n .

The time congestion E is by definition the proportion of time all channels are busy:

    E = p_m(n) = 1 − Q(n − 1) .        (4.44)


The traffic congestion C is, from a numerical point of view, best obtained in the following way. As the offered traffic A by definition is equal to the traffic carried in an infinite trunk group, we have:

    A = Σ_{i=0}^{∞} i·p(i) .        (4.45)

The lost traffic is:

    A_ℓ = Σ_{i=n+1}^{∞} (i − n)·p(i) .        (4.46)

The traffic congestion therefore becomes:

    C = A_ℓ / A .
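For Poisson arrivals the BCH congestion values can be sketched as follows (illustrative Python, not from the book; the truncation point `i_max` is my own choice and must be large enough that the omitted tail is negligible):

```python
import math

def bch_congestion(A, n, i_max=200):
    # Poisson state probabilities p(i) = e^{-A} A^i / i!, computed recursively
    # to avoid large factorials.
    p = [math.exp(-A)]
    for i in range(1, i_max):
        p.append(p[-1] * A / i)
    E = sum(p[n:])                                   # time congestion (4.44)
    lost = sum((i - n) * p[i] for i in range(n + 1, i_max))
    return E, lost / A                               # (4.46): C = A_lost / A

E, C = bch_congestion(2.0, 6)
```

For A = 2 erlang and n = 6 channels the time congestion is about 1.66%, while the traffic congestion is smaller, since a held call loses only part of its service time.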

4.8 Principles of dimensioning

When dimensioning service systems we have to balance grade-of-service requirements against economic restrictions. In this chapter we shall see how this can be done on a rational basis. In telecommunication systems there are several measures to characterize the service provided. The most extensive measure is Quality-of-Service (QoS), comprising all aspects of a connection such as voice quality, delay, loss, reliability, etc. We consider a subset of these, Grade-of-Service (GoS) or network performance, which only includes aspects related to the capacity of the network.

By the publication of Erlang's formulæ there was already before 1920 a functional relationship between number of channels, offered traffic, and grade-of-service (blocking probability), and thus a measure for the quality of the traffic. At that time there were direct connections between all exchanges in the Copenhagen area, which resulted in many small and big channel groups. If Erlang's B-formula were applied with a fixed blocking probability for dimensioning these groups, then the utilization in small groups would become low.

Kai Moe (1893–1949), chief engineer in the Copenhagen Telephone Company, made some quantitative economic evaluations and published several papers, where he introduced marginal considerations, as they are known today in mathematical economics. Similar considerations were later made by P.A. Samuelson in his famous book, first published in 1947. On the basis of Moe's works the fundamental principles of dimensioning for telecommunication systems are formulated in Moe's Principle (Jensen, 1950 [58]).

4.8.1 Dimensioning with fixed blocking probability

For proper operation, a loss system should be dimensioned for a low blocking probability. In practice the number of channels n should be chosen so that E1,n (A) is about 1% to avoid

overload due to many non-completed and repeated call attempts, which both load the system and are a nuisance to subscribers (Cf. Bbusy [60]).

    n              1       2       5      10      20      50     100
    A (E = 1%)   0.010   0.153   1.361   4.461  12.031  37.901  84.064
    y            0.010   0.076   0.269   0.442   0.596   0.750   0.832
    F_{1,n}(A)   0.000   0.001   0.011   0.027   0.052   0.099   0.147
    A_1 = 1.2 A  0.012   0.183   1.633   5.353  14.437  45.482 100.877
    E [%]        1.198   1.396   1.903   2.575   3.640   5.848   8.077
    y            0.012   0.090   0.320   0.522   0.696   0.856   0.927
    F_{1,n}(A_1) 0.000   0.002   0.023   0.072   0.173   0.405   0.617

Table 4.1: Upper part: for a fixed value of the blocking probability E = 1%, n trunks can be offered the traffic A. The average utilization of the trunks is y, and the improvement function is F_{1,n}(A) (4.16). Lower part: the values of E, y and F_{1,n}(A_1) obtained for an overload of 20%.

Tab. 4.1 shows the offered traffic for a fixed blocking probability E = 1% for some values of n. The table also gives the average utilization of the channels, which is highest for large groups. If we increase the offered traffic by 20% to A_1 = 1.2 A, we notice that the blocking probability increases for all n, but most for large values of n. From Tab. 4.1 two features are observed:

a. The utilisation a per channel is, for a given blocking probability, highest in large groups (Fig. 4.4). At a blocking probability E = 1% a single channel can at most be used 36 seconds per hour on the average!

b. Large channel groups are more sensitive to a given percentage overload than small channel groups. This is explained by the low utilization of small groups, which therefore have a higher spare capacity (elasticity).

Thus two conflicting factors are of importance when dimensioning a channel group: we may choose between a high sensitivity to overload or a low utilization of the channels.
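A column of Tab. 4.1 can be regenerated numerically (an illustrative sketch; bisection is used here instead of the Newton iteration of Sec. 4.6.6, and the function names are my own):

```python
def erlang_b(A, n):
    # Recursion (4.29).
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def traffic_at_blocking(n, E_target):
    # Bisection on the monotone map A -> E_n(A).
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if erlang_b(mid, n) < E_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 20
A = traffic_at_blocking(n, 0.01)                  # offered traffic at E = 1%
y = A * (1.0 - erlang_b(A, n)) / n                # utilization per channel
F = A * (erlang_b(A, n) - erlang_b(A, n + 1))     # improvement F_{1,n}(A)
```

This reproduces the n = 20 column: A ≈ 12.031 erlang, y ≈ 0.596 and F ≈ 0.052.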

4.8.2 Improvement principle (Moe's principle)

As mentioned in Sec. 4.8.1, a fixed blocking probability results in a low utilization (bad economy) of small channel groups. If we replace the requirement of a fixed blocking probability


with an economic requirement, then the improvement function F_{1,n}(A) (4.16) should take a fixed value, so that the extension of a group with one additional channel increases the carried traffic by the same amount for all groups. In Tab. 4.2 we show the congestion for some values of n and an improvement value F_B = 0.05. We notice from the table that the utilization of small groups becomes better, corresponding to a high increase of the blocking probability. On the other hand, the congestion in large groups decreases to a smaller value. See also Fig. 4.8. If therefore we have a telephone system with trunk group sizes and traffic values as given in the table, then we cannot increase the carried traffic by rearranging the channels among the groups.

    n                1       2       5      10      20      50     100
    A (F_B = 0.05) 0.271   0.607   2.009   4.991   11.98   35.80   78.73
    y              0.213   0.272   0.387   0.490   0.593   0.713   0.785
    E_{1,n}(A) [%] 21.29   10.28    3.72    1.82    0.97    0.47    0.29
    A_1 = 1.2 A    0.325   0.728   2.411   5.989   14.38   42.96   94.476
    E [%]          24.51   13.30    6.32    4.28    3.55    3.73    4.62
    y              0.245   0.316   0.452   0.573   0.693   0.827   0.901
    F_{1,n}(A_1)   0.067   0.074   0.093   0.120   0.169   0.294   0.452

Table 4.2: For a fixed value of the improvement function we have calculated the same values as in Table 4.1.

This service criterion will therefore, in comparison with fixed blocking in Sec. 4.8.1, allocate more channels to large groups and fewer channels to small groups, which is the trend we were looking for. The improvement function is equal to the difference quotient of the carried traffic with respect to the number of channels n. When dimensioning according to the improvement principle, we thus choose an operating point on the curve of the carried traffic as a function of the number of channels where the slope is the same for all groups (∆Y/∆n = constant). A marginal increase of the number of channels increases the carried traffic by the same amount for all groups.

It is easy to set up a simple economic model for determination of F_{1,n}(A). Let us consider a certain time interval (e.g. a time unit). Denote the income per carried erlang per time unit by g. The cost of a cable with n channels is assumed to be a linear function:

    c_n = c_0 + c·n .        (4.47)

The total costs for a given number of channels are then (a) cost of cable and (b) cost due to lost traffic (missing income):

    C_n = g·A·E_{1,n}(A) + c_0 + c·n ,        (4.48)


Here A is the offered traffic, i.e. the potential traffic demand on the group considered. The costs due to lost traffic will decrease with increasing n, whereas the expenses due to cable increase with n. The total costs may have a minimum for a certain value of n. In practice n is an integer, and we look for a value of n for which we have (cf. Fig. 4.7):

    C_{n−1} > C_n   and   C_n ≤ C_{n+1} .        (4.49)

Figure 4.7: The total costs are composed of costs for cable and lost income due to blocked traffic (4.48). The minimum of the total costs is obtained when (4.49) is fulfilled, i.e. when the two cost functions have the same slope with opposite signs (difference quotient). (F_B = 0.35, A = 25 erlang.) The minimum is obtained for n = 30 trunks.

As E_{1,n}(A) = E_n(A) we get:

    A·{E_{n−1}(A) − E_n(A)} > c/g ≥ A·{E_n(A) − E_{n+1}(A)} ,        (4.50)

or:

    F_{1,n−1}(A) > F_B ≥ F_{1,n}(A) ,        (4.51)

where:

    F_B = c/g = (cost per extra channel) / (income per extra channel) .
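The dimensioning rule (4.51) can be sketched as follows, reproducing the case of Fig. 4.7 (F_B = 0.35, A = 25 erlang; illustrative code, function names are my own):

```python
def erlang_b(A, n):
    # Recursion (4.29).
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def improvement(A, n):
    # F_{1,n}(A): extra traffic carried when adding channel n + 1.
    return A * (erlang_b(A, n) - erlang_b(A, n + 1))

def moe_dimension(A, FB):
    # Smallest n with F_{1,n}(A) <= FB, so that (4.51) holds.
    n = 1
    while n < 10_000 and improvement(A, n) > FB:
        n += 1
    return n

n_opt = moe_dimension(25.0, 0.35)
```

Since F_{1,n}(A) is decreasing in n (convexity, Sec. 4.6.3), the first crossing of F_B gives the cost minimum; here it lands on the n = 30 trunks quoted in the figure caption.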


F_B is called the improvement value. We notice that c_0 does not appear in the condition for minimum. It determines whether it is profitable to carry traffic at all. We must require that for some positive value of n we have:

    g·A·{1 − E_n(A)} > c_0 + c·n .        (4.52)

Figure 4.8: When dimensioning with a fixed value of the improvement value F_B, the blocking probabilities for small values of the offered traffic become large (cf. Tab. 4.2).

Fig. 4.8 shows blocking probabilities for some values of F_B. We notice that the economic demand for profit results in a certain improvement value. In practice we choose F_B partly independent of the cost function. In Denmark the following values have been used:

    F_B = 0.35 for primary trunk groups,
    F_B = 0.20 for service protecting primary groups,        (4.53)
    F_B = 0.05 for groups with no alternative route.


Chapter 5 Loss systems with full accessibility


In this chapter we generalize Erlangs classical loss system to state-dependent Poisson-arrival processes, which include the so-called BPP-trac models: Binomial case: Engsets model, Poisson case: Erlangs model, and Pascal (Negative Binomial) model: PalmWallstrms model. o Erlangs model describers random trac. Engsets model describes trac which is more smooth than random trac. Negative Binomial model describes trac which is more bursty than random trac and includes models with Pareto-distributed inter-arrival times (heavytailed trac) and trac with batch arrivals. These models are all insensitive to the service time distribution. Engset and Pascal models are even insensitive to the distribution of the idle time of sources. It is important always to use trac congestion as the important performance metric. After the introduction in Sec. 5.1 we go through the basic classical theory. In Sec. 5.2 we consider the Binomial case, where the number of sources S (subscribers, customers, jobs) is limited and the number of channels n always is sucient (S n). This system is dealt with by balance equations in the same way as the Poisson case (Sec. 4.2). We consider the strategy Blocked-Calls-Cleared (BCC). In Sec. 5.3 we restrict the number of channels so that it becomes less than the number of sources (n < S). We may then experience blocking and we obtain the truncated Binomial distribution, which also is called the Engset distribution. The probability of time congestion E is given by Engsets formula. With a limited number of sources, time congestion, call congestion, and trac congestion dier, and the PASTA property is replaced by the general arrival theorem, which tells that the state probabilities of the system observed by a customer (call average) is equal to the state probability of the system without this customer (time average). Engsets formula is computed numerically by a formula recursive in the number of channels n derived in the same way as for Erlangs


B-formula. Formulae recursive in the number of sources S, and simultaneously in both n and S, are also derived. In Sec. 5.6 we consider the Negative Binomial case, also called the Pascal case, where the arrival intensity increases linearly with the state of the system. If the number of channels is limited, then we get the truncated Negative Binomial distribution (Sec. 5.7). Finally, in Sec. 5.8 we consider a Batch Poisson arrival process and show that it is similar to the Pascal case.

5.1 Introduction

We consider a system with the same structure (full accessibility group) and strategy (Lost-Calls-Cleared) as in Chap. 4, but with more general traffic processes. In the following we assume the service times are exponentially distributed with intensity μ (mean value 1/μ); the traffic process then becomes a birth & death process, a special Markov process, which is easy to deal with mathematically. Usually we define the state of the system as the number of busy channels. All processes considered in Chapters 4 and 5 are insensitive to the service time distribution, i.e. only the mean service time is of importance to the state probabilities; the service time distribution itself has no influence.

Definition of offered traffic: In Sec. 1.7 we define the offered traffic A as the traffic carried when the number of servers is unlimited, and this definition is used for both the Engset case and the Pascal case. The offered traffic is thus independent of the number of servers. Only for stationary renewal processes, such as the Poisson arrival process, is this definition equivalent to the average number of call attempts per mean service time. In the Engset and Pascal cases the arrival processes are not renewal processes, as the mean inter-arrival time depends on the actual state. Carried traffic is by definition the mean value of the state probabilities (average number of busy channels). Peakedness is defined as the ratio between the variance and the mean value of the state probabilities. For offered traffic the peakedness is considered for an infinite number of channels.

We consider the following arrival processes, where the first case has already been dealt with in Chap. 4:

1. Erlang case (P – Poisson case): The arrival process is a Poisson process with intensity λ. This type of traffic is called random traffic or Pure Chance Traffic type One, PCT-I. We consider two cases:

   a. n = ∞: Poisson distribution (Sec. 4.2). The peakedness is in this case equal to one: Z = 1.

   b. n < ∞: truncated Poisson distribution (Sec. 4.3).


2. Engset case (B – Binomial case): There is a limited number of sources S. Each source has a constant call (arrival) intensity β when it is idle. When it is busy the call intensity is zero. The arrival process is thus state-dependent. If i sources are busy, then the arrival intensity is equal to (S−i) β. This type of traffic is called Pure Chance Traffic type Two, PCT-II. We consider the following two cases:

   a. n ≥ S: Binomial distribution (Sec. 5.2). In this case the peakedness is less than one: Z < 1.

   b. n < S: truncated Binomial distribution (Sec. 5.3).

3. Palm–Wallström case (P – Pascal case): There is a limited number of sources S. If at a given instant we have i busy sources, then the arrival intensity equals (S+i) γ. Again we have two cases:

   a. n = ∞: Pascal distribution = Negative Binomial distribution (Sec. 5.6). In this case the peakedness is greater than one: Z > 1.

   b. n < ∞: truncated Pascal distribution (truncated Negative Binomial distribution) (Sec. 5.7).

As the Poisson process may be obtained from an infinite number of sources with a limited total arrival intensity λ, the Erlang case may be considered as a special case of the two other cases:

    lim_{S→∞, β→0} (S−i) β = lim_{S→∞, β→0} S β = λ .

For any state 0 ≤ i ≤ n (n finite) we then have a constant arrival intensity λ. This is also seen from Palm's theorem (Sec. 3.6.1). The three traffic types are referred to as BPP traffic according to the abbreviations given above (Binomial & Poisson & Pascal). As these models include all values of peakedness Z > 0, they can be used for modeling traffic with two parameters: mean value A and peakedness Z. For arbitrary values of Z the number of sources S in general becomes non-integral.

Performance measures: The performance measures for loss systems are time congestion E, call congestion B, traffic congestion C, and the utilization of the channels. Among these, traffic congestion C is the most important characteristic. These measures are derived for each of the above-mentioned models.

5.2 Binomial Distribution

We consider a system with a limited number of sources S. Sources is a generic term for subscribers, users, terminals, etc. The individual source alternates between the states idle


and busy. A source is idle during a time interval which is exponentially distributed with intensity β, and the source is busy during an exponentially distributed time interval (service time, holding time) with intensity μ (Fig. 5.2). Sources of this kind are called sporadic sources or on/off sources. This type of traffic is called Pure Chance Traffic type Two (PCT-II), or pseudo-random traffic.

Figure 5.1: A fully accessible loss system with S sources, which generate traffic to n channels. The system is shown by a so-called chicko-gram: the beak of a source symbolizes a selector which points to the channels (servers) among which the source may choose.

In this section the number of channels (trunks, servers) n is assumed to be greater than or equal to the number of sources (n ≥ S), so that no calls are lost. Both n and S are assumed to be integers, but it is possible to deal with non-integral values (Iversen & Sanders, 2001 [48]).


Figure 5.2: Every individual source is either idle or busy, and behaves independently of all other sources.


Figure 5.3: State transition diagram for the Binomial case (Sec. 5.2). The number of sources S is less than or equal to the number of channels n (S ≤ n).

5.2.1 Equilibrium equations

We are interested in the steady state probabilities p(i), which are the proportion of time the process spends in state [ i ]. Our calculations are based on the state transition diagram shown

in Fig. 5.3. We consider cuts between neighboring states and find:

    S β p(0) = μ p(1) ,
    (S−1) β p(1) = 2 μ p(2) ,
      ...
    (S−i+1) β p(i−1) = i μ p(i) ,                                          (5.1)
    (S−i) β p(i) = (i+1) μ p(i+1) ,
      ...
    β p(S−1) = S μ p(S) .

All state probabilities are expressed by p(0):

    p(1) = (S/1) (β/μ) p(0)           = C(S,1) p(0) (β/μ)^1 ,
    p(2) = ((S−1)/2) (β/μ) p(1)       = C(S,2) p(0) (β/μ)^2 ,
      ...
    p(i) = ((S−i+1)/i) (β/μ) p(i−1)   = C(S,i) p(0) (β/μ)^i ,
    p(i+1) = ((S−i)/(i+1)) (β/μ) p(i) = C(S,i+1) p(0) (β/μ)^{i+1} ,
      ...
    p(S) = (1/S) (β/μ) p(S−1)         = C(S,S) p(0) (β/μ)^S .

The total sum of all probabilities must be equal to one:

    1 = p(0) { C(S,0) + C(S,1) (β/μ) + C(S,2) (β/μ)^2 + ··· + C(S,S) (β/μ)^S }

      = p(0) (1 + β/μ)^S ,

where we have used Newton's Binomial expansion. By letting γ = β/μ we get:

    p(0) = 1 / (1+γ)^S .                                                   (5.2)


The parameter γ is the offered traffic per idle source (the number of call attempts per time unit for an idle source; the offered traffic from a busy source is zero), and we find:

    p(i) = C(S,i) γ^i / (1+γ)^S
         = C(S,i) (γ/(1+γ))^i (1/(1+γ))^{S−i} ,   i = 0, 1, ..., S ,   0 ≤ S ≤ n ,

which is the Binomial distribution (Tab. 3.1). Finally, we get, by introducing the offered traffic per source a, defined as the traffic carried per source when there is no blocking:

    a = γ/(1+γ) = β/(β+μ) = (1/μ) / (1/β + 1/μ) ,                          (5.3)

    p(i) = C(S,i) a^i (1−a)^{S−i} ,   i = 0, 1, ..., S ,   0 ≤ S ≤ n .     (5.4)

In this case a call attempt from an idle source is never blocked, and the carried traffic per source is equal to the offered traffic per source a, which is the probability that a source is busy at a random instant (the proportion of time the source is busy). This is also observed from Fig. 5.2, as all arrival and departure points on the time axis are regeneration points (equilibrium points). A cycle from the start of a busy state (arrival) until the start of the next busy state is representative of the whole time axis, and time averages are obtained by averaging over one cycle. The Binomial distribution obtained in (5.4) is in teletraffic theory sometimes called the Bernoulli distribution, but this should be avoided, as in statistics that name is used for a two-point distribution.
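The state probabilities (5.4) are easy to tabulate numerically. The following is a minimal Python sketch (all function and variable names are mine, not from the text); it also confirms that the carried traffic equals S·a, cf. (5.15) below:

```python
from math import comb

def binomial_states(S, gamma):
    """State probabilities p(i) of (5.4) for a loss-free system (n >= S).

    S     : number of sources
    gamma : offered traffic per idle source, gamma = beta/mu
    """
    a = gamma / (1.0 + gamma)      # offered traffic per source, (5.3)
    return [comb(S, i) * a**i * (1 - a)**(S - i) for i in range(S + 1)]

p = binomial_states(4, 1/3)        # S = 4 sources, gamma = 1/3, so a = 1/4
carried = sum(i * pi for i, pi in enumerate(p))   # mean value = S*a = A
```

With these parameters the carried traffic is 4 · (1/4) = 1 erlang.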
Example 5.2.1: Binomial distribution and convolution
Formula (5.4) can be derived by elementary considerations. All subscribers can be split into two classes: idle subscribers and busy subscribers. The probability that an arbitrary subscriber is busy is a, which is independent of the state of all other subscribers, as the system has no blocking and call attempts always are accepted. Then the state of a single source is given by the Bernoulli distribution:

    p₁(0) = 1 − a ,    p₁(1) = a ,                                         (5.5)

which has the finite mean value a. If we in total have S subscribers (sources), then the probability p_S(i) that i sources are busy at an arbitrary instant is given by the Binomial distribution ((5.4) & Tab. 3.1):

    p_S(i) = C(S,i) a^i (1−a)^{S−i} ,    Σ_{i=0}^{S} p_S(i) = 1 ,          (5.6)

which has the mean value S a. If we add one more source to the system, then the distribution of the total number of busy sources is obtained by convolving the Binomial distribution (5.6) and the Bernoulli distribution (5.5):

    p_{S+1}(i) = p_S(i) p₁(0) + p_S(i−1) p₁(1)

               = C(S,i) a^i (1−a)^{S−i} (1−a) + C(S,i−1) a^{i−1} (1−a)^{S−i+1} a

               = { C(S,i) + C(S,i−1) } a^i (1−a)^{S−i+1}

               = C(S+1,i) a^i (1−a)^{S+1−i} ,    q.e.d.    □
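The convolution argument of the example can be replayed numerically; the following sketch (helper name is mine) builds the Binomial distribution (5.6) by S successive convolutions with the Bernoulli distribution (5.5):

```python
def convolve_bernoulli(p, a):
    """Convolve a state distribution p with one Bernoulli source {1-a, a}, cf. (5.5)."""
    q = [0.0] * (len(p) + 1)
    for i, pi in enumerate(p):
        q[i] += pi * (1 - a)       # the added source is idle
        q[i + 1] += pi * a         # the added source is busy
    return q

S, a = 5, 0.3
p = [1.0]                          # zero sources: state 0 with probability one
for _ in range(S):
    p = convolve_bernoulli(p, a)   # after S steps, p equals the Binomial (5.6)
```

The same convolution (with truncation) reappears in the recursion on S in Sec. 5.5.2.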

5.2.2 Traffic characteristics of Binomial traffic

We summarize the definitions of the parameters given above:

    β = call intensity per idle source,                                    (5.7)
    1/μ = mean service (holding) time,                                     (5.8)
    γ = β/μ = offered traffic per idle source.                             (5.9)

By definition, the offered traffic of a source is equal to the carried traffic in a system with no congestion, where the source freely alternates between the states idle and busy. Therefore, we have the following definitions:

    a = γ/(1+γ) = offered traffic per source,                              (5.10)
    A = S a = S γ/(1+γ) = total offered traffic,                           (5.11)
    y = carried traffic per source,                                        (5.12)
    Y = S y = total carried traffic,                                       (5.13)
    Y/n = carried traffic per channel (with random hunting).               (5.14)

Oered trac per source is a dicult concept to deal with because the proportion of time a source is idle depends on the congestion. The number of calls oered by a source depends on the number of channels (feed-back): a high congestion results in more idle time for a source and thus in more call attempts.

Time congestion:

    E = 0 ,              S < n ,
    E = p(n) = a^n ,     S = n .

Carried traffic:

    Y = Σ_{i=0}^{S} i p(i) = S γ/(1+γ) = S a = A ,                         (5.15)

which is the mean value of the Binomial distribution (5.4). In this case with no blocking we of course have y = a and:

Traffic congestion:

    C = (A − Y)/A = 0 .                                                    (5.16)

Number of call attempts per time unit:

    Λ = Σ_{i=0}^{S} p(i) (S−i) β = β { S − Σ_{i=0}^{S} i p(i) } = β (S − S a) = S β (1−a) ,

where S (1−a) is the average number of idle sources. As all call attempts are accepted we get:

Call congestion:

    B = 0 .                                                                (5.17)

Traffic carried by channel i:

Random hunting:

    y = Y/n = S a / n .                                                    (5.18)

Sequential hunting: complex expression derived by L.A. Joys (1971 [64]).

Improvement function:

    F_n(A) = Y_{n+1} − Y_n = 0 .                                           (5.19)

Peakedness of the Binomial distribution is (Tab. 3.1):

    Z = σ²/m₁ = S a (1−a) / (S a) ,

    Z = 1 − a = 1 − A/S = 1/(1+γ) < 1 .                                    (5.20)

We observe that the peakedness Z is independent of the number of sources and always less than one. Therefore it corresponds to smooth traffic.

Duration of state i: This is exponentially distributed with rate:

    λ(i) = (S−i) β + i μ ,    0 ≤ i ≤ S ≤ n .                              (5.21)

Finite source traffic is characterized by the number of sources S and the offered traffic per idle source γ. Alternatively, we often use the offered traffic A and the peakedness Z. From (5.11) and (5.20) we get the following relations between the two sets of parameters (S, γ) and (A, Z):

    A = S γ/(1+γ) ,                                                        (5.22)
    Z = 1/(1+γ) ,                                                          (5.23)

and by solving these equations with respect to γ and S we get:

    γ = (1−Z)/Z ,                                                          (5.24)
    S = A/(1−Z) .                                                          (5.25)
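The conversions (5.22)–(5.25) between (S, γ) and (A, Z) are convenient as a pair of helper functions; a sketch with hypothetical names:

```python
def AZ_from_sources(S, gamma):
    """(S, gamma) -> (A, Z) via (5.22)-(5.23)."""
    return S * gamma / (1 + gamma), 1 / (1 + gamma)

def sources_from_AZ(A, Z):
    """(A, Z) -> (S, gamma) via (5.24)-(5.25); requires 0 < Z < 1 (smooth traffic)."""
    return A / (1 - Z), (1 - Z) / Z
```

For example, S = 4 sources with γ = 1/3 correspond to A = 1 erlang with peakedness Z = 3/4, and the mapping is invertible.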

5.3 Engset distribution

The only difference in comparison with Sec. 5.2 is that the number of sources S is now greater than or equal to the number of channels (trunks), S ≥ n. Therefore, call attempts may experience congestion.

5.3.1 State probabilities

The cut equations are identical to (5.1), but they only exist for 0 ≤ i ≤ n (Fig. 5.4). The normalization equation becomes:

    1 = p(0) { 1 + C(S,1) γ + ··· + C(S,n) γ^n } .

Figure 5.4: State transition diagram for the Engset case with S > n, where S is the number of sources and n is the number of channels.

From this we obtain p(0), and by letting γ = β/μ the state probabilities become:

    p(i) = C(S,i) γ^i / Σ_{j=0}^{n} C(S,j) γ^j ,    0 ≤ i ≤ n .            (5.26)

In the same way as above we may, by using (5.10), rewrite this expression in a form analogous to (5.4):

    p(i) = C(S,i) a^i (1−a)^{S−i} / Σ_{j=0}^{n} C(S,j) a^j (1−a)^{S−j} ,    0 ≤ i ≤ n ,   (5.27)

from which we directly observe why it is called a truncated Binomial distribution (cf. the truncated Poisson distribution (4.10)). The distribution (5.26) & (5.27) is called the Engset distribution after the Norwegian T. Engset (1865–1943), who first published the model with a finite number of sources (1918 [28]).

5.3.2 Traffic characteristics of Engset traffic

The Engset distribution results in more complicated calculations than the Erlang loss system. The essential issue is to understand how to find the performance measures directly from the state probabilities using the definitions. The Engset system is characterized by the parameters γ = β/μ = offered traffic per idle source, S = number of sources, and n = number of channels.

Time congestion E: this is by definition equal to the proportion of time the system blocks new call attempts, i.e. p(n) (5.26):

    E_{n,S}(γ) = p(n) = C(S,n) γ^n / Σ_{j=0}^{n} C(S,j) γ^j ,    S ≥ n .   (5.28)
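For moderate n and S, Engset's formula (5.28) can be evaluated directly from this definition; a small Python sketch (function name is mine — for large systems the recursions of Sec. 5.5 should be used instead):

```python
from math import comb

def engset_E(n, S, gamma):
    """Time congestion E_{n,S}(gamma): last term of the truncated Binomial sum, (5.28)."""
    terms = [comb(S, j) * gamma**j for j in range(n + 1)]
    return terms[n] / sum(terms)

# For S = n the system degenerates to the Binomial case and E = a**n (cf. Sec. 5.2).
```

For example, engset_E(3, 4, 1/3) gives 4/85 ≈ 0.0471.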


Call congestion B: this is by definition equal to the proportion of call attempts which are lost. Only call attempts arriving at the system in state n are blocked. During one unit of time we get the following ratio between the number of blocked call attempts and the total number of call attempts:

    B_{n,S}(γ) = p(n) (S−n) β / Σ_{j=0}^{n} p(j) (S−j) β

               = C(S,n) (S−n) γ^n / Σ_{j=0}^{n} C(S,j) (S−j) γ^j .

Using

    (S−i) C(S,i) = S C(S−1,i) ,

we get:

    B_{n,S}(γ) = C(S−1,n) γ^n / Σ_{j=0}^{n} C(S−1,j) γ^j ,

    B_{n,S}(γ) = E_{n,S−1}(γ) ,    S ≥ n .                                 (5.29)

This result may be interpreted as follows. The probability that a call attempt from a random idle source (subscriber) is blocked is equal to the probability that the remaining (S−1) sources occupy all n channels. This is called the arrival theorem, and it can be shown to be valid for both loss and delay systems with a limited number of sources. The result is based on the product form among sources and the convolution of sources. As E increases when S increases, we have B_{n,S}(γ) = E_{n,S−1}(γ) < E_{n,S}(γ).

Theorem 5.1 (Arrival theorem): For fully accessible systems with a limited number of sources, a random source will upon arrival observe the state of the system as if the source itself did not belong to the system.

The PASTA property is included in this case, because an infinite number of sources less one is still an infinite number.
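The arrival theorem can be checked numerically: computing the call congestion from its definition (blocked attempts over all attempts) gives exactly the time congestion of a system with one source less. A sketch (function names are mine):

```python
from math import comb

def engset_E(n, S, gamma):
    """Time congestion, direct evaluation of (5.28)."""
    t = [comb(S, j) * gamma**j for j in range(n + 1)]
    return t[n] / sum(t)

def engset_B(n, S, gamma):
    """Call congestion: each state j is weighted by its attempt rate (S - j)*beta."""
    t = [comb(S, j) * gamma**j * (S - j) for j in range(n + 1)]
    return t[n] / sum(t)
```

For n = 3, S = 4, γ = 1/3 both engset_B(3, 4, 1/3) and engset_E(3, 3, 1/3) equal 1/64.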


Carried traffic: By applying the cut equation between state [i−1] and state [i] we get:

    Y = Σ_{i=1}^{n} i p(i)                                                 (5.30)

      = Σ_{i=1}^{n} (S−i+1) γ p(i−1)

      = Σ_{i=0}^{n−1} (S−i) γ p(i)

      = γ { Σ_{i=0}^{n} (S−i) p(i) − (S−n) p(n) }                          (5.31)

      = γ { (S − Y) − (S−n) E } ,                                          (5.32)

as E = E_{n,S}(γ) = p(n). This is solved with respect to Y:

    Y = (γ/(1+γ)) { S − (S−n) E } .                                        (5.33)

Traffic congestion C = C_{n,S}(A). This is the most important congestion measure. The offered traffic is given by (5.22) and we get:

    C = (A − Y)/A

      = [ S γ/(1+γ) − (γ/(1+γ)) { S − (S−n) E } ] / ( S γ/(1+γ) ) ,

    C = ((S−n)/S) E .                                                      (5.34)

We may also find the carried traffic if we know the call congestion B. On the average a source is idle 1/β time units before it generates a call attempt; an attempt is accepted with probability (1−B), and each accepted call has an average duration 1/μ. Thus the carried traffic per source, i.e. the proportion of time the source is busy, becomes:

    y = ((1−B)/μ) / ( 1/β + (1−B)/μ ) .

The total carried traffic becomes:

    Y = S y = S γ (1−B) / ( 1 + γ (1−B) ) .                                (5.35)


Equalizing the two expressions for the carried traffic (5.33) & (5.35) we get the following relation between E and B:

    E = (S/(S−n)) · B / ( 1 + γ (1−B) ) .                                  (5.36)

Number of call attempts per time unit:

    Λ = Σ_{i=0}^{n} p(i) (S−i) β                                           (5.37)

      = β (S − Y) ,

where Y is the carried traffic (5.30). Thus (S − Y) is the average number of idle sources, which is evident. Historically, the total offered traffic was earlier defined as Λ/μ. This is, however, misleading because we cannot assign every repeated call attempt a mean holding time 1/μ. It has also caused a lot of confusion, because with this definition the offered traffic depends upon the system (the number of channels): with few channels available many call attempts are blocked, so the sources are idle a higher proportion of the time and thus generate more call attempts per time unit.

Lost traffic:

    Aℓ = A · C = (S γ/(1+γ)) · ((S−n)/S) E = (γ/(1+γ)) (S−n) E .           (5.38)

Duration of state i: This is exponentially distributed with intensity:

    λ(i) = (S−i) β + i μ ,    0 ≤ i < n ,
    λ(n) = n μ ,              i = n .                                      (5.39)

Improvement function:

    F_{n,S}(A) = Y_{n+1} − Y_n .                                           (5.40)

Example 5.3.1: Call average and time average
Above we have, under the assumption of statistical equilibrium, defined the state probabilities p(i) as the proportion of time the system spends in state i, i.e. as a time average. We may also study how the state of the system looks when it is observed by an arriving or departing source (user) (call average). If we consider one time unit, then on the average (S−i) β p(i) sources will observe the system in state [i] just before the arrival epoch, and if they are accepted they will bring the system into state [i+1]. Sources observing the system in state n are blocked and remain idle. Therefore, arriving sources observe the system in state [i] with probability:

    π_{n,S,γ}(i) = (S−i) p(i) / Σ_{j=0}^{n} (S−j) p(j) ,    i = 0, 1, ..., n .   (5.41)

In a way analogous to the derivation of (5.29) we may show that, in agreement with the arrival theorem (Theorem 5.1), we have:

    π_{n,S,γ}(i) = p_{n,S−1,γ}(i) ,    i = 0, 1, ..., n .                  (5.42)

When a source leaves the system and looks back, it observes the system in state [i−1] with probability:

    ψ_{n,S,γ}(i−1) = i p(i) / Σ_{j=1}^{n} j p(j) ,    i = 1, 2, ..., n .   (5.43)

By applying cut equations we immediately get that this is identical with (5.41), if we include the blocked customers. On the average, sources thus depart from the system in the same state as they arrive to the system. The process is reversible and insensitive to the service time distribution. If we make a film of the system, then we are unable to determine whether time runs forward or backward.   □

5.4 Relations between E, B, and C

From (5.36) we get the following relations between E = E_{n,S}(γ) and B = B_{n,S}(γ) = E_{n,S−1}(γ):

    E = (S/(S−n)) · B / ( 1 + γ (1−B) ) ,

    B = (S−n) E (1+γ) / ( S + γ (S−n) E ) ,                                (5.44)

or

    1/E = ((S−n)/S) { (1+γ) (1/B) − γ } ,

    1/B = (1/(1+γ)) { γ + S / ((S−n) E) } .                                (5.45)

The expressions on the right-hand side are linear in the reciprocal blocking probabilities. In (5.34) we obtained the following simple relation between C and E:

    C = ((S−n)/S) E ,                                                      (5.46)

    E = (S/(S−n)) C .                                                      (5.47)

If we in (5.46) express E by B (5.44), then we get C expressed by B:

    C = B / ( 1 + γ (1−B) ) ,                                              (5.48)

    B = (1+γ) C / ( 1 + γ C ) .                                            (5.49)

This relation between B and C is general for any system and may be derived from the carried traffic as follows. The carried traffic Y corresponds to Y μ accepted call attempts per time unit. The average number of idle sources is (S − Y), so the average number of call attempts per time unit is (S − Y) β (5.37). The call congestion is the ratio between the number of rejected call attempts and the total number of call attempts, both per time unit:

    B = ( (S−Y) β − Y μ ) / ( (S−Y) β ) = ( (S−Y) γ − Y ) / ( (S−Y) γ ) .

By definition, Y = A (1−C), and from (5.22) we have S = A (1+γ)/γ. Inserting this we get:

    B = ( A (1+γ) − γ A (1−C) − A (1−C) ) / ( A (1+γ) − γ A (1−C) )

      = (1+γ) C / ( 1 + γ C ) ,    q.e.d.

From the last equation we see that for small values of the traffic congestion C (1 + γC ≈ 1) the traffic congestion is Z (the peakedness) times the call congestion:

    C ≈ B / (1+γ) = Z B .                                                  (5.50)

From (5.48) and (5.29) we get for Engset traffic:

    C_{n,S}(γ) < B_{n,S}(γ) < E_{n,S}(γ) .                                 (5.51)
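The conversions (5.46), (5.48) & (5.49) are one-liners; a sketch with hypothetical function names, checked below for the system n = 3, S = 4, γ = 1/3 of Example 5.5.1:

```python
def C_from_E(E, n, S):
    """Traffic congestion from time congestion, (5.46)."""
    return (S - n) / S * E

def C_from_B(B, gamma):
    """Traffic congestion from call congestion, (5.48)."""
    return B / (1 + gamma * (1 - B))

def B_from_C(C, gamma):
    """Call congestion from traffic congestion, (5.49)."""
    return (1 + gamma) * C / (1 + gamma * C)
```

With E = 4/85 these give C = 1/85 and B = 1/64, consistent with (5.51): C < B < E.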

5.5 Evaluation of Engset's formula

If we try to calculate numerical values of Engset's formula directly from (5.28) (time congestion E), then we will experience numerical problems for large values of S and n. In the following we derive various numerically stable recursive formulae for E and its reciprocal I = 1/E. When the time congestion E is known, it is easy to obtain the call congestion B and the traffic congestion C by using the formulae (5.45) and (5.46). Numerically it is also simple to find any of the four parameters S, γ, n, E when we know three of them. Mathematically we may assume that n, and eventually S, are non-integral.


5.5.1 Recursion formula on n

From the general formula (4.27) recursive in n we get, using λ_x = (S−x) β and γ = β/μ:

    E_{x,S}(γ) = λ_{x−1} E_{x−1,S}(γ) / ( x μ + λ_{x−1} E_{x−1,S}(γ) ) ,

    E_{x,S}(γ) = (S−x+1) γ E_{x−1,S}(γ) / ( x + (S−x+1) γ E_{x−1,S}(γ) ) ,    E_{0,S}(γ) = 1 .   (5.52)

Introducing the reciprocal time congestion I_{n,S}(γ) = 1/E_{n,S}(γ), we find the recursion formula:

    I_{x,S}(γ) = 1 + ( x / ((S−x+1) γ) ) I_{x−1,S}(γ) ,    I_{0,S}(γ) = 1 .   (5.53)

The number of iterations is n. Both (5.52) and (5.53) are analytically exact, numerically stable and accurate recursions for increasing values of x. However, for decreasing values of x the numerical errors accumulate and the recursions are not reliable.
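The recursion (5.52) is a short loop; a minimal Python sketch (function name is mine), stable for increasing x as stated above:

```python
def engset_E_rec(n, S, gamma):
    """Time congestion E_{n,S}(gamma) by the channel recursion (5.52).

    Starts from E_{0,S} = 1 and iterates n times.
    """
    E = 1.0
    for x in range(1, n + 1):
        num = (S - x + 1) * gamma * E
        E = num / (x + num)
    return E
```

For n = 3, S = 4, γ = 1/3 the iterates are 4/7, 2/9, 4/85, as worked out in Example 5.5.1 below.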

5.5.2 Recursion formula on S

Let us denote the normalized state probabilities of a system with n channels and S−1 sources by p_{n,S−1}(i). We get the state probabilities of a system with n channels and S sources by convolving these state probabilities with the state probabilities of a single source, which are {p_{1,1}(0) = 1−a, p_{1,1}(1) = a}. We then get states from zero to n+1, truncate the state space at n, and normalize the state probabilities (cf. Example 5.2.1) (assuming p(x) = 0 when x < 0):

    q_{n,S}(i) = (1−a) p_{n,S−1}(i) + a p_{n,S−1}(i−1) ,    i = 0, 1, ..., n .   (5.54)

The obtained state probabilities q_{n,S}(i) are not normalized, because we truncate at state [n] and exclude the last term for state [n+1]: q_{n,S}(n+1) = a p_{n,S−1}(n). The normalized state probabilities p_{n,S}(i) for a system with S sources and n channels are thus obtained from the normalized state probabilities p_{n,S−1}(i) for a system with S−1 sources by:

    p_{n,S}(i) = q_{n,S}(i) / ( 1 − a p_{n,S−1}(n) ) ,    i = 0, 1, ..., n .   (5.55)


The time congestion E_{n,S}(γ) for a system with S sources can be expressed by the time congestion E_{n,S−1}(γ) for a system with S−1 sources by inserting (5.54) into (5.55):

    E_{n,S}(γ) = p_{n,S}(n)

               = ( (1−a) p_{n,S−1}(n) + a p_{n,S−1}(n−1) ) / ( 1 − a p_{n,S−1}(n) )

               = ( (1−a) E_{n,S−1}(γ) + a (n/((S−n)γ)) E_{n,S−1}(γ) ) / ( 1 − a E_{n,S−1}(γ) ) ,

where we have used the balance equation between state [n−1, S−1] and state [n, S−1]. Replacing a by using (5.10) we get:

    E_{n,S}(γ) = ( E_{n,S−1}(γ) + (n/(S−n)) E_{n,S−1}(γ) ) / ( 1 + γ {1 − E_{n,S−1}(γ)} ) .

Thus we obtain the following recursive formula:

    E_{n,S}(γ) = (S/(S−n)) E_{n,S−1}(γ) / ( 1 + γ {1 − E_{n,S−1}(γ)} ) ,    S > n ,    E_{n,n}(γ) = a^n .   (5.56)

The initial value is obtained from (5.15). Using the reciprocal blocking probability I = 1/E we get:

    I_{n,S}(γ) = ( (S−n) / (S (1−a)) ) { I_{n,S−1}(γ) − a } ,    S > n ,    I_{n,n}(γ) = a^{−n} .   (5.57)

For increasing S the number of iterations is S − n. However, numerical errors accumulate due to the multiplication by S/(S−n), which is greater than one, so the applicability is limited. Therefore it is recommended to use the recursion (5.59), given in the next section, for increasing S. For decreasing S the above formula is analytically exact, numerically stable, and accurate; however, the initial value must be known beforehand.

5.5.3 Recursion formula on both n and S

If we insert (5.52) into (5.56), respectively (5.53) into (5.57), we find:

    E_{n,S}(γ) = S a E_{n−1,S−1}(γ) / ( n + (S−n) a E_{n−1,S−1}(γ) ) ,    E_{0,S−n}(γ) = 1 ,   (5.58)

    I_{n,S}(γ) = ( n / (S a) ) I_{n−1,S−1}(γ) + (S−n)/S ,    I_{0,S−n}(γ) = 1 ,   (5.59)

which are recursive in both the number of servers and the number of sources. Both of these recursions are numerically accurate for increasing indices, and the number of iterations is n (Joys, 1967 [62]).


From the above we have the following conclusions for the recursion formulae for the Engset formula. For increasing values of the parameter, recursion formulae (5.52) & (5.53) are very accurate, and formulae (5.58) & (5.59) are almost as good. Recursion formulae (5.56) & (5.57) are numerically unstable for increasing values, but unlike the others stable for decreasing values. In general, a recursion which is stable in one direction will be unstable in the opposite direction.
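The joint recursion (5.59) can be sketched as follows (function name is mine): the channel count x and the source count s = S − n + x are increased together, starting from I_{0,S−n} = 1:

```python
def engset_I_joint(n, S, gamma):
    """Reciprocal time congestion I_{n,S}(gamma) = 1/E_{n,S}(gamma) by (5.59)."""
    a = gamma / (1 + gamma)      # offered traffic per source, (5.10)
    I = 1.0                      # initial value I_{0, S-n} = 1
    for x in range(1, n + 1):
        s = S - n + x            # the source count follows the channel count
        I = x / (s * a) * I + (s - x) / s
    return I
```

For n = 3, S = 4, γ = 1/3 this returns 85/4, i.e. E = 4/85, agreeing with (5.52).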

Example 5.5.1: Engset's loss system
We consider an Engset loss system having n = 3 channels and S = 4 sources. The call rate per idle source is β = 1/3 calls per time unit, and the mean service time (1/μ) is 1 time unit. We find the following parameters:

    γ = β/μ = 1/3 erlang           (offered traffic per idle source),
    a = γ/(1+γ) = 1/4 erlang       (offered traffic per source),
    A = S a = 1 erlang             (offered traffic),
    Z = 1 − A/S = 3/4              (peakedness).

From the state transition diagram we obtain the following table:

        i    λ(i)   μ(i)     q(i)     p(i)    i·p(i)   λ(i)·p(i)
        0    4/3      0     1.0000   0.3176   0.0000    0.4235
        1    3/3      1     1.3333   0.4235   0.4235    0.4235
        2    2/3      2     0.6667   0.2118   0.4235    0.1412
        3    1/3      3     0.1481   0.0471   0.1412    0.0157
      Total                 3.1481   1.0000   0.9882    1.0039

We find the following blocking probabilities:

    Time congestion:      E_{3,4}(1/3) = p(3) = 0.0471 ,

    Traffic congestion:   C_{3,4}(1/3) = (A − Y)/A = (1 − 0.9882)/1 = 0.0118 ,

    Call congestion:      B_{3,4}(1/3) = λ(3) p(3) / Σ_{i=0}^{3} λ(i) p(i) = 0.0157/1.0039 = 0.0156 .

We notice that E > B > C, which is a general result for the Engset case (5.51) & (Fig. 5.7). By applying the recursion formula (5.52) we, of course, get the same results:

    E_{0,4}(1/3) = 1 ,

    E_{1,4}(1/3) = (4−1+1)·(1/3)·1 / { 1 + (4−1+1)·(1/3)·1 } = 4/7 ,

    E_{2,4}(1/3) = (4−2+1)·(1/3)·(4/7) / { 2 + (4−2+1)·(1/3)·(4/7) } = 2/9 ,

    E_{3,4}(1/3) = (4−3+1)·(1/3)·(2/9) / { 3 + (4−3+1)·(1/3)·(2/9) } = 4/85 = 0.0471 ,    q.e.d.    □
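The table of the example can be rebuilt directly from the cut equations; a short sketch (all variable names are mine):

```python
# Engset system of Example 5.5.1: n = 3 channels, S = 4 sources,
# beta = 1/3 per idle source, mu = 1.
beta, mu, n, S = 1/3, 1.0, 3, 4

q = [1.0]                                    # relative state values, q(0) = 1
for i in range(1, n + 1):
    q.append(q[-1] * (S - i + 1) * beta / (i * mu))
p = [qi / sum(q) for qi in q]                # normalized state probabilities

E = p[n]                                     # time congestion = p(3)
Y = sum(i * pi for i, pi in enumerate(p))    # carried traffic
A = S * (beta / mu) / (1 + beta / mu)        # offered traffic, (5.11)
C = (A - Y) / A                              # traffic congestion
attempts = [(S - i) * beta * pi for i, pi in enumerate(p)]
B = attempts[n] / sum(attempts)              # call congestion
```

The exact values are E = 4/85, B = 1/64, C = 1/85, matching the rounded figures 0.0471, 0.0156, 0.0118.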

Example 5.5.2: Limited number of sources
The influence from the limitation in the number of sources can be estimated by considering either the time congestion, the call congestion, or the traffic congestion. The congestion values are shown in Fig. 5.7 for a fixed number of channels n, a fixed offered traffic A, and an increasing value of the peakedness Z corresponding to a number of sources S, which is given by S = A/(1−Z) (5.25). The offered traffic is defined as the traffic carried in a system without blocking (n = ∞). Here Z = 1 corresponds to a Poisson arrival process (Erlang's B-formula, E = B = C). For Z < 1 we get the Engset case, and for this case the time congestion E is larger than the call congestion B, which is larger than the traffic congestion C. For Z > 1 we get the Pascal case (Secs. 5.6 & 5.7 and Example 5.7.2).   □

5.6 Pascal Distribution

In the Binomial case the arrival intensity decreases linearly with an increasing number of busy sources. Palm & Wallström introduced a model where the arrival intensity increases linearly with the number of busy sources (Wallström, 1964 [117]). The arrival intensity in state i is given by:

    λ_i = (S + i) γ ,    0 ≤ i ≤ n ,                                       (5.60)

where γ and S are positive constants. The holding times are still assumed to be exponentially distributed with intensity μ. In this section we assume the number of channels is infinite.


Figure 5.5: State transition diagram for Negative Binomial case with innite capacity. set up a state transition diagram (Fig. 5.6 with n innite) and get the following cut equations: S p(0) = p(1) , ... ...

(S + 1) p(1) = 2 p(2) , (S + i 1) p(i 1) = i p(i) ,

To obtain statistical equilibrium it is obvious that for innite number of channels we must require that < so that the arrival rate becomes smaller than the service rate from some state. All state probabilities can be expressed by p(0). Assuming = / < 1 and using: S i we get: p(1) p(2) ... p(i) = = = S p(0) (S + 1) p(1) 2 ... = p(0) = p(0) ... S 1 S 2 ()1 , ()2 , ... ()i , S+i1 = (1)i i = (S)(S 1) . . . (S i + 1) i! (5.62)

(S + i) p(i) = (i + 1) p(i + 1) , ... ...

(5.61)

(S +i1) S p(i1) = p(0) i i (S i) p(i) (i + 1) ... = p(0) ...

p(i + 1) = ...

S ()i+1 , i+1 ...

The total sum of all probabilities must be equal to one:

    1 = p(0) \left\{ \binom{-S}{0} (-β)^0 + \binom{-S}{1} (-β)^1 + \binom{-S}{2} (-β)^2 + \ldots \right\}

      = p(0) \{(-β) + 1\}^{-S} ,    (5.63)

where we have used the generalized Newton's binomial expansion:

    (x + y)^r = \sum_{i=0}^{\infty} \binom{r}{i} x^i y^{r-i} ,    (5.64)

which by using the definition (5.62) is valid also for complex numbers, in particular real numbers (r need not be positive or integer). Thus we find the steady state probabilities:

    p(i) = \binom{-S}{i} (-β)^i (1-β)^S ,    0 ≤ i < ∞ ,  β < 1 .    (5.65)

By using (5.62) we get:

    p(i) = \binom{S+i-1}{i} β^i (1-β)^S ,    0 ≤ i < ∞ ,  β < 1 ,    (5.66)

which is the Pascal distribution (Tab. 3.1). The carried traffic is equal to the offered traffic, as the capacity is unlimited, and it may be shown that it has the following mean value and peakedness:

    A = S \frac{β}{1-β} ,    Z = \frac{1}{1-β} .

These formulæ are similar to (5.22) and (5.23). The traffic characteristics of this model may be obtained by an appropriate substitution of the parameters of the Binomial distribution, as explained in the following section.
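A minimal numerical check of the mean value and peakedness, computing (5.66) directly for S = 2, β = 1/3 and truncating the infinite sum where the tail is negligible:

```python
from math import comb

def pascal_pdf(S, beta, N=400):
    # Pascal state probabilities (5.66): p(i) = C(S+i-1, i) * beta^i * (1-beta)^S
    return [comb(S + i - 1, i) * beta**i * (1 - beta)**S for i in range(N)]

p = pascal_pdf(S=2, beta=1/3)
A = sum(i * pi for i, pi in enumerate(p))              # mean traffic
V = sum(i * i * pi for i, pi in enumerate(p)) - A * A  # variance
# A = S*beta/(1-beta) = 1 erlang and Z = V/A = 1/(1-beta) = 1.5
```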

5.7 Truncated Pascal distribution

We consider the same traffic process as in Sec. 5.6, but now we restrict the number of servers to a limited number n. The restriction γ < μ is no longer necessary, as we always obtain statistical equilibrium with a finite number of states. The state transition diagram is shown in Fig. 5.6, and the state probabilities are obtained by truncation of (5.65):

    p(i) = \frac{\binom{-S}{i} (-β)^i}{\sum_{j=0}^{n} \binom{-S}{j} (-β)^j} ,    0 ≤ i ≤ n .    (5.67)

Figure 5.6: State transition diagram for the Pascal (truncated Negative Binomial) case.

This is the truncated Pascal distribution. Formally it can be obtained from the Engset case by the following substitutions:

    S is replaced by −S ,    (5.68)
    β is replaced by −β .    (5.69)

By these substitutions all formulæ of the Bernoulli/Engset cases are valid for the truncated Pascal distribution, and the same computer programs can be used for numerical evaluation. It can be shown that the state probabilities (5.67) are valid for arbitrary holding time distributions (Iversen, 1980 [43]), like the state probabilities for Erlang and Engset loss systems (insensitivity). Assuming exponentially distributed holding times, this model has the same state probabilities as Palm's first normal form, i.e. a system with a Poisson arrival process having a random intensity distributed as a gamma distribution. Inter-arrival times are Pareto distributed, which is a heavy-tailed distribution. The model is used for modeling overflow traffic, which has a peakedness greater than one. For the Pascal case we get (cf. (5.51)):

    C_{n,S}(β) > B_{n,S}(β) > E_{n,S}(β) .    (5.70)
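As a sketch of such a computer program (our own illustration, using the positive-parameter form (5.66) of the state probabilities), the three congestion measures can be computed directly from the truncated distribution:

```python
from math import comb

def pascal_congestion(n, S, beta):
    """Time, call and traffic congestion for the truncated Pascal
    loss system (5.67) with n channels, S > 0 and 0 < beta < 1."""
    q = [comb(S + i - 1, i) * beta**i for i in range(n + 1)]
    tot = sum(q)
    p = [x / tot for x in q]                    # state probabilities
    E = p[n]                                    # time congestion
    w = [(S + i) * p[i] for i in range(n + 1)]  # arrival-rate weights (S+i)p(i)
    B = w[n] / sum(w)                           # call congestion
    A = S * beta / (1 - beta)                   # offered traffic
    Y = sum(i * p[i] for i in range(n + 1))     # carried traffic
    C = (A - Y) / A                             # traffic congestion
    return E, B, C
```

For n = 4, S = 2, β = 1/3 this gives E = 0.0279, B = 0.0575 and C = 0.0838, reproducing Example 5.7.1 and the ordering (5.70).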

Example 5.7.1: Pascal loss system
We consider a Pascal loss system with n = 4 channels and S = 2 sources. The arrival rate is γ = 1/3 calls/time unit per idle source, and the mean holding time (1/μ) is 1 time unit. We find the following parameters when we for the Engset case let S = −2 (5.68) and β = −1/3 (5.69):

    a = \frac{β}{1+β} = \frac{-1/3}{1 - 1/3} = -\frac{1}{2} ,

    A = S · a = (−2) · (−1/2) = 1 erlang ,

    Z = \frac{1}{1+β} = \frac{1}{1 - 1/3} = \frac{3}{2} .

From a state transition diagram we get the following parameters:

    i      λ(i)    μ(i)    q(i)     p(i)     i·p(i)    λ(i)p(i)
    0     0.6667     0    1.0000   0.4525   0.0000     0.3017
    1     1.0000     1    0.6667   0.3017   0.3017     0.3017
    2     1.3333     2    0.3333   0.1508   0.3017     0.2011
    3     1.6667     3    0.1481   0.0670   0.2011     0.1117
    4     2.0000     4    0.0617   0.0279   0.1117     0.0559
    Total                 2.2099   1.0000   0.9162     0.9721

We find the following blocking probabilities:

Time congestion:     E_{4,2}(1/3) = p(4) = 0.0279 .

Traffic congestion:  C_{4,2}(1/3) = \frac{A - Y}{A} = \frac{1 - 0.9162}{1} = 0.0838 .

Call congestion:     B_{4,2}(1/3) = \frac{λ(4) p(4)}{\sum_{i=0}^{4} λ(i) p(i)} = \frac{0.0559}{0.9721} = 0.0575 .

We notice that E < B < C, which is a general result for the Pascal case. By using the same recursion formula as for the Engset case (5.52), we of course get the same results:

    E_{0,2}(1/3) = 1.0000 ,

    E_{1,2}(1/3) = \frac{(2/3) · 1}{1 + (2/3) · 1} = \frac{2}{5} ,

    E_{2,2}(1/3) = \frac{(3/3) · (2/5)}{2 + (3/3) · (2/5)} = \frac{1}{6} ,

    E_{3,2}(1/3) = \frac{(4/3) · (1/6)}{3 + (4/3) · (1/6)} = \frac{2}{29} ,

    E_{4,2}(1/3) = \frac{(5/3) · (2/29)}{4 + (5/3) · (2/29)} = \frac{5}{179} = 0.0279 ,    q.e.d. □
Example 5.7.2: Peakedness: numerical example
In Fig. 5.7 we keep the number of channels n and the offered traffic A fixed, and calculate the blocking probabilities for increasing peakedness Z. For Z > 1 we get the Pascal case. For this case the time congestion E is less than the call congestion B, which is less than the traffic congestion C. We observe that both the time congestion and the call congestion have a maximum value. Only the traffic congestion gives a reasonable description of the performance of the system. □

Figure 5.7: Time congestion E, call congestion B, and traffic congestion C as functions of the peakedness Z for BPP traffic in a system with n = 20 trunks and an offered traffic of A = 15 erlang. The ordinate shows the congestion probability in percent. More comments are given in Example 5.5.2 and Example 5.7.2. For applications the traffic congestion C is the most important, as it is almost a linear function of the peakedness.

5.8 Batched Poisson arrival process

We consider an arrival process where events occur according to a Poisson process with rate λ. At each event a batch of calls (packets, jobs) arrives simultaneously. The distribution of the batch size is a discrete distribution b(i), (i = 1, 2, . . .). The batch size is at least one. In the classical Erlang loss system the batch size is always one. We choose the simplest case, where the batch size distribution is a geometric distribution (Tab. 3.1, p. 92):

    b(i) = p (1 − p)^{i−1} ,    i = 1, 2, . . . ,    (5.71)

    m_1 = \frac{1}{p} ,    (5.72)

    σ² = \frac{1-p}{p^2} ,    (5.73)

    Z_{geo} = \frac{1-p}{p} .    (5.74)

The complementary distribution function is given by:

    b(≥ i) = \sum_{j=i}^{\infty} b(j) = \frac{p (1-p)^{i-1}}{1 - (1-p)} = (1 − p)^{i−1} .    (5.75)

By the splitting theorem for the Poisson process the arrival process for batches of size i is a Poisson process with rate λ b(i). If we assume service times are exponentially distributed with rate μ, and that each member of the batch is served independently, then the distribution of the number of busy channels in a system with infinite capacity has the following mean value and peakedness (Panken & van Doorn, 1993 [96]):

    A = \frac{λ}{μ p} ,    (5.76)

    Z = \frac{1}{p} .    (5.77)

The offered traffic A is defined as the average number of batches per mean service time multiplied by the average batch size. The peakedness is greater than one, and we may model bursty traffic by this batch Poisson model.

5.8.1 Infinite capacity

If we assume balance in a cut between state [x−1] and state [x] (x = 1, 2, . . .) we find:

    x μ p(x) = λ \sum_{i=1}^{x} p(x−i) \, b(≥ i)    (5.78)

             = λ \{ p(0) b(≥ x) + p(1) b(≥ x−1) + . . . + p(x−2) b(≥ 2) + p(x−1) b(≥ 1) \} .

For a cut between states [x−2] and [x−1] we have in a similar way:

    (x−1) μ p(x−1) = λ \sum_{i=1}^{x-1} p(x−1−i) \, b(≥ i)    (5.79)

                   = λ \{ p(0) b(≥ x−1) + p(1) b(≥ x−2) + . . . + p(x−2) b(≥ 1) \} .

As we from (3.49) have b(≥ i+1) = (1−p) b(≥ i), then by multiplying (5.79) by (1−p) the right-hand side becomes identical with the right-hand side of (5.78), except for the last term in (5.78). Observing that b(≥ 1) = 1, we get:

    x μ p(x) − λ p(x−1) = (1−p) (x−1) μ p(x−1) ,    (5.80)

    p(x) = \frac{(1-p)(x-1) + λ/μ}{x} \; p(x−1) .    (5.81)

We thus only need the previous state probability to calculate the next state probability. We may start by letting p(0) = 1, calculate the states recursively, and normalize the state probabilities in each step of the recursion.
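The recursion (5.81) is easily programmed; the sketch below truncates the infinite system at a large N, after which the mean and peakedness should approach (5.76) and (5.77):

```python
def batch_poisson_pdf(N, lam, mu, p):
    # Recursion (5.81): p(x) = [(1-p)(x-1) + lam/mu] / x * p(x-1), then normalize.
    q = [1.0]
    for x in range(1, N + 1):
        q.append(((1 - p) * (x - 1) + lam / mu) / x * q[-1])
    tot = sum(q)
    return [v / tot for v in q]

pr = batch_poisson_pdf(N=400, lam=1.0, mu=1.0, p=0.5)
mean = sum(i * v for i, v in enumerate(pr))               # ~ A = lam/(mu*p) = 2
var = sum(i * i * v for i, v in enumerate(pr)) - mean**2  # var/mean ~ Z = 1/p = 2
```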

5.8.2 Finite capacity

If we have a finite number of channels and the batch size is bigger than the idle capacity, then we may either accept as many calls as possible and block the remaining calls (partial-blocking), or we may block the total batch (batch-blocking). For partial-blocking we get the same relative state probabilities as above. This is similar to the classical loss systems, where we may truncate the state space and re-normalize the state probabilities. For batch-blocking the balance equations become:

    x μ p(x) = λ \sum_{i=0}^{x-1} p(i) \, b(x−i ≤ u ≤ n−i) ,    0 < x ≤ n ,    (5.82)

where the batch size u to be accepted in state i now has an upper limit n − i. By using (5.75) we get the balance equation:

    x μ p(x) = λ \sum_{i=0}^{x-1} p(i) \left\{ (1-p)^{x-i-1} − (1-p)^{n-i} \right\} .    (5.83)

5.8.3 Performance measures

From the state probabilities we find the time, call, and traffic congestion in the usual way. The batch Poisson arrival process has the PASTA property, and therefore the time, call, and traffic congestion are equal. The traffic congestion is obtained from the state probabilities in the usual way:

    Y = \sum_{i=0}^{n} i \, p(i) ,    (5.84)

    C = \frac{A - Y}{A} ,    (5.85)

where the offered traffic A is given by (5.76). If we rewrite (5.80) we get:

    x μ p(x) = \{ (1−p)(x−1) μ + λ \} \, p(x−1) .

For a Pascal traffic process we have:

    x μ p(x) = γ (S + x − 1) \, p(x−1) .

Equalizing the right-hand sides, we get for the factors to (x−1):

    (1−p) μ = γ ,  i.e.  β = \frac{γ}{μ} = (1−p) = \frac{Z-1}{Z} ,    (5.86)

where we have used (5.77). For the constant factors we get, exploiting (5.76) and (5.77):

    λ = γ S ,  i.e.  S = \frac{λ}{γ} = \frac{A p}{1-p} = \frac{A}{Z-1} ,    (5.87)

which is in agreement with the Pascal case (Z > 1). So if we have a batch geometric arrival process with mean value A = λ/(μ p) (5.76) and peakedness Z = 1/p (5.77), then we get an equivalent Pascal stream by choosing β as in (5.86) and S as in (5.87). Thus the batch Poisson process is identical with a Pascal traffic stream. (The following will be elaborated further.) For partial-blocking (pb) in the batch Poisson process we have E_pb = B_pb = C_pb, whereas the equivalent Pascal model gets the same traffic congestion C_pas = C_pb, but smaller values of the call congestion B_pas and the time congestion E_pas. For batch-blocking (bb) and a single traffic stream we get from the time congestion E_bb:

    \frac{E_{bb}/p}{1 + E_{bb}(1/p − 1)} = \frac{E_{bb} \, Z}{1 + E_{bb}(Z − 1)} .    (5.88)

This is close to the traffic congestion for the Pascal model, as the traffic congestion is approximately proportional to the time congestion times the peakedness.
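The equivalence can be verified numerically for partial-blocking; the sketch below (our own illustration) compares the traffic congestion of the truncated batch Poisson system (recursion (5.81), re-normalized) with the truncated Pascal system obtained via (5.86) and (5.87):

```python
from math import comb

def batch_traffic_congestion(n, lam, mu, p):
    # Partial-blocking: truncate recursion (5.81) at n and re-normalize.
    q = [1.0]
    for x in range(1, n + 1):
        q.append(((1 - p) * (x - 1) + lam / mu) / x * q[-1])
    A = lam / (mu * p)                               # offered traffic (5.76)
    Y = sum(i * v for i, v in enumerate(q)) / sum(q)
    return (A - Y) / A

def pascal_traffic_congestion(n, S, beta):
    q = [comb(S + i - 1, i) * beta**i for i in range(n + 1)]
    A = S * beta / (1 - beta)
    Y = sum(i * v for i, v in enumerate(q)) / sum(q)
    return (A - Y) / A

# Batch parameters lam = 2/3, mu = 1, p = 2/3 give A = 1 and Z = 3/2;
# the equivalent Pascal stream has beta = (Z-1)/Z = 1/3 and S = A/(Z-1) = 2.
C_pb = batch_traffic_congestion(4, 2/3, 1.0, 2/3)
C_pas = pascal_traffic_congestion(4, 2, 1/3)
```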
Updated: 2010.03.04

Chapter 6

Overflow theory

In this chapter we consider systems with limited accessibility, where traffic blocked from a primary group of channels overflows to secondary groups. Both carried traffic and overflow traffic have properties different from pure chance traffic (PCT), and therefore we cannot use the classical traffic models for these streams. In Sec. 6.1 we describe a typical problem from telecommunication networks, where we use limited accessibility both for service protection and for saving equipment. The exact solution by state probabilities is dealt with in Sec. 6.2. This approach is only possible for very small systems because of the state space explosion. Only for Erlang's ideal grading are we able to obtain a solution for any values of the parameters. For real systems we have to use approximate solutions or computer simulations. Approximations are based either on state space (Sec. 6.3 to Sec. 6.6) or on time space (Sec. 6.7). In Sec. 6.3 we describe the carried traffic and the lost traffic by mean value and variance (or peakedness). Then we assume that two traffic streams which have the same mean and variance are equivalent, thereby ignoring moments of order higher than two. For a given mean and variance of overflow traffic we are able to find an Erlang loss system (defined by offered traffic and number of channels) which has the same mean and variance. This is exploited in the ERT method (Sec. 6.4), which is the method most used in practice. Fredericks & Hayward's method (Sec. 6.5) is applicable for both smooth and bursty traffic, and easy to apply. It uses a simple transformation of the parameters of Erlang's loss model, and is based on an optimal splitting of the traffic process. Other state-based methods are described in Sec. 6.6. In particular, the method based on the BPP modeling paradigm, using traffic congestion, is of interest. Methods based on time space are in general more complex. State-space based methods built on Erlang's loss model only allow for two parameters (mean and variance). Methods based on time space allow for any number of parameters. In Sec. 6.7 we describe the application of interrupted Poisson processes and Cox-2 distributions. They both have three parameters. Using general Cox distributions or Markov modulated Poisson processes (MMPP), more parameters are available.

6.1 Limited accessibility

In this section we consider systems with restricted (limited) accessibility, i.e. systems where a subscriber or a traffic flow only has access to k specific channels out of a total of n (k ≤ n). If all k channels are busy, then a call attempt is blocked even if there are idle channels among the remaining (n−k) channels. An example is shown in Fig. 6.1, where we consider a hierarchical network with traffic from A to B, and from A to C. From A to B there is a direct (primary) route with n1 channels. If these channels are all busy, then the call is directed to the alternative (secondary) route via T to B. In a similar way, the traffic from A to C has a first-choice route AC and an alternative route ATC. If we assume the routes TB and TC are without blocking, then we get the accessibility scheme shown to the right in Fig. 6.1. From this we notice that the total number of channels is (n1 + n2 + n12), and that the traffic AB only has access to (n1 + n12) of these. In this case sequential hunting among the routes should be applied, so that a call is only routed via the group n12 when all n1 primary channels are busy.

Figure 6.1: Telecommunication network with alternate routing and the corresponding accessibility scheme, which is called an O'Dell grading. We assume the links between the transit exchange T and the exchanges B and C are without blocking. The n12 channels are common to both traffic streams.

It is typical for a hierarchical network that it possesses a certain service protection. Independently of how high the traffic from A to C is, it will never get access to the n1 channels. On the other hand, we may block calls even if there are idle channels, and therefore the utilization will always be lower than for systems with full accessibility. However, the utilization will be higher than for two separate systems with the same total number of channels. The common channels allow for a certain traffic balancing between the two groups. Historically, it was necessary to consider restricted accessibility because the electro-mechanical systems had very limited intelligence and limited selector capacity (accessibility). In digital systems we do not have these restrictions, but still the theory of restricted accessibility

is important, both in network planning and in guaranteeing a certain grade-of-service.
Figure 6.2: State transition diagram for a small O'Dell grading (Fig. 6.1) with n = 3 channels, n1 = n2 = n12 = 1 (accessibility k = 2), ordered hunting, and offered traffic A = λ, equally distributed between the two groups (mean service time = 1 time unit). The detailed state transition diagram has 8 states. We specify the state of each channel. The state probabilities can only be obtained by setting up all 8 balance equations (7 node equations and a normalization condition) and solving these linear equations.

6.2 Exact calculation by state probabilities

The problem of evaluating systems with limited accessibility is due to the state space explosion. The number of states is in general so large that the problems become intractable.

6.2.1 Balance equations

To have full information about the state of the system it is not sufficient to know how many channels are busy; we should also know which group a busy channel belongs to. Thus for the system in Fig. 6.1 the number of states will be (n1 + 1)(n2 + 1)(n12 + 1). In the worst case we have to specify the state of each channel and thus get 2^n states. For a very small O'Dell grading with n1 = n2 = n12 = 1 we get 8 states, as shown in Fig. 6.2. For real systems the number of states becomes very large, and it is not convenient to find the state probabilities from balance equations, or to find performance measures from the state probabilities. Only Erlang's ideal grading has a simple and general solution.
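The state-space counts mentioned above can be sketched as a quick check (for the grading of Fig. 6.2):

```python
from math import prod

def grading_states(n1, n2, n12):
    # Number of states (n1+1)(n2+1)(n12+1) when we track group occupancies.
    return prod(g + 1 for g in (n1, n2, n12))

# For n1 = n2 = n12 = 1 this gives the 8 states of Fig. 6.2, the same
# as tracking each of the n = 3 channels individually (2**3 states).
```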


6.2.2 Erlang's ideal grading

Erlang's ideal grading (EIG) is the only system with limited accessibility where the exact blocking probability can be calculated for any value of the number of channels n, the accessibility k, and the offered traffic A. It is also named Erlang's interconnection formula (EIF). The grading is optimal in the sense that it can carry more traffic than any other grading with random hunting and the same parameters. A small grading with sequential or intelligent hunting can sometimes carry a little more traffic. In this case EIG is very close to the optimal value, and the great importance of Erlang's ideal grading is that it can be used as an optimal reference value for the utilization which can be obtained in practice for any grading. Historically, there have been many misunderstandings about EIG, and numerically it has been difficult to evaluate the formula without computers. However, it is a model of basic theoretical interest. It can be shown that Erlang's ideal grading is insensitive to the holding time distribution.

In our terminology we consider PCT-I traffic offered to n identical channels. Each time a call attempt arrives, it chooses k channels at random among the n channels, and seizes an idle channel among these k channels, if there is any. If all k channels chosen are busy, the call attempt is lost. In order to implement this grading in an electromechanical system we divide the traffic into g inlet groups. By random hunting the number of inlet groups is:

    g_{rt} = \binom{n}{k}    (6.1)

(or a whole multiple of this). This is the number of ways we can choose k channels among n channels. Each channel will appear in all possible different combinations of the other channels. By ordered hunting the number of inlet groups becomes:

    g_{oh} = \binom{n}{k} \, k!    (6.2)

(or a whole multiple of this). By ordered hunting the hunting position of a channel is important, and we therefore ensure that all possible permutations of the k hunting positions occur once. In a digital stored-program-controlled (SPC) system we do not construct these groups, but by random numbers we may choose k channels at random, and thus construct a random group when needed. In Fig. 6.3 a realization of Erlang's ideal grading is shown.
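For the grading of Fig. 6.3 (n = 4, k = 2), the group counts (6.1) and (6.2) can be sketched as:

```python
from math import comb, factorial

n, k = 4, 2
g_random = comb(n, k)                   # (6.1): 6 inlet groups
g_ordered = comb(n, k) * factorial(k)   # (6.2): 12 inlet groups
```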

Figure 6.3: An example of Erlang's ideal grading (Erlang's interconnection formula, EIG) with n = 4 channels and k = 2. By random hunting the physical grading has g_{rt} = 6 groups (upper part); by sequential hunting the grading has g_{oh} = 12 inlet groups. The offered traffic is distributed among the groups, so all groups receive PCT-I traffic with the same intensity.

State probabilities

Under the above-mentioned assumptions we get a system where all channels are offered the same traffic load, and therefore all have the same probability of being occupied at an arbitrary

point of time. By exploiting this symmetry it is possible to set up the state equations under the assumption of statistical equilibrium in the such a way that we obtain the same advantages as in a full accessible group, where a state is uniquely determined by the total number of busy channels. Fig. 6.4 shows the state transition diagram of an EIG with the same parameters as the ODell grading in Fig. 6.2. The state transition diagram is reversible and has local balance (Sec. 7.2). These are properties we consider further in connections with multi-dimensional loss systems and networks (Chap. 7). For a call that arrives when i channels are busy, the blocking probability is equal to the probability that all k channels chosen at random are among the i busy channels. For i < k

166
2

CHAPTER 6. OVERFLOW THEORY


Figure 6.4: State transition diagram for Erlang's ideal grading with n = 3 channels, accessibility k = 2, and offered traffic A = λ (mean service time = time unit). The detailed state transition diagram has 8 states, and there is local balance. The state is a list of the individual busy channels. Due to symmetry, the detailed state transition diagram can be aggregated into a one-dimensional state transition diagram (shown in the lower part of the figure) with the same number of states as a full accessible group.

no calls are lost. For k ≤ i ≤ n the blocking probability for a call attempt becomes:

    b_i = \binom{i}{k} / \binom{n}{k} ,    k ≤ i ≤ n .    (6.3)

For i < k this is also valid, as we by definition have \binom{i}{k} = 0 for i < k. The denominator is the number of different ways we can choose k channels among n channels. The numerator is the number of times all k chosen channels are busy.

We look for the steady state probabilities p(i) of the system. The cut flow balance equation between state i−1 and state i is:

    λ (1 − b_{i−1}) p(i−1) = i · p(i) ,

as the mean holding time is chosen as time unit. Thus we get:

    p(i) = \frac{λ(1 − b_{i−1}) · λ(1 − b_{i−2}) ··· λ(1 − b_0)}{i · (i−1) ··· 1} · p(0)

         = Q_i · \frac{A^i}{i!} · p(0) ,

where

    Q_i = \prod_{j=0}^{i−1} (1 − b_j) ,    i = 1, 2, . . . , n ,    Q_0 = 1 .    (6.4)

The steady state probabilities become (Brockmeyer, 1948 [12], pp. 113-119):

    p(i) = \frac{Q_i · A^i/i!}{\sum_{j=0}^{n} Q_j · A^j/j!} ,    i = 0, 1, . . . , n .    (6.5)

A call is blocked in state i with probability b_i, and the total blocking probability of Erlang's ideal grading becomes:

    E = \sum_{i=0}^{n} b_i · p(i) ,    (6.6)

    E = \frac{\sum_{i=0}^{n} b_i · Q_i · A^i/i!}{\sum_{i=0}^{n} Q_i · A^i/i!} .    (6.7)

Due to the Poisson arrival process (PASTA property) we have E = B = C. For k = n we obtain Erlang's B-formula (4.10), since b_i = 0 for i < n and b_n = 1.
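These formulas are directly programmable. Below is a small Python sketch (the function names are ours, not the book's) that evaluates (6.3)-(6.7) and checks that for k = n the result coincides with Erlang's B-formula:

```python
from math import comb, factorial

def erlang_b(n, A):
    """Erlang's B-formula E_n(A), computed by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def eig_blocking(n, k, A):
    """Blocking probability of Erlang's ideal grading, eqs. (6.3)-(6.7)."""
    # (6.3): b_i = C(i,k)/C(n,k); math.comb returns 0 when i < k
    b = [comb(i, k) / comb(n, k) for i in range(n + 1)]
    # (6.4): Q_i = prod_{j=0}^{i-1} (1 - b_j), with Q_0 = 1
    Q = [1.0] * (n + 1)
    for i in range(1, n + 1):
        Q[i] = Q[i - 1] * (1.0 - b[i - 1])
    # (6.5): unnormalized state probabilities Q_i * A^i / i!
    q = [Q[i] * A ** i / factorial(i) for i in range(n + 1)]
    # (6.6)-(6.7): E = sum of b_i p(i)
    return sum(bi * qi for bi, qi in zip(b, q)) / sum(q)

print(round(eig_blocking(3, 2, 1.0), 4))   # restricted accessibility k = 2
print(round(eig_blocking(3, 3, 1.0), 4), round(erlang_b(3, 1.0), 4))  # both 0.0625
```

For n = 3, k = 2 and A = 1 erlang, the reduced accessibility noticeably increases the blocking compared with the full accessible case k = n = 3.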

Upper limit of channel utilization

In a trunk (= channel) group there is correlation between the traffic carried by two different channels. On the average each channel carries the traffic y, so the probability that a single channel is busy equals y. However, the probability that two channels chosen at random are busy at the same time is not y^2, due to the correlation. Only when the channel group is very big does the correlation between the carried traffic on two channels become small. If n becomes very big and k is limited (k ≪ n), then the congestion becomes:

    E = y^k = \left\{ \frac{A(1−E)}{n} \right\}^k ,    n → ∞ ,    (6.8)

as y = A(1−E)/n is the carried traffic per trunk (channel). It can be shown that (6.8) is the theoretical lower bound for the blocking in a grading with random hunting and hunting capacity k. The utilization per trunk therefore has the upper bound:

    \lim_{n→∞} y = \frac{A(1−E)}{n} = E^{1/k} < 1 .    (6.9)

Notice that this bound is less than one and independent of n (Fig. 6.5). This formula gives a linear relation between the carried traffic A(1−E) and the number of channels n, i.e. a fixed carried traffic per channel.
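The inequality behind (6.8)-(6.9), E ≥ y^k and hence y ≤ E^{1/k}, can be illustrated numerically. The sketch below (our own helper, the same computation as (6.3)-(6.7)) evaluates the carried traffic per channel for growing groups at a fixed load per channel:

```python
from math import comb, factorial

def eig_blocking(n, k, A):
    """Blocking of Erlang's ideal grading, eqs. (6.3)-(6.7)."""
    b = [comb(i, k) / comb(n, k) for i in range(n + 1)]
    Q = [1.0] * (n + 1)
    for i in range(1, n + 1):
        Q[i] = Q[i - 1] * (1.0 - b[i - 1])
    q = [Q[i] * A ** i / factorial(i) for i in range(n + 1)]
    return sum(bi * qi for bi, qi in zip(b, q)) / sum(q)

k = 2
for n, A in [(10, 12.0), (30, 36.0), (90, 108.0)]:  # fixed offered load A/n = 1.2
    E = eig_blocking(n, k, A)
    y = A * (1.0 - E) / n        # carried traffic per channel
    # (6.8)-(6.9): E >= y**k, i.e. y <= E**(1/k) < 1
    print(n, round(y, 4), round(E ** (1.0 / k), 4))
```

As n grows, the per-channel utilization y approaches its bound E^{1/k} from below.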

Figure 6.5: Carried traffic y per channel as a function of the number of channels for Erlang's ideal grading with fixed blocking (E = 0.01). k = n corresponds to a full accessible group. For a fixed value of k, the carried traffic per channel y has an upper limit, which is obtained for n → ∞ (6.9). This upper limit is indicated to the right.

6.3 Overflow theory

Classical traffic models assume that the traffic offered to a system is pure chance traffic type one or two, PCT-I or PCT-II. In communication networks with alternative traffic routing, the traffic blocked from the primary group is offered to an overflow group, and this overflow traffic has properties different from PCT traffic, as discussed in Sec. 3.7. Therefore, we cannot use the classical models for evaluating blocking probabilities of overflow traffic.
Example 6.3.1: Group divided into two
Let us consider a group with 16 channels which is offered 10 erlang PCT-I traffic. By using Erlang's B-formula we find the lost traffic:

    A_ℓ = A · E_{16}(10) = 10 · 0.02230 = 0.2230 [erlang] .

We now assume sequential hunting and split the 16 channels into a primary group and an overflow group, each of 8 channels. By using Erlang's B-formula we find the overflow traffic from the primary group equal to:

    A_o = A · E_8(A) = 10 · 0.33832 = 3.3832 [erlang] .

This traffic is offered to the overflow group.

Applying Erlang's B-formula to the overflow group we find the lost traffic from this group:

    A_ℓ = A_o · E_8(A_o) = 3.3832 · 0.01456 = 0.04927 [erlang] .

In this way the total blocking probability becomes 0.4927%, which is much less than the correct result 2.23%. We have made an error by applying the B-formula to the overflow traffic, which is not PCT-I traffic, but more bursty. □
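The numbers in this example are easy to reproduce with the standard recursion for Erlang's B-formula; the following Python sketch (variable names are ours) repeats the calculation:

```python
def erlang_b(n, A):
    """Erlang's B-formula E_n(A) by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

A = 10.0  # offered PCT-I traffic

# correct: one full accessible group of 16 channels
lost_full = A * erlang_b(16, A)             # about 0.2230 erlang

# erroneous: apply the B-formula once more to the (bursty) overflow traffic
A_over = A * erlang_b(8, A)                 # about 3.3832 erlang overflows
lost_wrong = A_over * erlang_b(8, A_over)   # about 0.0493 erlang

print(round(lost_full, 4), round(A_over, 4), round(lost_wrong, 4))
```

The discrepancy between the two lost-traffic figures is exactly the error discussed above: the overflow stream is treated as if it were Poissonian.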

In the following we describe two classes of models for overflow traffic. We can in principle study the traffic process either vertically or horizontally. By state space (vertical) studies we consider the state probabilities (Secs. 6.3.1-6.6.3). By time space (horizontal) studies we analyze the interval between call arrivals, i.e. the inter-arrival time distribution (Sec. 6.7).
[Figure: three overflow systems, each with n primary channels: Kosten's system, Brockmeyer's system, and Schehrer's system]

Figure 6.6: Different overflow systems described in the literature.

6.3.1 State probabilities of overflow systems

Let us consider a full accessible group with ordered (sequential) hunting. The group is split into a primary group with n channels and an overflow group with infinite capacity. The offered traffic A is assumed to be PCT-I. This is called Kosten's system (Fig. 6.6). The state of the system is described by a two-dimensional vector:

    p(i, j) ,    0 ≤ i ≤ n ,    0 ≤ j < ∞ ,    (6.10)

which is the probability that i channels are occupied in the primary group and j channels in the overflow group at a random point of time. The state transition diagram is shown in Fig. 6.7. Kosten (1937 [76]) analyzed this model and derived the marginal state probabilities:


Figure 6.7: State transition diagram for Kosten's system, which has a primary group with n channels and an unlimited overflow group. The states are denoted by [i, j], where i is the number of busy channels in the primary group, and j is the number of busy channels in the overflow group. The mean holding time is chosen as time unit.
    p(i, ·) = \sum_{j=0}^{∞} p(i, j) ,    0 ≤ i ≤ n ,    (6.11)

    p(·, j) = \sum_{i=0}^{n} p(i, j) ,    0 ≤ j < ∞ .    (6.12)

Riordan (1956 [102]) derived the moments of the marginal state probability distributions of the two groups. Mean value (carried traffic) and peakedness (= variance/mean ratio) become:

Primary group:

    m_{1,p} = A {1 − E_n(A)} ,    (6.13)

    Z_p = v_p / m_{1,p} = 1 − A {E_{n−1}(A) − E_n(A)}    (6.14)

        = 1 − F_{n−1}(A) ≤ 1 ,

where F_{n−1}(A) is the improvement function of Erlang's B-formula.

Secondary group = overflow group:

    m_1 = A · E_n(A) ,    (6.15)

    Z = \frac{v}{m_1} = 1 − m_1 + \frac{A}{n + 1 − A + m_1} ≥ 1 .    (6.16)

For a fixed offered traffic, Fig. 6.8 shows that the peakedness of overflow traffic has a maximum as a function of the number of channels. Peakedness has the dimension [channels]. In practice we estimate the offered traffic by measuring the carried traffic. The peakedness is not measured, but is used when dimensioning networks by the above theory. For PCT-I traffic the peakedness is equal to one, and the blocking probability is calculated by using the Erlang-B formula. If the peakedness is less than one (6.14), the traffic is called smooth, and it experiences less congestion than PCT-I traffic. If the peakedness is larger than one, the traffic is called bursty, and it experiences larger congestion than PCT-I traffic. Overflow traffic is usually bursty (6.16).

Brockmeyer (1954 [11]) derived the state probabilities and moments of a system with a limited overflow group, which is called Brockmeyer's system (Fig. 6.6). Bech (1954 [6]) did the same by using matrix equations, and obtained more complicated and more general expressions. Brockmeyer's system is further generalized by Schehrer, who also derived higher order moments for successive finite overflow groups (Fig. 6.6).

Wallström (1966 [118]) derived state probabilities and moments for overflow traffic of a generalized Kosten system, where the arrival intensity depends either upon the total number of calls in the system (Engset model), or upon the number of calls in the primary group only.
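Riordan's formulas are straightforward to evaluate. The sketch below (our own function names) tabulates the peakedness of the overflow traffic for A = 10 erlang and locates the maximum shown in Fig. 6.8:

```python
def erlang_b(n, A):
    """Erlang's B-formula E_n(A) by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

def overflow_moments(n, A):
    """Mean and peakedness of the traffic overflowing from n channels
    offered A erlang of PCT-I traffic, Riordan's formulas (6.15)-(6.16)."""
    m1 = A * erlang_b(n, A)
    Z = 1.0 - m1 + A / (n + 1.0 - A + m1)
    return m1, Z

A = 10.0
Zs = [overflow_moments(n, A)[1] for n in range(29)]
print(round(Zs[0], 4))            # n = 0: the overflow is the Poisson stream, Z = 1
n_max = max(range(29), key=lambda n: Zs[n])
print(n_max, round(Zs[n_max], 2))   # location and size of the maximum, cf. Fig. 6.8
```

The maximum occurs when n is a little larger than A, in agreement with the discussion of Fig. 6.8.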

6.4 Equivalent Random Traffic Method

This equivalence method is called the Equivalent Random Traffic method (ERT-method = ERM), Wilkinson's method, or the Wilkinson-Bretschneider method. It was published in the same year in the USA by Wilkinson (1956 [119]) and in Germany by Bretschneider (1956 [8]). It is a moment-matching method, approximating the first two moments of the state probabilities of an unknown traffic process with the first two moments of overflow traffic from Erlang's loss system. It plays a key role when dimensioning telecommunication networks. (EART is an erroneous name for ERT in Cisco literature!)

6.4.1 Preliminary analysis

Let us consider a group with ℓ channels which is offered g traffic streams (Fig. 6.9). The traffic streams may be traffic which is offered from other exchanges to a transit exchange,


Figure 6.8: Peakedness Z of overflow traffic as a function of the number of channels for a fixed value of offered Poisson (PCT-I) traffic. Notice that Z has a maximum. When n = 0 all the offered traffic overflows and Z = 1. When n becomes very large, call attempts are seldom blocked, and the blocked attempts will be mutually independent. Therefore, the process of overflowing calls converges to a Poisson process (Chap. 3).

and therefore they cannot be described by classical traffic models. Thus we do not know the distributions (state probabilities) of the traffic streams, but we are satisfied (as is often the case in applications of statistics) with characterizing the i-th traffic stream by its mean value m_{1,i} and variance v_i. With this simplification we will consider two traffic streams as being equivalent, if their state probability distributions have the same mean value and variance.

The total traffic offered to the group with ℓ channels has the mean value (2.43):

    m_1 = \sum_{i=1}^{g} m_{1,i} .    (6.17)

We assume that the traffic streams are independent (non-correlated), and thus the variance of the total traffic stream becomes (2.44):

    v = \sum_{i=1}^{g} v_i .    (6.18)

Figure 6.9: Application of the ERT-method to a system having g independent input traffic streams offered to a common group of ℓ channels. The aggregated process of the g traffic streams is said to be equivalent to the traffic overflowing from an Erlang loss system, when the overflow traffic from the two systems has the same mean value and variance, (6.17) & (6.18).

The total traffic is characterized by m_1 and v. So far we assume that m_1 < v. We now consider this traffic to be equivalent to a traffic stream which is lost from a full accessible group and has the same mean value m_1 and variance v. In Fig. 6.9 the upper system is replaced by the equivalent random system in the lower part, which is a full accessible Erlang loss system with (n_x + ℓ) channels and offered traffic A_x. For given values of m_1 and v we therefore solve equations (6.15) and (6.16) with respect to n and A. It can be shown that there exists a unique solution, which we denote by (n_x, A_x). The traffic lost from the total system is obtained by Erlang's B-formula:

    A_ℓ = A_x · E_{n_x+ℓ}(A_x) .    (6.19)

As the offered traffic is m_1, the traffic congestion of the system becomes:

    C = A_ℓ / m_1 .    (6.20)

Important note: the blocking probability is not E_{n_x+ℓ}(A_x). We should remember the last step (6.20), where we relate the lost traffic to the originally offered traffic, which in this case is given by m_1 (6.17). Thus it is the traffic congestion C we find. We notice that if the overflow traffic is from a single primary group with PCT-I traffic, then the method is exact. In the general case with more traffic streams the method is approximate, and it does not yield the exact blocking probability.
Example 6.4.1: Paradox
In Sec. 3.6 we derived Palm's theorem, which states that by superposition of many independent arrival processes, we locally get a Poisson process. This is not contradictory with (6.17) and (6.18), because these formulæ are valid globally. □

6.4.2 Numerical aspects

When applying the ERT-method we need to calculate (m_1, v) for given values of (A, n), and vice versa. It is easy to obtain (m_1, v) for given (A, n) by using (6.15) & (6.16). To obtain (A, n) for given (m_1, v), we have to solve two equations with two unknowns. This requires an iterative procedure, since E_n(A) cannot be solved explicitly with respect to either n or A (Sec. 4.5). However, we can solve (6.16) with respect to n:

    n = A · \frac{m_1 + v/m_1}{m_1 + v/m_1 − 1} − m_1 − 1 ,    (6.21)

so that we can find n when A is known. Thus A is the only independent variable. We can use the Newton-Raphson iteration method to find the unknown A by introducing the function:

    f(A) = m_1 − A · E_n(A) = 0 .

For a proper starting value A_0 we iteratively improve this value, until the resulting values of m_1 and v/m_1 become close enough to the known values. Yngvé Rapp (1965 [101]) has proposed a simple approximate solution for A, which can be used as initial value A_0 in the iteration:

    A ≈ v + 3 · \frac{v}{m_1} · \left( \frac{v}{m_1} − 1 \right) .    (6.22)

From the A obtained by iteration we then get n, using (6.21). Rapp's approximation itself is sufficiently accurate for practical applications, except when A_x is very small. The peakedness Z = v/m_1 has a maximum value, which is obtained when n is a little larger than A (Fig. 6.8). For some combinations of m_1 and v/m_1 the convergence is critical, but when using computers we can always find the correct solution.

Using computers we operate with a non-integral number of channels, and only at the end of the calculations do we choose an integral number of channels greater than or equal to the obtained result (typically a module of a certain number of channels: 8 in GSM, 30 in PCM, etc.). When using tables of Erlang's B-formula, we should in every step choose the number of channels in a conservative way, so that the blocking probability aimed at becomes a minimum value (worst case).

The above-mentioned method assumes that v/m_1 is larger than one, so it is only valid for bursty traffic. Individual traffic streams in Fig. 6.9 are allowed to have v_i/m_{1,i} < 1, provided the total aggregated traffic stream is bursty. Bretschneider (1973 [9]) extended the method to include a negative number of channels during the calculations. In this way it is possible to deal with smooth traffic (EERT-method = Extended ERT method).
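The procedure can be sketched as follows. This is a simplified illustration, not the book's algorithm in every detail: the B-formula is extended to a non-integral number of channels by linear interpolation between integer values (the exact continuous extension uses an integral representation), and the root of f(A) is found by bisection from Rapp's starting value instead of Newton-Raphson. All function names are ours.

```python
def erlang_b(n, A):
    """Erlang B; non-integral n handled by linear interpolation between
    the neighbouring integers (a practical device; the exact continuous
    extension uses an integral representation instead)."""
    def b(k):
        E = 1.0
        for i in range(1, k + 1):
            E = A * E / (i + A * E)
        return E
    k = int(n)
    return b(k) + (n - k) * (b(k + 1) - b(k))

def ert_equivalent(m1, v):
    """Equivalent random system (A_x, n_x) whose overflow traffic has mean
    m1 and variance v (v > m1), i.e. eqs. (6.15)-(6.16) solved backwards.
    n follows A through (6.21); A is found by bisection on
    f(A) = m1 - A E_n(A), starting from Rapp's approximation (6.22)."""
    Z = v / m1
    def n_of(A):                        # (6.21), clamped at 0 channels
        return max(0.0, A * (m1 + Z) / (m1 + Z - 1.0) - m1 - 1.0)
    def f(A):
        return m1 - A * erlang_b(n_of(A), A)
    A = v + 3.0 * Z * (Z - 1.0)         # Rapp's starting value (6.22)
    lo = hi = A                         # bracket the root (f increases with A)
    while f(lo) > 0.0:
        lo /= 2.0
    while f(hi) < 0.0:
        hi *= 2.0
    for _ in range(100):                # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    A = 0.5 * (lo + hi)
    return A, n_of(A)

# round-trip: moments of the overflow from n = 8 channels offered 10 erlang
m1 = 10.0 * erlang_b(8, 10.0)                  # 3.3832 erlang
Z = 1.0 - m1 + 10.0 / (8 + 1 - 10.0 + m1)      # Riordan (6.16)
Ax, nx = ert_equivalent(m1, Z * m1)
print(round(Ax, 2), round(nx, 2))              # close to (10, 8)
```

The round-trip recovers the primary group (A, n) = (10, 8) from the moments of its own overflow traffic, the single-stream case for which the ERT-method is exact, up to the interpolation error.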


6.4.3 Individual stream blocking probabilities

The individual traffic streams (parcels) in Fig. 6.9 do not have the same mean value and variance, and therefore they will not experience the same blocking probabilities in the common overflow group with ℓ channels. From the above we calculate the mean blocking probability (6.20) for all traffic streams aggregated. Experiments show that the blocking probability of a stream is approximately proportional to its peakedness Z = v/m_1. We can split the total lost traffic into individual lost traffic parcels by assuming that the traffic lost by stream i is proportional to both the mean value m_{1,i} and the peakedness Z_i = v_i/m_{1,i}. Introducing a constant of proportionality c we get:

    A_{ℓ,i} = A_ℓ · m_{1,i} · Z_i · c = A_ℓ · v_i · c .

We find the constant c from the total lost traffic:

    A_ℓ = \sum_{i=1}^{g} A_{ℓ,i} = \sum_{i=1}^{g} A_ℓ · v_i · c = A_ℓ · v · c .

Thus we find c = 1/v. Inserting this in (6.23), the lost traffic of stream i becomes:

    A_{ℓ,i} = A_ℓ · \frac{v_i}{v} .    (6.23)

The total lost traffic is thus distributed among the individual streams according to the ratio of the individual variance of a stream to the total variance of all streams. The traffic congestion C_i for traffic stream i, which is called the parcel blocking probability for stream i, becomes:

    C_i = \frac{A_{ℓ,i}}{m_{1,i}} = A_ℓ · \frac{Z_i}{v} .    (6.24)

6.4.4 Individual group blocking probabilities

Furthermore, we can divide the blocking probability among the individual groups (primary, secondary, etc.). Consider the equivalent group at the bottom of Fig. 6.9 with n_x primary channels and ℓ secondary (overflow) channels. We may calculate both the blocking probability due to the n_x primary channels and the blocking probability due to the ℓ secondary channels. The probability that the traffic is lost by the ℓ channels is equal to the probability that the traffic is lost by the n_x + ℓ channels, under the condition that the traffic is offered to the ℓ channels:

    H(ℓ) = \frac{A · E_{n_x+ℓ}(A)}{A · E_{n_x}(A)} = \frac{E_{n_x+ℓ}(A)}{E_{n_x}(A)} .    (6.25)

The total loss probability can therefore be related to the two groups:

    E_{n_x+ℓ}(A) = E_{n_x}(A) · H(ℓ) .    (6.26)

By using this expression, we can find the blocking for each channel group and then, for example, obtain information about which group should be increased by adding more channels. Formula (6.25) is called the Palm-Jacobæus formula.
Example 6.4.2: Example 6.3.1 continued
In Example 6.3.1 the blocking probability of the primary group of 8 channels is E_8(10) = 0.33832. The blocking of the overflow group is:

    H(8) = \frac{E_{16}(10)}{E_8(10)} = \frac{0.02230}{0.33832} = 0.06591 .

The total blocking of the system is:

    E_{16}(10) = E_8(10) · H(8) = 0.33832 · 0.06591 = 0.02230 . □
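A sketch of this calculation in Python (function name is ours):

```python
def erlang_b(n, A):
    """Erlang's B-formula E_n(A) by the standard recursion."""
    E = 1.0
    for i in range(1, n + 1):
        E = A * E / (i + A * E)
    return E

A, nx, sec = 10.0, 8, 8
H = erlang_b(nx + sec, A) / erlang_b(nx, A)   # Palm-Jacobaeus factor (6.25)
print(round(erlang_b(nx, A), 5))              # E_8(10),  close to 0.33832
print(round(H, 5))                            # H(8),     close to 0.06591
print(round(erlang_b(nx, A) * H, 5))          # E_16(10), close to 0.02230
```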

Example 6.4.3: Hierarchical cellular system (HCS)
We consider a hierarchical cellular system (HCS) covering three areas. The traffic offered in the areas is 12, 8 and 4 erlang, respectively. In the first two cells we introduce micro-cells with 16, respectively 8, channels. We also introduce a common macro-cell covering all three areas with 8 channels. We allow overflow from micro-cells to the macro-cell, but do not rearrange (take back) the calls from the macro-cell to the micro-cells when a channel becomes idle. Furthermore, we disregard hand-over traffic. Using (6.15) & (6.16) we find the mean value and the variance of the traffic offered to the macro-cell:

Cell i 1 2 3 Total

Oered trac Ai 12 8 4 24

Number of channels ni (j) 16 8 0

Overow mean m1,i 0.7250 1.8846 4.0000 6.6095

Overow variance vi 1.7190 3.5596 4.0000 9.2786

Peakedness Zi 2.3711 1.8888 1.0000 1.4038

The total trac oered to the macro-cell has mean value 6.61 erlang and variance 9.28. The overow trac from an equivalent system with 10.78 erlang oered to 4.72 channels has the same mean and

6.5. FREDERICKS & HAYWARDS METHOD

177

variance. Thus we end up with a system where 12.72 channels oered 10.78 erlang. Using the Erlang-B formula, we nd the total lost trac 1.3049 erlang. Originally we oered 24 erlang, so the real trac blocking probability becomes B = 5.437%. The three areas have individual blocking probabilities. Using (6.23) we estimate the trac lost from the three trac areas to be 0.2418 erlang, 0.5006 erlang, and 0.5625 erlang, respectively. Thus the trac blocking probabilities become 2.02%, 6.26% and 14.06%, respectively. A computer simulation with 100 million calls yields the individual blocking probabilities 1.77%, 5.72%, and 15.05%, respectively. The total lost trac is 1.273 erlang, which corresponds to a blocking probability 5.30%. The accuracy of the method is thus sucient for real applications. (The condence intervals for the simulations are very small). 2
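The numerical part of the example can be reproduced once the equivalent system (A_x = 10.78 erlang, n_x = 4.72 channels) is known; solving (6.15) & (6.16) for these values is not shown here. The sketch below evaluates the Erlang-B formula for a continuous number of channels via the standard integral representation and then splits the lost traffic by variance, (6.23). All function names are illustrative:

```python
import math

def erlang_b_ext(n, A):
    """Erlang-B blocking for a continuous (non-integral) number of channels,
    using 1/E_n(A) = A * int_0^inf exp(-A t) (1 + t)^n dt for the fractional
    part of n, then the usual recurrence for the integer steps."""
    frac = n - math.floor(n)
    steps = 4000                       # midpoint rule with substitution t = u/(1-u)
    s = 0.0
    for k in range(steps):
        u = (k + 0.5) / steps
        t = u / (1.0 - u)
        s += math.exp(-A * t) * (1.0 + t) ** frac / ((1.0 - u) ** 2 * steps)
    E = 1.0 / (A * s)
    x = frac
    while x < n - 1e-9:
        E = A * E / (x + 1.0 + A * E)
        x += 1.0
    return E

# Equivalent system of Example 6.4.3: 10.78 erlang on 4.72 + 8 channels.
A_x, n_tot = 10.78, 4.72 + 8.0
lost = A_x * erlang_b_ext(n_tot, A_x)     # total lost traffic, about 1.305 erlang
B = lost / 24.0                           # blocking referred to the 24 erlang offered

# Split the lost traffic among the three cells by variance, (6.23):
v = [1.7190, 3.5596, 4.0000]
A_cell = [12.0, 8.0, 4.0]
lost_i = [vi / sum(v) * lost for vi in v]
B_i = [li / Ai for li, Ai in zip(lost_i, A_cell)]   # about 2.0%, 6.3%, 14.1%
```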

6.5 Fredericks & Hayward's method

Fredericks (1980 [34]) has proposed an equivalence method which is simpler to use than the Wilkinson-Bretschneider ERT-method. The motivation for the method was first put forward by W.S. Hayward. Fredericks & Hayward's equivalence method also characterizes the traffic by its mean value A and peakedness Z (0 < Z < ∞; Z = 0 is a trivial case with constant traffic). The peakedness (4.7) is the ratio between the variance v and the mean value m1 of the state probabilities, and its dimension is [channels]. For random traffic (PCT-I) we have Z = 1, and we can apply the Erlang-B formula directly. For peakedness Z ≠ 1, Fredericks & Hayward's method proposes that the system has the same blocking probability as a system with n/Z channels which is offered the traffic A/Z. By this transformation the peakedness becomes equal to one, the traffic is equivalent to PCT-I, and we apply Erlang's B-formula to calculate the congestion:

    E(n, A, Z) ≈ E(n/Z, A/Z, 1) = E_{n/Z}(A/Z) .                (6.27)

When using this method we obtain the traffic congestion (Sec. 6.5.1).

For a fixed value of the blocking probability of the Erlang-B formula we know (Fig. 4.4) that the utilization increases when the number of channels increases: the larger the system, the higher the utilization. Fredericks & Hayward's method thus expresses that if the traffic has a peakedness Z larger than that of PCT-I traffic, then we get a lower utilization than the one obtained from Erlang's B-formula; if the peakedness Z < 1, then we get a higher utilization. The method is easily applied to both peaked (bursty) and smooth traffic. By this method we avoid solving the equations (6.15) and (6.16) with respect to (A, n) for given values of (m1, v); we only need to evaluate the Erlang-B formula. In general we get a non-integral number of channels and thus need to evaluate the Erlang-B formula for a continuous number of channels.

Example 6.5.1: Fredericks & Hayward's method
If we apply Fredericks & Hayward's method to example 6.4.3, then the macro-cell has (8/1.4038) channels and is offered (6.6095/1.4038) erlang. The blocking probability obtained from Erlang's B-formula becomes 0.19470. The lost traffic is calculated from the originally offered traffic (6.6095 erlang) and becomes 1.2871 erlang. The blocking probability of the system thus becomes E = 1.2871/24 = 5.36%. This is very close to the result (5.44%) obtained by the ERT-method and the result (5.30%) obtained by simulation.
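The Hayward transformation (6.27) is a single evaluation of the continuous-channel Erlang-B formula, as sketched below (the helper names are illustrative):

```python
import math

def erlang_b_ext(n, A):
    """Continuous-channel Erlang-B: integral representation
    1/E_n(A) = A * int_0^inf exp(-A t) (1 + t)^n dt for the fractional
    part of n, followed by the usual recurrence for the integer steps."""
    frac = n - math.floor(n)
    steps = 4000
    s = 0.0
    for k in range(steps):
        u = (k + 0.5) / steps
        t = u / (1.0 - u)
        s += math.exp(-A * t) * (1.0 + t) ** frac / ((1.0 - u) ** 2 * steps)
    E = 1.0 / (A * s)
    x = frac
    while x < n - 1e-9:
        E = A * E / (x + 1.0 + A * E)
        x += 1.0
    return E

A, Z, n = 6.6095, 1.4038, 8.0      # overflow traffic and macro-cell of Ex. 6.4.3
B = erlang_b_ext(n / Z, A / Z)     # Hayward transformation (6.27), about 0.1947
lost = B * A                       # lost traffic, about 1.287 erlang
```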

6.5.1 Traffic splitting

In the following we give a natural interpretation of Fredericks & Hayward's method and at the same time discuss splitting of traffic streams. We consider a traffic stream with mean value A, variance v, and peakedness Z = v/A. We split this traffic stream into g identical sub-streams. A single sub-stream then has mean value A/g, variance v/g^2, and thus peakedness Z/g, because the mean value is reduced by a factor g and the variance by a factor g^2 (Example 2.3.3). If we choose the number g of identical sub-streams equal to Z, then each sub-stream gets peakedness one.

Let us assume the original traffic stream is offered to n channels. If we also split the n channels into g identical sub-groups, then each sub-group has n/g channels, and each sub-group will have the same blocking probability as the original total system. By choosing g = Z we get peakedness Z = 1 in each sub-stream, and we may (approximately) use Erlang's B-formula for calculating the blocking probability. This splitting of the traffic into g identical traffic streams shows that the blocking probability obtained by the Fredericks-Hayward method is the traffic congestion.

The equal splitting of the traffic at any point of time implies that all g traffic streams are identical and thus have mutual correlation one. In reality, we cannot split circuit-switched traffic into identical sub-streams. If we have g = 2 streams and three channels are busy at a given point of time, then we will for example use two channels in one sub-stream and one in the other; nevertheless, we obtain the same optimal utilization as in the total system, because we will always have access to an idle channel in any sub-group (full accessibility). The correlation between the sub-streams then becomes smaller than one. This is an example of using a more intelligent strategy to maintain optimal full accessibility.

In Sec. 3.6.2 we studied the splitting of the arrival process when the splitting is done in a random way (Raikov's theorem 3.2). Such splitting does not reduce the variation of the process when the process is a Poisson process or more regular; the resulting sub-stream point processes converge to Poisson processes. In this section we have instead considered the splitting of the traffic load, which includes both the arrival process and the holding times. The splitting process depends upon the state. In a sub-process, a long holding time of a single call will result in fewer new calls in that sub-process during the following time interval, so the arrival process is no longer a renewal process: inter-arrival times and holding times become correlated.

Most attempts to improve Fredericks & Hayward's equivalence method are based on reducing the correlation between the sub-streams, because the arrival process of a single sub-stream is considered a renewal process and the holding times are assumed to be exponentially distributed. From the above we see that these approaches are doomed to be unsuccessful, because they will not result in an optimal traffic splitting. In the following example we shall see that the optimal splitting can be implemented for packet-switched traffic with constant packet size.

If we split a traffic stream into a sub-stream such that a busy channel belongs to the sub-stream with probability p, then it can be shown that the sub-stream has peakedness Z_p given by:

    Z_p = 1 + p (Z − 1) ,                (6.28)

where Z is the peakedness of the original stream. From this random splitting of the traffic process we see that the peakedness converges to one when p becomes small. This corresponds to a Poisson process, and the result is valid for any traffic process. It is similar to Raikov's theorem (3.47).

Example 6.5.2: Inverse multiplexing
If we need more capacity in a network than what corresponds to a single channel, then we may combine several channels in parallel. At the originating source we distribute the traffic (packets or cells in ATM) in a cyclic way over the individual channels, and at the destination we reconstruct the original information. In this way we get access to higher bandwidth without leasing fixed broadband channels, which are very expensive. If the traffic parcels are of constant size, then the traffic process is split into a number of identical traffic streams, so that we get the same utilization as in a single system with the total capacity. This principle was first exploited in a Danish equipment (Johansen & Johansen & Rasmussen, 1991 [61]) for combining up to 30 individual 64 kbps ISDN connections for transfer of video traffic for maintenance of aircraft. Today, similar equipment is applied for combining a number of 2 Mbps connections into ATM connections with larger bandwidth (IMA = Inverse Multiplexing for ATM) (Techguide, 2001 [113]), (Postigo-Boix & al. 2001 [97]).

6.6 Other methods based on state space

From a blocking point of view, the mean value and variance do not necessarily characterize the traffic in the optimal way; other parameters may describe the traffic better. When calculating the blocking with the ERT-method we have two equations with two unknown variables (6.15 & 6.16). The Erlang loss system is uniquely defined by the number of channels and the offered traffic A_x. Therefore, it is not possible to generalize the method to take account of more than two moments (mean & variance).

Figure 6.10: Traffic congestion [%] as a function of peakedness Z, evaluated by different methods (Sanders, BPP, F-H, ERT) for a system with 30 channels offered 20.3373 erlang. When Z = 1 this corresponds to a blocking probability of 1%. We notice that the BPP-method is a worst-case method, whereas the Fredericks-Hayward method yields the minimum blocking. [Plot not reproduced.]

6.6.1 BPP traffic models

The BPP traffic models describe the traffic by two parameters, mean value and peakedness, and are thus natural candidates for modeling traffic with two parameters. Historically, however, the concept of traffic congestion has, due to earlier definitions of offered traffic, been confused with call congestion. As seen from Fig. 5.7, only the traffic congestion makes sense for overflow calculations. With proper application of the traffic congestion, the BPP model is very applicable.

Example 6.6.1: BPP traffic model
If we apply the BPP model to the overflow traffic in example 6.4.3, we have A = 6.6095 and Z = 1.4038. This corresponds to Pascal traffic with S = 16.37 sources and β = 0.2876. The traffic congestion becomes 20.52%, corresponding to a lost traffic of 1.3563 erlang, or a blocking probability for the system equal to E = 1.3563/24 = 5.65%. This result is quite accurate.
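The Pascal computation can be sketched as follows: the state probabilities of a Pascal loss system form a truncated negative binomial distribution, which can be accumulated by a simple recursion (the helper name is illustrative):

```python
def pascal_congestion(A, Z, n):
    """Traffic congestion of a Pascal loss system with offered traffic A,
    peakedness Z > 1 and n channels.  State probabilities are a truncated
    negative binomial (Pascal) distribution with beta = 1 - 1/Z and
    S = A (1 - beta) / beta sources, so A = S beta/(1-beta), Z = 1/(1-beta)."""
    beta = 1.0 - 1.0 / Z
    S = A * (1.0 - beta) / beta
    q = [1.0]                                  # unnormalized state probabilities
    for x in range(1, n + 1):
        q.append(q[-1] * beta * (S + x - 1) / x)
    norm = sum(q)
    Y = sum(x * qx for x, qx in enumerate(q)) / norm   # carried traffic
    return (A - Y) / A                                 # traffic congestion

C = pascal_congestion(6.6095, 1.4038, 8)   # about 0.205, as in Example 6.6.1
```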


6.6.2 Sanders' method

Sanders & Haemers & Wilcke (1983 [108]) have proposed another simple and interesting equivalence method, also based on the state space; we will name it Sanders' method. Like Fredericks & Hayward's method, it is based on a transformation of the state probabilities so that the peakedness becomes equal to one. The method transforms a non-Poisson traffic with (mean, variance) = (m1, v) into a traffic stream with peakedness one by adding a constant (zero-variance) traffic stream with mean v − m1, so that the total traffic has mean equal to variance, v. The constant traffic stream occupies v − m1 channels permanently (with no loss), and we increase the number of channels by this amount. In this way we get a system with n + (v − m1) channels offered m1 + (v − m1) = v erlang. The peakedness becomes one, and the blocking probability is obtained using Erlang's B-formula. We find the traffic lost from the equivalent system. To obtain the traffic congestion C of the original system, this lost traffic is divided by the originally offered traffic m1. The method is applicable for both smooth (m1 > v) and bursty (m1 < v) traffic, and it requires only the evaluation of the Erlang-B formula with a continuous number of channels.

Example 6.6.2: Sanders' method
If we apply Sanders' method to example 6.4.3, we increase both the number of channels and the offered traffic by v − m1 = 2.6691 (channels / erlang). We thus have 9.2786 erlang offered to 10.6691 channels. From Erlang's B-formula we find the lost traffic 1.3690 erlang, which is on the safe side, but close to the results obtained above. It corresponds to a blocking probability E = 1.3690/24 = 5.70%.
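Sanders' transformation is a one-liner once a continuous-channel Erlang-B evaluation is available; the sketch below reproduces Example 6.6.2 (the helper names are illustrative):

```python
import math

def erlang_b_ext(n, A):
    """Continuous-channel Erlang-B (integral form for the fractional part
    of n, the usual recurrence for the integer steps)."""
    frac = n - math.floor(n)
    steps = 4000
    s = 0.0
    for k in range(steps):
        u = (k + 0.5) / steps
        t = u / (1.0 - u)
        s += math.exp(-A * t) * (1.0 + t) ** frac / ((1.0 - u) ** 2 * steps)
    E = 1.0 / (A * s)
    x = frac
    while x < n - 1e-9:
        E = A * E / (x + 1.0 + A * E)
        x += 1.0
    return E

def sanders_lost(m1, v, n):
    """Sanders' method: add a constant stream of v - m1 erlang on v - m1
    extra channels and evaluate Erlang-B for the enlarged system; the
    constant stream is never lost, so the lost traffic belongs to the
    original stream."""
    d = v - m1
    return v * erlang_b_ext(n + d, v)

lost = sanders_lost(6.6095, 9.2786, 8)   # about 1.369 erlang (Example 6.6.2)
C = lost / 6.6095                        # traffic congestion of the original stream
```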

6.6.3 Berkeley's method

To get an ERT-method based on only one parameter, we can in principle keep either n or A fixed. Experience shows that we obtain the best results by keeping the number of channels fixed, n_x = n. We are then only able to ensure that the mean value of the overflow traffic is correct. This method is called Berkeley's equivalence method (1934). The Wilkinson-Bretschneider method requires a certain amount of computation (computers), whereas Berkeley's method is based on Erlang's B-formula only. Berkeley's method is only applicable to systems where the primary groups all have the same number of channels.

Example 6.6.3: Group divided into primary and overflow group
If we apply Berkeley's method to example 6.3.1, then we get the exact solution. The idea of the method originates from this special case.


Example 6.6.4: Berkeley's method
We consider example 6.4.3 again. To apply Berkeley's method correctly, we should have the same number of channels in all three micro-cells. Let us assume all micro-cells have 8 channels (and not 16, 8, 0, respectively). To obtain the overflow traffic 6.6095 erlang, the equivalent offered traffic is 13.72 erlang to the 8 primary channels. The equivalent system then has 13.72 erlang offered to (8 + 8 =) 16 channels. The lost traffic obtained from the Erlang-B formula becomes 1.4588 erlang, corresponding to a blocking probability of 6.08%, which is a little larger than the values obtained by the other methods. In general, Berkeley's method will be on the safe side.
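Berkeley's method only requires the Erlang-B formula for an integral number of channels; the equivalent offered traffic can be found by bisection on the increasing function A·E_n(A) = m1, as sketched below (helper names are illustrative):

```python
def erlang_b(n, A):
    """Erlang's B-formula by the standard recurrence."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def berkeley(m1, n_primary, n_extra):
    """Berkeley's method: find the offered traffic A whose overflow from
    n_primary channels has mean m1 (bisection on A * E_n(A), which is
    increasing in A), then evaluate the loss of the full group."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        A = (lo + hi) / 2.0
        if A * erlang_b(n_primary, A) < m1:
            lo = A
        else:
            hi = A
    A = (lo + hi) / 2.0
    return A, A * erlang_b(n_primary + n_extra, A)

A_eq, lost = berkeley(6.6095, 8, 8)   # about 13.72 erlang, about 1.459 erlang lost
```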

6.6.4 Comparison of state-based methods

In Fig. 6.10 we compare four different state-based methods. The BPP-method is on the safe side, whereas the Fredericks-Hayward method is the most optimistic, having the lowest blocking probability. We cannot say which method is the best one: this depends on the actual system generating the overflow traffic, which in general is a superposition of many traffic streams.

6.7 Methods based on arrival processes

The models in Chaps. 4 & 5 are all characterized by a Poisson arrival process with state-dependent intensity, whereas the service times are exponentially distributed with equal mean value for all (homogeneous) servers. As these models are all independent of the service time distribution (insensitive, i.e. the state probabilities depend only on the mean value of the service time distribution), we can only generalize the models by considering more general arrival processes. With general arrival processes the insensitivity property is lost, and the service time distribution becomes important. As we have only one arrival process but many service processes (one for each of the n servers), we in general assume exponential service times to avoid overly complex models.

6.7.1 Interrupted Poisson Process

In Sec. 3.7 we considered Kuczura's Interrupted Poisson Process (IPP) (Kuczura, 1977 [79]), which is characterized by three parameters and has been widely used for modeling overflow traffic. If we consider a fully accessible group with n servers which is offered calls arriving according to an IPP (cf. Fig. 3.9) with exponentially distributed service times, then we can construct a state transition diagram as shown in Fig. 6.11. The diagram is two-dimensional. State [i, j] denotes that i calls are being served (i = 0, 1, ..., n) and that the arrival process is in phase j (j = a: arrival process on; j = b: arrival process off). While the arrival process is on, calls arrive with Poisson intensity λ; the process switches from on to off with intensity ω and from off to on with intensity γ (cf. Fig. 3.9).

Figure 6.11: State transition diagram for a fully accessible loss system with n servers, IPP arrival process (cf. Fig. 3.9), and exponentially distributed service times (μ). [Diagram not reproduced: the on-states 0a, 1a, ..., na are connected by the arrival intensity λ and departure intensities i·μ; the off-states 0b, 1b, ..., nb by departure intensities only; corresponding on/off states are connected by the switching intensities ω and γ.]

By using the node balance equations we find the equilibrium state probabilities p(i, j). The time congestion E becomes:

    E = p(n, a) + p(n, b) .                (6.29)

The call congestion B becomes:

    B = p(n, a) / Σ_{i=0}^{n} p(i, a)  ≥  E .                (6.30)

From the state transition diagram we have p_on · ω = p_off · γ, where p_on (p_off) denotes the probability that the arrival process is on (off). Furthermore, p_on + p_off = 1. From this we get:

    p_on = Σ_{i=0}^{n} p(i, a) = γ / (γ + ω) ,

    p_off = Σ_{i=0}^{n} p(i, b) = ω / (γ + ω) .

The traffic congestion C is defined as the proportion of the offered traffic which is lost. The offered traffic is equal to:

    A = (λ/μ) · p_on = (λ/μ) · γ / (γ + ω) .

The carried traffic is:

    Y = Σ_{i=0}^{n} i · { p(i, a) + p(i, b) } .                (6.31)

From this we obtain:

    C = (A − Y) / A .                (6.32)

The traffic congestion will be equal to the call congestion, as the arrival process is a renewal process, but this is difficult to derive from the above. As shown in Sec. 3.7.1, the inter-arrival times are hyper-exponentially distributed with two phases (H2). If we apply a Markov modulated Poisson process (MMPP), then in principle we may get any number of parameters to model the inter-arrival times.
Example 6.7.1: Calculating state probabilities for IPP models
The state probabilities of Fig. 6.11 can be obtained by solving the linear balance equations. Kuczura (1973, [78]) derived explicit expressions for the state probabilities, but they are complex and not fit for numerical evaluation of large systems. A very accurate way to calculate the state probabilities is to use the principles described in Sec. 4.4.1:

- Let p(n, b) = 1. Using the node equation for state [n, b], we obtain the value of p(n, a) relative to p(n, b), and normalize the two state probabilities so they add to one.
- Using the node equation for state [n, a], we obtain p(n−1, a) relative to the previous states, and normalize the state probabilities obtained so far.
- Using the node equation for state [n−1, b], we obtain p(n−1, b), and normalize all the state probabilities obtained so far.
- In this way we zigzag down to state [0, a] and obtain normalized probabilities for all states.

The relative values of, for example, p(0, a) and p(0, b) depend on the number of channels n. Thus we cannot truncate the state probabilities and re-normalize for a given number of channels; we have to calculate all state probabilities from scratch for every number of channels.
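The zigzag recursion can be sketched directly from the node equations of Fig. 6.11; the parameter names (lam, gamma, omega) follow the conventions used above, and the function itself is illustrative:

```python
def ipp_loss(n, lam, mu, gamma, omega):
    """Equilibrium probabilities p(i, phase) for n servers offered an IPP:
    Poisson intensity lam while on, off->on intensity gamma, on->off
    intensity omega, service rate mu.  Implements the zigzag recursion of
    Example 6.7.1: start from unnormalized p(n, off) = 1, then normalize."""
    pa = [0.0] * (n + 1)        # p(i, a): arrival process on
    pb = [0.0] * (n + 1)        # p(i, b): arrival process off
    pb[n] = 1.0
    pa[n] = pb[n] * (n * mu + gamma) / omega                      # node [n, b]
    pa[n - 1] = (pa[n] * (n * mu + omega) - gamma * pb[n]) / lam  # node [n, a]
    for i in range(n - 1, 0, -1):
        pb[i] = (omega * pa[i] + (i + 1) * mu * pb[i + 1]) / (i * mu + gamma)
        pa[i - 1] = (pa[i] * (lam + i * mu + omega)
                     - (i + 1) * mu * pa[i + 1] - gamma * pb[i]) / lam
    pb[0] = (omega * pa[0] + mu * pb[1]) / gamma                  # node [0, b]
    total = sum(pa) + sum(pb)
    pa = [x / total for x in pa]
    pb = [x / total for x in pb]
    E = pa[n] + pb[n]           # time congestion (6.29)
    B = pa[n] / sum(pa)         # call congestion (6.30)
    return pa, pb, E, B
```

As omega tends to zero the process is permanently on, and the results converge to Erlang's B-formula, which is a convenient sanity check.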
Figure 6.12: State transition diagram for a fully accessible loss system with n servers, Cox-2 arrival processes (cf. Fig. 2.13), and exponentially distributed service times (μ). [Diagram not reproduced.]

6.7.2 Cox-2 arrival process

In Sec. 3.7 we noticed that a Cox-2 arrival process is more general than an IPP (Kuczura, 1977 [79]). If we consider Cox-2 arrival processes as shown in Fig. 2.13, then we get the state transition diagram shown in Fig. 6.12: after the exponential phase one (intensity λ1) the process completes with an arrival with probability p, or continues to the exponential phase two (intensity λ2) with probability 1 − p; phase two always completes with an arrival. From this diagram we find, under the assumption of statistical equilibrium, the state probabilities and the following performance measures.

Time congestion E:

    E = p(n, a) + p(n, b) .                (6.33)

Call congestion B:

    B = [ p λ1 p(n, a) + λ2 p(n, b) ] / [ p λ1 Σ_{i=0}^{n} p(i, a) + λ2 Σ_{i=0}^{n} p(i, b) ] .                (6.34)

Traffic congestion C: the offered traffic is the average number of call attempts per mean service time. The mean inter-arrival time is (Fig. 2.13):

    m_a = 1/λ1 + (1 − p)/λ2 = [ λ2 + (1 − p) λ1 ] / (λ1 λ2) .

The offered traffic then becomes A = (m_a μ)^{−1}. The carried traffic Y is given by (6.31) applied to Fig. 6.12, and we then find the traffic congestion C by (6.32).

If we generalize the arrival process to a Cox-k arrival process, then the state transition diagram is still two-dimensional. By the application of Cox distributions we can in principle take any number of parameters into consideration. If we instead generalize the service times to a Cox-k distribution, then the state transition diagram becomes much more complex for n > 1, because we have a service process for each server but only one arrival process. Therefore, in general we generalize the arrival process and assume exponentially distributed service times.
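For small systems the balance equations of Fig. 6.12 can also be solved directly. The sketch below builds the generator of the two-dimensional chain and solves it by Gaussian elimination, assuming the Cox-2 convention described above (phase 1 completes with an arrival with probability p, and every arrival restarts the phase process in phase 1); the function name is illustrative:

```python
def cox2_loss(n, lam1, lam2, p, mu):
    """Loss system with Cox-2 arrivals: phase 1 (intensity lam1) completes
    with an arrival w.p. p or continues to phase 2 (intensity lam2) w.p.
    1 - p; every arrival (also a blocked one) restarts the phase process
    in phase 1.  Solves pi Q = 0 for the chain of Fig. 6.12 directly."""
    N = 2 * (n + 1)
    idx = lambda i, ph: 2 * i + ph          # ph = 0: phase 1 (a), 1: phase 2 (b)
    Q = [[0.0] * N for _ in range(N)]

    def rate(src, dst, r):                  # add transition src -> dst
        Q[src][dst] += r
        Q[src][src] -= r

    for i in range(n + 1):
        if i > 0:                           # departures, intensity i * mu
            rate(idx(i, 0), idx(i - 1, 0), i * mu)
            rate(idx(i, 1), idx(i - 1, 1), i * mu)
        j = min(i + 1, n)                   # a blocked arrival is simply lost
        rate(idx(i, 0), idx(j, 0), p * lam1)        # arrival out of phase 1
        rate(idx(i, 0), idx(i, 1), (1 - p) * lam1)  # continue to phase 2
        rate(idx(i, 1), idx(j, 0), lam2)            # arrival out of phase 2

    # Solve pi Q = 0 with the last equation replaced by sum(pi) = 1,
    # using plain Gaussian elimination with partial pivoting.
    M = [[Q[c][r] for c in range(N)] for r in range(N)]
    M[N - 1] = [1.0] * N
    b = [0.0] * (N - 1) + [1.0]
    for col in range(N):
        piv = max(range(col, N), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, N):
            f = M[r][col] / M[col][col]
            for c in range(col, N):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * N
    for r in range(N - 1, -1, -1):
        s = b[r] - sum(M[r][c] * pi[c] for c in range(r + 1, N))
        pi[r] = s / M[r][r]
    E = pi[idx(n, 0)] + pi[idx(n, 1)]       # time congestion (6.33)
    return pi, E
```

With p = 1 phase 2 is never entered, the arrivals are Poisson with intensity λ1, and the time congestion reduces to Erlang's B-formula.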


Chapter 7 Multi-Dimensional Loss Systems

In this chapter we generalize the classical teletraffic theory to deal with service-integrated systems (e.g. B-ISDN). Every class of service corresponds to a traffic stream, and several traffic streams are offered to the same group of n channels. In Sec. 7.1 we consider the classical multi-dimensional Erlang-B loss formula. This is an example of a reversible Markov process, which is considered in more detail in Sec. 7.2. In Sec. 7.3 we look at more general loss models and strategies, including service protection (maximum allocation) and multi-rate BPP traffic. These models all have the so-called product-form property, and the numerical evaluation is very simple, using either the convolution algorithm for loss systems, which aggregates traffic streams (Sec. 7.4), or state-based algorithms, which aggregate the state space (Sec. 7.6). All models considered are based on flexible channel/slot allocation, which means that if a call requests d > 1 channels, then these channels need not be adjacent. The models may be generalized to arbitrary circuit-switched networks with direct routing, where we calculate end-to-end blocking probabilities (Chap. 8). All models considered are insensitive to the service time distribution, and thus they are very robust for applications.

7.1 Multi-dimensional Erlang-B formula

We consider a group of n trunks (channels, slots) which is offered two independent PCT-I traffic streams: (λ1, μ1) and (λ2, μ2). The offered traffic becomes A1 = λ1/μ1 and A2 = λ2/μ2, respectively, and the total offered traffic is A = A1 + A2. In this section each connection requests one channel.

Let (x1, x2) denote the state of the system, i.e. x1 is the number of channels used by stream 1 and x2 is the number of channels used by stream 2. We have the following restrictions:

    0 ≤ x1 ≤ n ,   0 ≤ x2 ≤ n ,   0 ≤ x1 + x2 ≤ n .                (7.1)

The state transition diagram is shown in Fig. 7.1. Under the assumption of statistical equilibrium, the state probabilities are obtained by solving the global balance equations for each node (node equations); in total we have (n+1)(n+2)/2 equations. The system has a unique solution, so if we somehow find a solution, then we know it is the correct solution. Many models can, however, be solved in a much simpler way. As we shall see in the next section, this diagram corresponds to a reversible Markov process, which has local balance, and furthermore the solution has product form. We can easily show that the global balance equations are satisfied by the following state probabilities, which may be written in product form:

    p(x1, x2) = Q · p1(x1) · p2(x2) = Q · (A1^{x1} / x1!) · (A2^{x2} / x2!) ,                (7.2)

where p1(x1) and p2(x2) are one-dimensional truncated Poisson distributions for traffic streams one and two, respectively, Q is a normalization constant, and (x1, x2) must fulfil the restrictions (7.1).

As we have Poisson arrival processes, the PASTA property (Poisson Arrivals See Time Averages) is valid, and time, call, and traffic congestion of both traffic streams are all equal to p(x1 + x2 = n). By the binomial expansion (2.36), or by convolving two Poisson distributions, we find the following aggregated state probabilities, where Q is obtained by normalization:

    p(x1 + x2 = x) = Q · Σ_{x1=0}^{x} p1(x1) · p2(x − x1)                (7.3)

                   = Q · Σ_{x1=0}^{x} (A1^{x1} / x1!) · (A2^{x−x1} / (x − x1)!)                (7.4)

                   = Q · (1/x!) · Σ_{x1=0}^{x} (x choose x1) A1^{x1} A2^{x−x1}                (7.5)

                   = Q · A^x / x! ,                (7.6)

where A = A1 + A2.



Figure 7.1: Two-dimensional state transition diagram for a loss system with n channels which are offered two PCT-I traffic streams. This is equivalent to a state transition diagram for a loss system M/H2/n, where the hyper-exponential distribution H2 is given by (7.8).
where A = A1 + A2, and the normalization constant is obtained by:

Q⁻¹ = Σ_{i=0}^{n} A^i / i! .

This is the truncated Poisson distribution (4.9).
We may also interpret this model as an Erlang loss system with one Poisson arrival process and hyper-exponentially distributed holding times as follows. The total arrival process is a superposition of two Poisson processes and thus a Poisson process itself with arrival rate:

λ = λ1 + λ2 .     (7.7)

The holding time distribution is obtained by weighting the two exponential distributions according to the relative number of calls per time unit and becomes a hyper-exponential distribution (random variables in parallel, Sec. 2.3.2):

f(t) = (λ1/(λ1 + λ2)) · μ1 e^(−μ1 t) + (λ2/(λ1 + λ2)) · μ2 e^(−μ2 t) .     (7.8)
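The aggregation result (7.6) can be checked numerically: enumerating the two-dimensional product-form states (7.2) and summing over x1 + x2 = x reproduces the one-dimensional truncated Poisson distribution with A = A1 + A2. A minimal sketch; the function names and the parameter values (A1 = 2, A2 = 1, n = 6) are ours, chosen only for illustration:

```python
from math import factorial

def two_stream_global(A1, A2, n):
    """Aggregate the two-dimensional product-form states (7.2)
    into global state probabilities p(x1 + x2 = x)."""
    q = [0.0] * (n + 1)
    for x1 in range(n + 1):
        for x2 in range(n + 1 - x1):
            q[x1 + x2] += A1**x1 / factorial(x1) * A2**x2 / factorial(x2)
    Q = sum(q)
    return [v / Q for v in q]

def truncated_poisson(A, n):
    """One-dimensional truncated Poisson distribution (4.9)."""
    q = [A**x / factorial(x) for x in range(n + 1)]
    Q = sum(q)
    return [v / Q for v in q]

p2d = two_stream_global(2.0, 1.0, 6)
p1d = truncated_poisson(3.0, 6)
# The two distributions agree term by term, so the time congestion
# E = p(n) equals the Erlang-B value for the total traffic A = A1 + A2.
```

This mirrors the binomial-expansion step from (7.4) to (7.6): the inner sum over x1 for fixed x = x1 + x2 collapses to A^x/x!.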

The mean service time of the hyper-exponential distribution (7.8) is:

m1 = (λ1/(λ1 + λ2)) · (1/μ1) + (λ2/(λ1 + λ2)) · (1/μ2) = (A1 + A2)/(λ1 + λ2) ,

m1 = A/λ ,     (7.9)

which is in agreement with the definition of offered traffic (1.2). Thus we have shown that Erlang's loss model is also valid for hyper-exponentially distributed holding times. This is a special case of the general insensitivity property of Erlang's B-formula.

We may generalize the above model to N traffic streams:

p(x1, x2, …, xN) = Q · p1(x1) · p2(x2) · … · pN(xN)

                 = Q · (A1^x1 / x1!) · (A2^x2 / x2!) · … · (AN^xN / xN!) ,   0 ≤ xj ≤ n ,   Σ_{j=1}^{N} xj ≤ n ,     (7.10)

which is the general multi-dimensional Erlang-B formula. By the multinomial theorem (2.96) this can be reduced to:

p(x1 + x2 + … + xN = x) = Q · (A1 + A2 + … + AN)^x / x! = Q · A^x / x! ,   where A = Σ_{j=1}^{N} Aj .

The global state probabilities can be calculated by the following recursion, where q(x) denotes the relative state probabilities and p(x) denotes the absolute state probabilities. From the cut equations of Erlang's loss system we have:

q(x) = (1/x) · A · q(x − 1) = (1/x) · Σ_{j=1}^{N} Aj · q(x − 1) ,   q(0) = 1 ,     (7.11)

p(x) = q(x) / Q(n) ,   0 ≤ x ≤ n ,   where Q(n) = Σ_{i=0}^{n} q(i) .     (7.12)

If we use the recursion with normalization in each step (Sec. 4.4), then we get the recursion formula for Erlang-B. For all services the time congestion is E = p(n), and as the PASTA property is valid, this is also equal to the call and traffic congestion. Multi-dimensional systems were first mentioned by Erlang and more thoroughly dealt with by Jensen in the Erlang-book (Jensen, 1948 [57]).
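The recursion with normalization in each step can be sketched as follows; the variable names are ours, and the closed form in the test serves only as a cross-check:

```python
from math import factorial

def erlang_b(A, n):
    """Time congestion E = p(n) by the recursion (7.11)-(7.12),
    normalized in each step so the terms never overflow."""
    E = 1.0                        # E for n = 0 channels
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

# For N streams the recursion uses the total traffic A = A1 + ... + AN:
E = erlang_b(2.0 + 1.0, 6)
```

The update E ← A·E/(x + A·E) is the standard stable form of the Erlang-B recursion; it is algebraically equivalent to computing q(x) by (7.11) and renormalizing after every step.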



Example 7.1.1: Infinite server (IS) system
If the number of channels is infinite, then we get:

p(x1, x2, …, xN) = p1(x1) · p2(x2) · … · pN(xN)

                 = (A1^x1 / x1!) e^(−A1) · (A2^x2 / x2!) e^(−A2) · … · (AN^xN / xN!) e^(−AN) .     (7.13)

By using the multinomial expansion (2.94) the global state probabilities, obtained by aggregating the detailed state probabilities, become Poisson distributed (4.6):

p(x1 + x2 + … + xN = x) = ((A1 + A2 + … + AN)^x / x!) · e^(−(A1 + A2 + … + AN)) = (A^x / x!) · e^(−A) ,

where the mean value is A = A1 + A2 + … + AN. The product of Poisson distributions is already normalized because we don't truncate the state space.   □

7.2 Reversible Markov processes

In the previous section we considered a two-dimensional state transition diagram. For an increasing number of traffic streams the number of states (and thus equations) increases very rapidly. However, we may simplify the problem by exploiting the structure and properties of the state transition diagram. Let us consider the two-dimensional state transition diagram shown in Fig. 7.2. The process is reversible if there is no circulation flow in the diagram. Thus, if we consider four neighbouring states, then the flow in clockwise direction must equal the flow in the opposite direction (Kingman, 1969 [72]), (Sutton, 1980 [111]). From Fig. 7.2 we have the following average number of jumps per time unit:

Clockwise:
[x1, x2] → [x1, x2+1] :       p(x1, x2) · λ2(x1, x2)
[x1, x2+1] → [x1+1, x2+1] :   p(x1, x2+1) · λ1(x1, x2+1)
[x1+1, x2+1] → [x1+1, x2] :   p(x1+1, x2+1) · μ2(x1+1, x2+1)
[x1+1, x2] → [x1, x2] :       p(x1+1, x2) · μ1(x1+1, x2)

Counter-clockwise:
[x1, x2] → [x1+1, x2] :       p(x1, x2) · λ1(x1, x2)
[x1+1, x2] → [x1+1, x2+1] :   p(x1+1, x2) · λ2(x1+1, x2)
[x1+1, x2+1] → [x1, x2+1] :   p(x1+1, x2+1) · μ1(x1+1, x2+1)
[x1, x2+1] → [x1, x2] :       p(x1, x2+1) · μ2(x1, x2+1)

We can reduce both expressions by the state probabilities and then obtain the conditions given by the following theorem.


Theorem 7.1 (Kolmogorov's criteria) A necessary and sufficient condition for reversibility is that the following two flows are equal:

Clockwise:           λ2(x1, x2) · λ1(x1, x2+1) · μ2(x1+1, x2+1) · μ1(x1+1, x2) ,

Counter-clockwise:   λ1(x1, x2) · λ2(x1+1, x2) · μ1(x1+1, x2+1) · μ2(x1, x2+1) .
Figure 7.2: Kolmogorov's criteria: a necessary and sufficient condition for reversibility of a two-dimensional Markov process is that the circulation flow among four neighbouring states in a square equals zero: flow clockwise = flow counter-clockwise (Theorem 7.1).

If these two expressions are equal, then there is local balance or detailed balance. A necessary condition for reversibility is thus that if there is a flow (an arrow) from state x1 to state x2, then there must also be a flow (an arrow) from state x2 to state x1, and the flows must be equal. It can be shown that this is also a sufficient condition. We may then apply cut equations locally between any two connected states. For example, we get from Fig. 7.2:

p(x1, x2) · λ1(x1, x2) = p(x1+1, x2) · μ1(x1+1, x2) .     (7.14)

We can express any state probability p(x1, x2) by the state probability p(0, 0) by choosing any path between the two states (Kolmogorov's criteria). If we for example choose the path:

(0, 0), (1, 0), …, (x1, 0), (x1, 1), …, (x1, x2) ,

then we obtain the following balance equation:

p(x1, x2) = [λ1(0, 0) · λ1(1, 0) · … · λ1(x1−1, 0)] / [μ1(1, 0) · μ1(2, 0) · … · μ1(x1, 0)]
          · [λ2(x1, 0) · λ2(x1, 1) · … · λ2(x1, x2−1)] / [μ2(x1, 1) · μ2(x1, 2) · … · μ2(x1, x2)] · p(0, 0) .

State probability p(0, 0) is obtained by normalization of the total probability mass. The condition for reversibility will be fulfilled in many cases, for example when:

λ1(x1, x2) = λ1(x1) ,   λ2(x1, x2) = λ2(x2) ,     (7.15)

μ1(x1, x2) = x1 · μ1 ,   μ2(x1, x2) = x2 · μ2 .     (7.16)
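For rates of the form (7.15)-(7.16), the path independence implied by Kolmogorov's criteria can be checked numerically: building the relative probability q(x1, x2) from q(0, 0) = 1 along two different paths gives the same value. A small sketch with PCT-I rates λj(xj) = λj and μj = xj·μj; the parameter values are illustrative:

```python
lam1, lam2, mu1, mu2 = 2.0, 1.0, 1.0, 1.0

def q_via_path(path):
    """Relative probability q(x1, x2) built from q(0, 0) = 1 by
    multiplying ratios lambda/mu along a path of unit steps."""
    q, x1, x2 = 1.0, 0, 0
    for step in path:
        if step == 'right':            # (x1, x2) -> (x1+1, x2)
            q *= lam1 / ((x1 + 1) * mu1)
            x1 += 1
        else:                          # (x1, x2) -> (x1, x2+1)
            q *= lam2 / ((x2 + 1) * mu2)
            x2 += 1
    return q

# Two different paths from (0,0) to (2,1) give the same relative
# probability, as guaranteed by reversibility:
qa = q_via_path(['right', 'right', 'up'])
qb = q_via_path(['up', 'right', 'right'])
```

With these parameters both paths yield q(2, 1) = (λ1²/2μ1²)·(λ2/μ2) = 2, the truncated-Poisson product form.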

If we consider a multi-dimensional loss system with N traffic streams, then any traffic stream may be a state-dependent Poisson process, in particular BPP (Bernoulli, Poisson, Pascal) traffic streams. For N-dimensional systems the conditions for reversibility are analogous to Theorem 7.1: Kolmogorov's criteria must still be fulfilled for all possible paths. In practice, we experience no problems, because the solution obtained under the assumption of reversibility will be the correct solution if and only if the node balance equations are fulfilled. In the following section we use this as the basis for introducing general advanced multi-service traffic models which are robust and easy to deal with.

7.3 Multi-Dimensional Loss Systems

In this section we consider generalizations of the classical teletraffic theory to cover several traffic streams (classes, services) offered to a link with a fixed bandwidth, which is expressed in channels of basic bandwidth units (BBU). Each traffic stream may have individual parameters and may be a state-dependent Poisson arrival process with multi-rate traffic and class limitations. This general class of models is insensitive to the holding time distribution, which may be class-dependent with individual parameters for each class. We introduce the generalizations one at a time and present a small case study to illustrate the basic ideas.

7.3.1 Class limitation

In comparison with the case considered in Sec. 7.1 we now restrict the number of simultaneous calls for each traffic stream (class). Thus, we do not have full accessibility, but unlike overflow systems, where we physically only have access to a limited number of specific channels, we now have access to all channels, but at any instant we may only occupy a maximum number of them. This may be used for the purpose of service protection (virtual circuit protection = class limitation = threshold priority policy). We thus introduce restrictions on the number of simultaneous calls in class j as follows:

0 ≤ xj ≤ nj ≤ n ,   j = 1, 2, …, N ,     (7.17)

where

Σ_{j=1}^{N} xj ≤ n   and   Σ_{j=1}^{N} nj > n .


If the latter restriction is not fulfilled, then we get a system of separate groups, corresponding to N ordinary independent one-dimensional loss systems. Due to these restrictions the state transition diagram is truncated. This is shown for two traffic streams in Fig. 7.3.
Figure 7.3: Structure of the state transition diagram for two-dimensional traffic processes with class limitations (cf. (7.17)).

When calculating the equilibrium probabilities, state (x1, x2) can be expressed by state (x1, x2−1), recursively by states (x1, 0), (x1−1, 0), and finally by (0, 0) (cf. (7.15)). We notice that the truncated state transition diagram is still reversible, and that the values of p(x1, x2) relative to the value of p(0, 0) are unchanged by the truncation. Only the normalization constant is modified. In fact, due to the local balance property we can remove any state without changing the above properties. We may consider more general class limitations on subsets of traffic streams, so that any traffic stream has a minimum (guaranteed) number of allocated channels.
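Numerically, class limitation only truncates the state space before normalization; the retained states keep their product-form relative values. A sketch for two single-slot PCT-I streams with individual limits; the function name and the parameter values are ours, chosen for illustration:

```python
from math import factorial

def state_probs(A1, A2, n, n1, n2):
    """Product-form probabilities truncated by class limits n1, n2
    and the common capacity n; only the normalization constant Q
    changes with the truncation."""
    states = {}
    for x1 in range(min(n, n1) + 1):
        for x2 in range(min(n - x1, n2) + 1):
            states[(x1, x2)] = A1**x1 / factorial(x1) * A2**x2 / factorial(x2)
    Q = sum(states.values())
    return {s: v / Q for s, v in states.items()}

p = state_probs(A1=2.0, A2=1.0, n=6, n1=6, n2=3)
# The retained q(x1, x2) keep their ratios; e.g. p(1,0)/p(0,0) = A1
# regardless of the class limits, while states with x2 > n2 vanish.
```

Changing n1 or n2 rescales all probabilities through Q but never changes the ratio between two retained states, which is the truncation property noted above.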

7.3.2 Generalized traffic processes

We are not restricted to consider PCT-I traffic only as in Sec. 7.1. Every traffic stream may be a state-dependent Poisson arrival process with a linear state-dependent death (departure) rate (cf. (7.15) and (7.16)). The system still fulfils the reversibility conditions given by Theorem 7.1. The product form is valid for BPP traffic streams and more general state-dependent Poisson processes. If all traffic streams are Engset (Binomial) processes, then we get the multi-dimensional Engset formula (Jensen, 1948 [57]). As mentioned above, the system is insensitive to the holding time distributions, which may have individual mean values: every traffic stream may have its own individual holding time distribution.


7.3.3 Multi-rate traffic

In service-integrated systems the bandwidth requested depends on the type of service. We choose a basic bandwidth unit (BBU) and split the available bandwidth into n BBUs. The BBU is called a channel, a slot, a server, etc. The smaller the basic bandwidth unit is, the more accurately we may model different services on a link, but the state space increases with finer granularity. Thus a voice telephone call may only require one channel (slot), whereas for example a video connection may require d channels simultaneously. Therefore, we get the capacity restrictions:

0 ≤ xj = ij · dj ≤ nj ≤ n ,   j = 1, 2, …, N ,     (7.18)

0 ≤ Σ_{j=1}^{N} ij · dj ≤ n ,     (7.19)

where ij is the actual number of type-j calls (connections) and xj is the number of channels (BBUs) occupied by type j. The resulting state transition diagram will still be reversible and have product form. The restrictions correspond for example to the physical model shown in Fig. 7.5. Offered traffic Aj is usually defined as the traffic carried when the capacity is unlimited. If we measure the carried traffic Yj as the average number of busy channels, then the lost traffic measured in channels becomes:

Aℓ = Σ_{j=1}^{N} Aj · dj − Σ_{j=1}^{N} Yj ,     (7.20)

where we as usual define Aj = λj/μj.


Example 7.3.1: Basic bandwidth units
For a 640 Mbps link we may choose BBU = 64 kbps, corresponding to one voice channel. Then the total capacity becomes n = 10,000 channels. For a UMTS CDMA system with chip rate 3.84 Mcps, one chip is one bit from the direct-sequence spread-spectrum code. We can choose the BBU as a multiple of 1 cps. In practice the BBU depends on the code length. A 10-bit code allows for a granularity of 1024 channels, and the BBU becomes 3.75 kcps. (We consider gross rates.) For variable bit rate (VBR) services we may statistically define an effective bandwidth, which is the capacity we need to reserve on a link with a given total capacity to fulfil a certain grade-of-service.   □

Example 7.3.2: Rönnblom's model
The first example of a multi-rate traffic model was published by Rönnblom (1958 [107]). The paper considers a PABX telephone exchange with both-way channels carrying both external (outgoing and incoming) traffic and internal traffic. The external calls occupy only one channel per call. The internal calls occupy both an outgoing channel and an incoming channel and thus require two channels simultaneously. It was shown by Rönnblom that this model has product form.   □

Stream 1: PCT-I traffic
  λ1 = 2 calls/time unit
  μ1 = 1 (time units⁻¹)
  A1 = λ1/μ1 = 2 erlang
  Z1 = 1 (peakedness)
  d1 = 1 channel/call
  n1 = 6 = n

Stream 2: PCT-II traffic
  S2 = 4 sources
  γ2 = 1/3 calls/time unit per idle source
  μ2 = 1 (time units⁻¹)
  β2 = γ2/μ2 = 1/3 erlang per idle source
  Z2 = 1/(1 + β2) = 3/4 (peakedness)
  d2 = 2 channels/call
  A2 = S2 · β2/(1 + β2) = 1 erlang
  n2 = 6 = n

Table 7.1: Two traffic streams: a Poisson traffic process (Example 4.5.1) and a Binomial traffic process (Example 5.5.1) are offered to the same trunk group.

Example 7.3.3: Two traffic streams
We now illustrate the above models by a small instructive case study. The principles and procedures are the same as for the general case considered later by the convolution algorithm (Sec. 7.4.1). We consider a trunk group of 6 channels which is offered two traffic streams, specified in Tab. 7.1. We notice that the second traffic stream is a multi-rate traffic stream. We may at most have three type-2 calls in our system. For state probabilities we need only specify the offered traffic, not the individual values of arrival rates and service rates. The offered traffic is as usual defined as the traffic carried by an infinite trunk group. For multi-rate traffic we have to consider traffic measured either in connections or in channels. We get the two-dimensional state transition diagram shown in Fig. 7.4. The total sum of all relative state probabilities equals 20.1704, so by normalization we find p(0, 0) = 0.0496, and we get the state probabilities and the marginal state probabilities p(x1, ·) and p(·, x2) (Table 7.2). The global state probabilities are shown in Table 7.3.

Performance measures for traffic stream 1 (PCT-I traffic): Due to the PASTA property, time congestion (E1), call congestion (B1), and traffic congestion (C1)


Figure 7.4: Example 7.3.3: Six channels are offered both a Poisson traffic stream (PCT-I) (horizontal states) and an Engset traffic stream (PCT-II) (vertical states). The parameters are specified in Tab. 7.1. If we allocate state (0, 0) the relative probability one, then we find, by exploiting local balance, the relative state probabilities q(x1, x2) shown below the state transition diagram.

p(x1,x2)   x1=0    x1=1    x1=2    x1=3    x1=4    x1=5    x1=6    p(·,x2)
x2 = 6    0.0073                                                   0.0073
x2 = 4    0.0331  0.0661  0.0661                                   0.1653
x2 = 2    0.0661  0.1322  0.1322  0.0881  0.0441                   0.4627
x2 = 0    0.0496  0.0992  0.0992  0.0661  0.0331  0.0132  0.0044   0.3647
p(x1,·)   0.1561  0.2975  0.2975  0.1542  0.0771  0.0132  0.0044   1.0000

Table 7.2: Detailed state probabilities for the system specified in Table 7.1.

p(0) = p(0,0)                            = 0.0496
p(1) = p(1,0)                            = 0.0992
p(2) = p(0,2) + p(2,0)                   = 0.1653
p(3) = p(1,2) + p(3,0)                   = 0.1983
p(4) = p(0,4) + p(2,2) + p(4,0)          = 0.1983
p(5) = p(1,4) + p(3,2) + p(5,0)          = 0.1675
p(6) = p(0,6) + p(2,4) + p(4,2) + p(6,0) = 0.1219

Table 7.3: Global state probabilities for the system specified in Table 7.1.
are identical. We find the time congestion E1:

E1 = p(6,0) + p(4,2) + p(2,4) + p(0,6) = p(6) ,

E1 = B1 = C1 = 0.1219 ,   Y1 = 1.7562 .

Performance measures for stream 2 (PCT-II traffic):

Time congestion E2 (proportion of time the system is blocked for stream 2) becomes:

E2 = p(0,6) + p(1,4) + p(2,4) + p(3,2) + p(4,2) + p(5,0) + p(6,0) = p(5) + p(6) ,

E2 = 0.2894 .

Call congestion B2 (proportion of call attempts blocked for stream 2): The total number of call attempts per time unit is obtained from the marginal distribution in Table 7.2:

x_t = Σ_{i=0}^{6} λ2(i) · p(·, i)

    = (4/3) · 0.3647 + (3/3) · 0.4627 + (2/3) · 0.1653 + (1/3) · 0.0073

    = 1.0616 .

The number of blocked call attempts per time unit becomes (Fig. 7.4):

x_ℓ = (4/3) · {p(5,0) + p(6,0)} + (3/3) · {p(3,2) + p(4,2)} + (2/3) · {p(1,4) + p(2,4)} + (1/3) · p(0,6)

    = 0.2462 .

Hence:

B2 = x_ℓ / x_t = 0.2320 .

Traffic congestion C2 (proportion of offered traffic blocked): The carried traffic, measured in the unit [channel], is obtained from the marginal distribution in Table 7.2:

Y2 = Σ_{i=0}^{6} i · p(·, i) = 2 · 0.4627 + 4 · 0.1653 + 6 · 0.0073 = 1.6306 erlang .

The offered traffic, measured in the unit [channel], is d2 · A2 = 2 erlang (Tab. 7.1). Hence we get:

C2 = (2 − 1.6306)/2 = 0.1847 .   □
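The figures in this example can be reproduced by enumerating all 16 product-form states directly; a sketch, where we parameterize stream 2 by its number of sources S2 = 4 and offered traffic per idle source β2 = 1/3 from Tab. 7.1 (variable names are ours):

```python
from math import comb, factorial

n, A1 = 6, 2.0
S2, beta2, d2 = 4, 1 / 3, 2

# Relative product-form state probabilities q(x1, i2): a truncated
# Poisson factor for stream 1 and an Engset (binomial) factor for
# stream 2, where i2 is the number of type-2 connections.
q = {(x1, i2): A1**x1 / factorial(x1) * comb(S2, i2) * beta2**i2
     for i2 in range(S2 + 1)
     for x1 in range(n - d2 * i2 + 1)}
Q = sum(q.values())
p = {s: v / Q for s, v in q.items()}

# Global distribution of the number of busy channels:
px = [0.0] * (n + 1)
for (x1, i2), v in p.items():
    px[x1 + d2 * i2] += v

E1 = px[n]                 # stream 1: E1 = B1 = C1 = p(6) = 0.1219
E2 = px[n - 1] + px[n]     # stream 2 blocked when < 2 channels free

# Call congestion B2: the stream-2 arrival rate is proportional to
# the number of idle sources (S2 - i2), so gamma2 cancels out.
offered = sum((S2 - i2) * v for (x1, i2), v in p.items())
blocked = sum((S2 - i2) * v for (x1, i2), v in p.items()
              if x1 + d2 * i2 > n - d2)
B2 = blocked / offered     # = 0.2320

# Traffic congestion C2: carried vs. offered channels.
Y2 = sum(d2 * i2 * v for (x1, i2), v in p.items())
A2 = S2 * beta2 / (1 + beta2)           # = 1 erlang
C2 = (d2 * A2 - Y2) / (d2 * A2)         # = 0.1847
```

The relative probabilities q reproduce Fig. 7.4 (e.g. q(0, 1) = 4/3, q(0, 3) = 4/27), and Q = 20.1704 as stated in the text.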

The above example has only 2 streams and 6 channels, and the total number of states equals 16 (Fig. 7.4). When the number of traffic streams and channels increases, the number of states grows very quickly, and we become unable to evaluate the system by calculating the individual state probabilities. In the following section we introduce two classes of algorithms for loss systems which eliminate this problem by aggregation of states.



Figure 7.5: Generalization of the classical teletrac model to BPPtrac and multi-rate trac. The parameters j and Zj describe the BPPtrac, and dj denotes the number of slots required per connection.

7.4 Convolution Algorithm for loss systems

We now consider a trunk group with a total of n homogeneous channels. Being homogeneous means that they have the same service rate. The channel group is offered N different services, also called streams or classes. A call (connection) of type j requires dj channels (slots) during the whole service time, i.e. all dj channels are occupied and released simultaneously. If less than dj channels are idle, then the call attempt is blocked (BCC = blocked calls cleared). We define the state of the system as {x1, x2, . . . , xN}, where xj is the number of channels occupied by type j; the state must fulfill the restrictions (7.18) and (7.19). The arrival processes are general state-dependent Poisson processes. For the j'th arrival process the arrival intensity in state xj = ij · dj, when ij calls (connections) of type j are being served, is λj(ij). We may restrict the number ij of simultaneous calls of type j so that:

0 ≤ xj = ij · dj ≤ nj ≤ n .

It is natural to require that xj is an integral multiple of dj, i.e. xj/dj = ij. This model describes for example the system shown in Fig. 7.5. The above system fulfills the conditions for reversibility and product form:

p(x1, x2, . . . , xN) = p1(x1) · p2(x2) · . . . · pN(xN) ,

where the restrictions (7.18) and (7.19) must be fulfilled. Product form is equivalent to independence between the state probabilities of the traffic streams, and therefore we may


convolve the traffic streams to get the global state probabilities. To aggregate the traffic streams we should express the states in the same bandwidth unit, which we call the Basic Bandwidth Unit (BBU). Here a BBU is one channel. We thus express the state probabilities of stream j as:

pj = {pj(0), pj(1), pj(2), . . . , pj(nj)} ,

where pj(i) = 0 when i ≠ k · dj , k = 0, 1, . . . , nj/dj .

The system mentioned above can be evaluated in an efficient way by the convolution algorithm first introduced in (Iversen, 1987 [45]).

7.4.1 The convolution algorithm

The algorithm is described by the following three steps:

Step 1: One-dimensional state probabilities: Calculate the state probabilities of each traffic stream as if it is alone in the system, i.e. we consider classical loss systems as described in Chaps. 4 & 5. For traffic stream j we find:

pj = {pj(0), pj(1), . . . , pj(nj)} ,  j = 1, 2, . . . , N .   (7.21)

Only the relative values of pj(x) are of importance, so we may choose qj(0) = 1 and calculate the values of qj(xj) relative to qj(0). If during the recursion a term qj(xj) becomes greater than K (e.g. 10^10), then we may divide all values qj(xj), 0 ≤ xj ≤ x, by K and calculate the following values relative to these re-scaled values. To avoid any numerical problems in the following it is advisable to normalize the relative state probabilities so that:

pj(x) = qj(x) / Qj ,  x = 0, 1, . . . , nj ,  where  Qj = Σ_{i=0}^{nj} qj(i) .

As described in Sec. 4.4 we may normalize at each step to avoid any numerical problems.

Step 2: Aggregation of traffic streams: By successive convolutions (convolution operator ∗) we calculate the aggregated state probabilities for the total system except traffic stream number j:

qN/j = { qN/j(0), qN/j(1), . . . , qN/j(n) }
     = p1 ∗ p2 ∗ . . . ∗ pj-1 ∗ pj+1 ∗ . . . ∗ pN .   (7.22)

We first convolve p1 and p2 and obtain p12, which is convolved with p3 to obtain p123, and so on. Both the commutative and the associative laws are valid for the convolution operator, defined in the usual way (Sec. 2.3):
pi ∗ pj = { pi(0) · pj(0),  Σ_{x=0}^{1} pi(x) · pj(1 − x),  . . . ,  Σ_{x=0}^{u} pi(x) · pj(u − x) } ,   (7.23)

where we stop at

u = min{ni + nj , n} .   (7.24)

Notice that we truncate the state space at state u. Even if pi and pj are normalized, the result of a convolution is in general not normalized due to the truncation. It is recommended to normalize after every convolution to avoid numerical problems both during this step and the following ones.

Step 3: Performance measures: Above we have reduced the state space to two traffic streams, qN/j and pj, and we have product form between these. Thus the problem is reduced to a two-dimensional state transition diagram as e.g. shown in Fig. 7.3. For stream j we know the state probabilities, arrival rate, and departure rate in every state. For the aggregated stream qN/j we only know the state probabilities; the transition rates between the states are complex, but we do not need them in the following. We calculate time congestion Ej, call congestion Bj, and traffic congestion Cj of stream j from the reduced two-dimensional state transition diagram. This is done during the convolution:

pN = qN/j ∗ pj .

This convolution results in:
qN(x) = Σ_{xj=0}^{x} qN/j(x − xj) · pj(xj) = Σ_{xj=0}^{x} pj(xj | x) ,   (7.25)

where for pj(xj | x), x is the total number of busy channels, and xj is the number of channels occupied by stream j. Steps 2 & 3 are repeated for every traffic stream. In the following we derive formulæ for Ej, Bj, and Cj.

Time congestion Ej for traffic stream j becomes:

Ej = (1/Q) · Σ_{(xj, x) ∈ SEj} pj(xj | x) ,   (7.26)

where

SEj = { (xj, x) | xj ≤ x ≤ n ∧ ((xj > nj − dj) ∨ (x > n − dj)) } .

The summation over SEj is extended to all states (xj, x) where calls belonging to class j are blocked. The set {xj > nj − dj} corresponds to the states where traffic stream j has utilized

[Table 7.4 shows the truncated two-dimensional state space: rows correspond to qN/j(0), qN/j(1), . . . , qN/j(n), columns to pj(0), pj(1), . . . , pj(nj), and the cell in the row for x − xj and the column for xj contains pj(xj | x); each diagonal thus corresponds to a fixed total number x of busy channels.]

Table 7.4: Convolution algorithm. Exploiting the product form we convolve qN/j(x) and pj(x) to obtain the global distribution, adding contributions in the diagonals, and normalize. During this convolution we obtain the detailed performance measures for stream j. Rows have a fixed number of channels occupied by the aggregated streams N/j and columns have a fixed number xj of channels occupied by stream j.

its quota, and (x > n − dj) corresponds to states with less than dj idle channels. Q is the normalization constant:

Q = Σ_{i=0}^{n} qN(i) .

At this stage we usually have normalized the state probabilities so that Q = 1. The truncated state space is shown in Table 7.4, and the global state probability

qN(i) = Σ_{k=0}^{i} qN/j(k) · qj(i − k)

is the total probability mass on diagonal i.

Call congestion Bj for traffic stream j is the ratio between the number of blocked call attempts for traffic stream j and the total number of call attempts for traffic stream j, both for example per time unit. We find:

Bj = [ Σ_{(xj, x) ∈ SEj} λj(xj) · pj(xj | x) ] / [ Σ_{x=0}^{n} Σ_{xj=0}^{x} λj(xj) · pj(xj | x) ] .   (7.27)


Traffic congestion Cj for traffic stream j: We define as usual the offered traffic as the traffic carried by an infinite trunk group. The carried traffic for traffic stream j is:

Yj = Σ_{x=0}^{n} Σ_{xj=0}^{x} xj · pj(xj | x) .   (7.28)

Thus we find:

Cj = (Aj − Yj) / Aj .

Above we have included states which are outside the state space; they take the value zero. Thus we can find the detailed performance measures for stream j, because we know the arrival rate and service rate of stream j in every state of the reduced state transition diagram in Table 7.4. For the aggregated stream we are able to calculate the total carried traffic and thus the aggregated traffic congestion. But we are not able to calculate time congestion or call congestion, because we do not know the state transitions for the aggregated stream N/j; we only know the state probabilities and that the product form is valid.

The algorithm was first implemented in the PC-tool ATMOS (Listov-Saabye & Iversen, 1989 [83]). The storage requirements are proportional to n, as we may calculate the state probabilities of a traffic stream when they are needed. In practice we use storage proportional to n · N, because we save intermediate results of the convolutions for later re-use. It can be shown (Iversen & Stepanov, 1997 [47]) that we need (4N − 6) convolutions when we calculate the traffic characteristics for all N traffic streams. Thus the calculation time is linear in N and quadratic in n.
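The three steps can be sketched compactly in Python (a hypothetical illustration, not the ATMOS implementation; all function and variable names are mine). With the parameters of Example 7.4.2 below it reproduces the per-stream distributions and the time congestion E3:

```python
from math import comb

def normalize(p):
    s = sum(p)
    return [v / s for v in p]

def poisson(A, n):
    # Step 1, Poisson stream: truncated Poisson (Erlang), q(x) = q(x-1)*A/x
    q, out = 1.0, [1.0]
    for x in range(1, n + 1):
        q *= A / x
        out.append(q)
    return normalize(out)

def engset(S, beta, d, n):
    # Step 1, Engset stream: i connections occupy i*d channels, q(i) ~ C(S,i)*beta^i
    out = [0.0] * (n + 1)
    for i in range(min(S, n // d) + 1):
        out[i * d] = comb(S, i) * beta ** i
    return normalize(out)

def pascal(S, beta, d, n, n_max):
    # Step 1, Pascal stream: negative binomial recursion q(i+1) = q(i)*(S+i)*beta/(i+1)
    out, q, i = [0.0] * (n + 1), 1.0, 0
    while i * d <= min(n, n_max):
        out[i * d] = q
        q *= (S + i) * beta / (i + 1)
        i += 1
    return normalize(out)

def convolve(p, q, n):
    # Step 2: truncated convolution (7.23)-(7.24), u = min(n_i + n_j, n)
    u = min(len(p) + len(q) - 2, n)
    return [sum(p[x] * q[k - x] for x in range(len(p)) if 0 <= k - x < len(q))
            for k in range(u + 1)]

n = 6
p1 = poisson(2.0, n)          # stream 1: Poisson, A = 2, d = 1
p2 = engset(4, 1/3, 2, n)     # stream 2: Engset, S = 4, beta = 1/3, d = 2
p3 = pascal(2, 1/3, 1, n, 4)  # stream 3: Pascal, S = 2, beta = 1/3, d = 1, n3 = 4
p12 = normalize(convolve(p1, p2, n))
q123 = convolve(p12, p3, n)
Q = sum(q123)
# Step 3: stream 3 is blocked when all 6 channels are busy, or when it holds 4 channels
E3 = (q123[6] + p3[4] * (p12[0] + p12[1])) / Q
print(round(p12[6], 4), round(E3, 4))
```

For brevity the sketch computes only the time congestion of stream 3; call and traffic congestion follow from the same reduced two-dimensional diagram, as shown in the worked example below.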

Example 7.4.1: De-convolution
In principle we may obtain qN/j from qN by de-convolving pj and then calculate the performance measures during the re-convolution of pj and qN/j. In this way we need not repeat all the convolutions (7.22) for each traffic stream. However, when implementing this approach we get numerical problems. The convolution is from a numerical point of view very stable, and therefore the de-convolution will be unstable. Nevertheless, we may apply de-convolution in some cases, for instance when the traffic sources are on/off sources. 2

Example 7.4.2: Three traffic streams
We first illustrate the algorithm with a small example, where we go through the calculations in every detail. We consider a system with 6 channels and 3 traffic streams. In addition to the two streams in Example 7.3.3 we add a Pascal stream with class limitation as shown in Tab. 7.5 (cf. Example 5.7.1). We want to calculate the performance measures of traffic stream 3.

Step 1: We calculate the state probabilities pj(x), (x = 1, 2, . . . , nj), of each traffic stream j (j = 1, 2, 3) as if it were alone. The results are given in Tab. 7.6.

Stream 3: Pascal traffic (Negative Binomial)
S3 = -2 sources
γ3 = -1/3 calls/time unit per idle source
μ3 = 1 (time unit)^-1
β3 = γ3/μ3 = -1/3 erlang per idle source
Z3 = 1/(1 + β3) = 3/2
d3 = 1 channel/call
A3 = S3 · (1 − Z3) = 1 erlang
n3 = 4 (max. # of simultaneous calls)


Table 7.5: A Pascal traffic stream (Example 5.7.1) is offered to the same trunk group as the two traffic streams of Tab. 7.1.
Step 2: We evaluate the convolution of p1(x1) with p2(x2), p1 ∗ p2 = q12(x12), truncate the state space at n = 6, and normalize the probabilities so that we obtain p12 as shown in Tab. 7.6. Notice that this is the result obtained in Example 7.3.3.

Step 3: We convolve p12(x12) with p3(x3), truncate at n, and obtain q123(x123) as shown in Tab. 7.6.

State x   p1(x)    p2(x)    q12(x) = p1 ∗ p2   p12(x)   p3(x)    q123(x) = p12 ∗ p3   p123(x)
0         0.1360   0.3176   0.0432             0.0496   0.4525   0.0224               0.0259
1         0.2719   0.0000   0.0864             0.0992   0.3017   0.0599               0.0689
2         0.2719   0.4235   0.1440             0.1653   0.1508   0.1122               0.1293
3         0.1813   0.0000   0.1727             0.1983   0.0670   0.1579               0.1819
4         0.0906   0.2118   0.1727             0.1983   0.0279   0.1825               0.2104
5         0.0363   0.0000   0.1459             0.1675   0.0000   0.1794               0.2067
6         0.0121   0.0471   0.1062             0.1219   0.0000   0.1535               0.1769
Total     1.0000   1.0000   0.8711             1.0000   1.0000   0.8678               1.0000

Table 7.6: Convolution algorithm applied to Example 7.4.2. The state probabilities of the individual traffic streams have been calculated in Examples 4.5.1, 5.5.1, and 5.7.1.

Time congestion E3 is obtained from the detailed state probabilities. Traffic stream 3 (single-slot traffic) experiences time congestion both when all six channels are busy and when the traffic stream occupies 4 channels (maximum allocation). From the detailed state probabilities we get:

E3 = [ q123(6) + p3(4) · {p12(0) + p12(1)} ] / 0.8678
   = [ 0.1535 + 0.0279 · {0.0496 + 0.0992} ] / 0.8678 ,

E3 = 0.1817 .

Notice that the state {p3(4) · p12(2)} is included in state q123(6). The carried traffic for traffic stream 3 is obtained during the convolution of p3(i) and p12(j) and becomes:
Y3 = (1/0.8678) · Σ_{x3=1}^{4} x3 · p3(x3) · Σ_{x12=0}^{6−x3} p12(x12) ,

Y3 = 0.6174 / 0.8678 = 0.7115 .

As the offered traffic is A3 = 1, we get the traffic congestion:

C3 = (1 − 0.7115) / 1 = 0.2885 .

The call congestion becomes:

B3 = x / xt ,

where x is the number of lost calls per time unit, and xt is the total number of call attempts per time unit. Using the normalized probabilities from Tab. 7.6 we get {λ3(i) = (S3 − i) · γ3}:

x = λ3(0) · {p3(0) p12(6)} + λ3(1) · {p3(1) p12(5)} + λ3(2) · {p3(2) p12(4)} + λ3(3) · {p3(3) p12(3)} + λ3(4) · p3(4) · {p12(2) + p12(1) + p12(0)} ,

x = 0.2503 .

xt = λ3(0) · p3(0) · Σ_{j=0}^{6} p12(j)
   + λ3(1) · p3(1) · Σ_{j=0}^{5} p12(j)
   + λ3(2) · p3(2) · Σ_{j=0}^{4} p12(j)
   + λ3(3) · p3(3) · Σ_{j=0}^{3} p12(j)
   + λ3(4) · p3(4) · Σ_{j=0}^{2} p12(j) ,

xt = 1.1763 .

We thus get:

B3 = x / xt = 0.2128 .

In a similar way, by interchanging the order in which the traffic streams are convolved, we find the performance measures of streams 1 and 2. The total number of micro-states in this example is 47. By the convolution method we reduce the number of states so that we never need more than two vectors of n+1 states each, i.e. 14 states. By using the ATMOS tool we get the results shown in Tab. 7.7 and Tab. 7.8. The total congestion can be split up into congestion due to class limitation (ni) and congestion due to the limited number of channels (n). 2

Input                                   Total number of channels n = 6
j   Offered   Peakedness   Max.     Slot size   Mean holding   Sources   βj
    traffic   Zj           alloc.   dj          time 1/μj      Sj
    Aj                     nj
1   2.0000    1.00         6        1           1.00                      0
2   1.0000    0.75         6        2           1.00           4          0.3333
3   1.0000    1.50         4        1           1.00           -2        -0.3333

Table 7.7: Input data to ATMOS for Example 7.4.2 with three traffic streams.

Output
j       Call congestion   Traffic congestion   Time congestion   Carried traffic
        Bj                Cj                   Ej                Yj
1       1.769 200E-01     1.769 200E-01        1.769 200E-01     1.646 160
2       3.346 853E-01     2.739 344E-01        3.836 316E-01     1.452 131
3       2.127 890E-01     2.884 898E-01        1.817 079E-01     0.711 510
Total                     2.380 397E-01                          3.809 801

Table 7.8: Output data from ATMOS for the input data in Tab. 7.7.
Example 7.4.3: Large-scale example
To illustrate the tool ATMOS we consider in Tab. 7.9 and Tab. 7.10 an example with 1536 trunks and 24 traffic streams. We notice that the time congestion is independent of the peakedness Zj and proportional to the slot-size dj, because we often have:

p(n) ≈ p(n − 1) ≈ . . . ≈ p(n − dj)  for dj ≪ n .   (7.29)

This is obvious, as the time congestion only depends on the global state probabilities. The call congestion is almost equal to the time congestion; it depends only weakly upon the slot-size. This is also to be expected, as the call congestion is equal to the time congestion with one source removed (arrival theorem). In the table with output data we have in the rightmost column shown the relative traffic congestion divided by (dj · Zj), using the single-slot Poisson traffic as reference value (dj = Zj = 1). We notice that the traffic congestion is proportional to dj · Zj, which is the usual assumption when using the Equivalent Random Traffic (ERT) method (Sec. 6.4.3). The mean value of the offered traffic increases linearly with the slot-size, whereas the variance increases with the square of the slot-size. The peakedness (variance/mean ratio) of multi-rate traffic thus increases linearly with the slot-size. We thus notice that the traffic congestion is much more relevant than the time congestion and call congestion for characterizing the performance of the system. Below in Example 7.5.1 we calculate the total traffic congestion using Fredericks & Hayward's method for multi-rate traffic (Sec. 7.5). 2
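The aggregate peakedness quoted below in Example 7.5.1 follows directly from this scaling rule: measured in channels, each stream contributes dj · Aj to the mean and dj² · Aj · Zj to the variance. A quick Python check against the data of Tab. 7.9 (variable names are mine):

```python
# Mean and peakedness of the aggregated traffic of Tab. 7.9, in channels:
# each stream adds d*A to the mean and d*d*A*Z to the variance.
groups = [(64.0, 1), (32.0, 2), (16.0, 4), (8.0, 8)]  # (A_j, d_j), six streams each
Zs = [0.2, 0.5, 1.0, 2.0, 4.0, 8.0]                   # peakedness within each group

mean = sum(d * A for A, d in groups for _ in Zs)
var = sum(d * d * A * Z for A, d in groups for Z in Zs)
Z_total = var / mean
print(mean, Z_total)   # 1536 channels, peakedness 9.8125
```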

7.5 Fredericks & Hayward's method

Basharin & Kurenkov have extended Fredericks & Hayward's method (Sec. 6.5) to include multi-slot (multi-rate) traffic. Let every connection require d channels during the whole holding time, from start to termination. Then by splitting this traffic into d identical sub-streams (Sec. 6.4), each call will use a single channel in each of the d sub-groups, and we get d identical systems with single-slot traffic. If a call uses one channel instead of d channels, then the mean value becomes d times smaller and the variance d² times smaller (change of scale, Example 2.3.3). Therefore, the peakedness becomes d times smaller. If furthermore the arrival process has a peakedness Z, then by


Input                    Total # of channels n = 1536
 j    Aj       Zj      nj      dj    mht     Sj         βj
 1    64.000   0.200   1536    1     1.000    80.000     4.000
 2    64.000   0.500   1536    1     1.000   128.000     1.000
 3    64.000   1.000   1536    1     1.000               0.000
 4    64.000   2.000   1536    1     1.000   -64.000    -0.500
 5    64.000   4.000   1536    1     1.000   -21.333    -0.750
 6    64.000   8.000   1536    1     1.000    -9.143    -0.875
 7    32.000   0.200   1536    2     1.000    40.000     4.000
 8    32.000   0.500   1536    2     1.000    64.000     1.000
 9    32.000   1.000   1536    2     1.000               0.000
10    32.000   2.000   1536    2     1.000   -32.000    -0.500
11    32.000   4.000   1536    2     1.000   -10.667    -0.750
12    32.000   8.000   1536    2     1.000    -4.571    -0.875
13    16.000   0.200   1536    4     1.000    20.000     4.000
14    16.000   0.500   1536    4     1.000    32.000     1.000
15    16.000   1.000   1536    4     1.000               0.000
16    16.000   2.000   1536    4     1.000   -16.000    -0.500
17    16.000   4.000   1536    4     1.000    -5.333    -0.750
18    16.000   8.000   1536    4     1.000    -2.286    -0.875
19     8.000   0.200   1536    8     1.000    10.000     4.000
20     8.000   0.500   1536    8     1.000    16.000     1.000
21     8.000   1.000   1536    8     1.000               0.000
22     8.000   2.000   1536    8     1.000    -8.000    -0.500
23     8.000   4.000   1536    8     1.000    -2.667    -0.750
24     8.000   8.000   1536    8     1.000    -1.143    -0.875

Table 7.9: Input data for Example 7.4.3 with 24 traffic streams and 1536 channels. The maximum number of simultaneous calls of type j (nj) is in this example n = 1536 (full accessibility), and mht is an abbreviation for mean holding time.


Output
 j    Call congestion   Traffic congestion   Time congestion   Carried traffic   Rel. value
      Bj                Cj                   Ej                Yj                Cj/(dj Zj)
 1    6.187 744E-03     1.243 705E-03        6.227 392E-03       63.920 403      0.9986
 2    6.202 616E-03     3.110 956E-03        6.227 392E-03       63.800 899      0.9991
 3    6.227 392E-03     6.227 392E-03        6.227 392E-03       63.601 447      1.0000
 4    6.276 886E-03     1.247 546E-02        6.227 392E-03       63.201 570      1.0017
 5    6.375 517E-03     2.502 346E-02        6.227 392E-03       62.398 499      1.0046
 6    6.570 378E-03     5.025 181E-02        6.227 392E-03       60.783 884      1.0087
 7    1.230 795E-02     2.486 068E-03        1.246 554E-02       63.840 892      0.9980
 8    1.236 708E-02     6.222 014E-03        1.246 554E-02       63.601 791      0.9991
 9    1.246 554E-02     1.246 554E-02        1.246 554E-02       63.202 205      1.0009
10    1.266 184E-02     2.500 705E-02        1.246 554E-02       62.399 549      1.0039
11    1.305 003E-02     5.023 347E-02        1.246 554E-02       60.785 058      1.0083
12    1.379 446E-02     1.006 379E-01        1.246 554E-02       57.559 172      1.0100
13    2.434 998E-02     4.966 747E-03        2.497 245E-02       63.682 128      0.9970
14    2.458 374E-02     1.244 484E-02        2.497 245E-02       63.203 530      0.9992
15    2.497 245E-02     2.497 245E-02        2.497 245E-02       62.401 763      1.0025
16    2.574 255E-02     5.019 301E-02        2.497 245E-02       60.787 647      1.0075
17    2.722 449E-02     1.006 755E-01        2.497 245E-02       57.556 771      1.0104
18    2.980 277E-02     1.972 682E-01        2.497 245E-02       51.374 835      0.9899
19    4.766 901E-02     9.911 790E-03        5.009 699E-02       63.365 645      0.9948
20    4.858 283E-02     2.489 618E-02        5.009 699E-02       62.406 645      0.9995
21    5.009 699E-02     5.009 699E-02        5.009 699E-02       60.793 792      1.0056
22    5.303 142E-02     1.007 214E-01        5.009 699E-02       57.553 828      1.0109
23    5.818 489E-02     1.981 513E-01        5.009 699E-02       51.318 316      0.9942
24    6.525 455E-02     3.583 491E-01        5.009 699E-02       41.065 660      0.8991
Total                   5.950 135E-02                          1444.605

Table 7.10: Output for Example 7.4.3 with input data given in Tab. 7.9. As mentioned in Example 7.5.1, Fredericks & Hayward's method results in a total congestion equal to 6.114 %. The total traffic congestion 5.950 % is obtained from the total carried traffic and the offered traffic.


splitting into d · Z traffic streams the traffic process becomes a single-slot traffic process with peakedness one, which we evaluate by Erlang's B-formula:

(n, A, Z, d) ∼ (n/(dZ), A/(dZ), 1, 1) ∼ (n/Z, A/Z, 1, d) ∼ (n/d, A/d, Z, 1) ∼ (n, A/Z, 1, dZ) .   (7.30)

The last equivalence shows that by increasing the bandwidth d by the factor Z, we may keep the number of channels n constant and get an arrival process with Z = 1. If more traffic streams are offered to the same group, then we may keep the number of channels fixed. The bandwidth d · Z is in general not integral. We should then choose a basic bandwidth unit (BBU) so that both n and d · Z approximately become integral multiples of this unit. The smaller the bandwidth unit (granularity) is chosen, the better the approximation becomes. However, it is recommended to aggregate all traffic streams into one single-slot Poisson traffic stream and calculate the total traffic congestion. This may then be split up into traffic congestion for each stream as shown in Example 7.4.3.
Example 7.5.1: Multi-slot traffic
In Example 7.4.3 we consider a trunk group with 1536 channels, which is offered 24 traffic streams with individual slot-sizes and peakedness. The exact total traffic congestion is equal to 5.950%. If we calculate the peakedness of the offered traffic by adding all traffic streams, then we find peakedness Z = 9.8125 and a total mean value equal to 1536 erlang. Fredericks & Hayward's method results in a total traffic congestion equal to 6.114%, which is thus a conservative estimate (worst case) of the theoretical value 5.950%. 2
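The 6.114 % figure can be approximated in a few lines of Python. This is a sketch under stated assumptions (names are mine): Erlang B is evaluated at integral channel numbers and interpolated linearly in n, which is cruder than the proper continuous extension of the B-formula, so the result is only close to the quoted value:

```python
import math

def erlang_b(n, A):
    # Erlang B recursion for an integral number of channels n and traffic A
    B = 1.0
    for k in range(1, n + 1):
        B = A * B / (k + A * B)
    return B

def hayward(n, A, Z, d=1.0):
    # E(n, A, Z, d) ~ ErlangB(n/(dZ), A/(dZ)); interpolate for non-integral n
    nn = n / (d * Z)
    AA = A / (d * Z)
    lo = erlang_b(math.floor(nn), AA)
    hi = erlang_b(math.ceil(nn), AA)
    return lo + (nn - math.floor(nn)) * (hi - lo)

# Aggregated traffic of Example 7.5.1: n = 1536 channels, A = 1536 erlang, Z = 9.8125
est = hayward(1536, 1536, 9.8125)
print(round(est, 4))   # close to the 6.114 % quoted in the example
```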

7.6 State space based algorithms

The convolution algorithm is based on aggregation of traffic streams, where we end up with a traffic stream which is the aggregation of all traffic streams except the one we are interested in. Another approach is to aggregate the state space into global state probabilities.

7.6.1 Fortet & Grandjean (Kaufman & Roberts) algorithm

In case of Poisson arrival processes the algorithm becomes very simple, by generalizing (7.11). Let pj(x) denote the contribution of stream j to the global state probability p(x):

p(x) = Σ_{j=1}^{N} pj(x) .   (7.31)


Thus the average number of channels occupied by stream j when the system is in global state x is x · pj(x). Let traffic stream j have the slot-size dj. Due to reversibility we will have local balance for every traffic type. The local balance equation becomes:

λj · p(x − dj) = (x · pj(x) / dj) · μj ,  x = dj, dj + 1, . . . , n .   (7.32)

The left-hand side is the flow from global state [x − dj] to state [x] due to arrivals of type j. The right-hand side is the flow from state [x] to state [x − dj] due to departures of type j calls. The average number of channels occupied by stream j in global state x is not an integer, because it is a weighted sum over several state probabilities. From (7.32) we get:

pj(x) = (1/x) · dj · Aj · p(x − dj) .   (7.33)

The total state probability p(x) is obtained by summing over all traffic streams (7.31):

p(x) = (1/x) · Σ_{j=1}^{N} dj · Aj · p(x − dj) ,  where p(x) = 0 for x < 0 .   (7.34)

This is Fortet & Grandjean's algorithm (Fortet & Grandjean, 1964 [33]). The algorithm is usually called Kaufman & Roberts' algorithm, as it was re-discovered by these authors in 1981 (Kaufman, 1981 [66]) (Roberts, 1981 [103]).
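Recursion (7.34) translates almost line for line into code. A minimal Python sketch (function name is mine):

```python
def kaufman_roberts(n, streams):
    # Global state probabilities by (7.34); streams = [(A_j, d_j), ...]
    q = [0.0] * (n + 1)
    q[0] = 1.0
    for x in range(1, n + 1):
        q[x] = sum(d * A * q[x - d] for A, d in streams if x >= d) / x
    s = sum(q)
    return [v / s for v in q]

# Sanity check: a single stream with d = 1 reduces to the truncated Poisson
# (Erlang) distribution; A = 2 erlang on n = 6 channels gives p(6) = 0.0121,
# the Erlang B blocking probability E_6(2).
p = kaufman_roberts(6, [(2.0, 1)])
print(round(p[6], 4))
```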

7.6.2 Generalized algorithm

The above model can easily be generalized to BPP traffic (Iversen, 2005 [49]):

(x/dj) · pj(x) · μj = p(x − dj) · Sj · γj − ((x − dj)/dj) · pj(x − dj) · γj .   (7.35)

On the right-hand side the first term assumes that all Sj type-j sources are idle during one time unit. As we know that on the average ((x − dj)/dj) · pj(x − dj) type-j sources are busy in global state x − dj, we reduce the first term by the second term to get the right value. Thus we get:

p(x) = 0 for x < 0 ,  p(x) = p(0) for x = 0 ,  p(x) = Σ_{j=1}^{N} pj(x) for x = 1, 2, . . . , n ,   (7.36)

where

pj(x) = (dj · Sj · γj / (x · μj)) · p(x − dj) − ((x − dj) · γj / (x · μj)) · pj(x − dj) ,  x ≥ dj ,   (7.37)

pj(x) = 0 ,  x < dj .   (7.38)

The state probability p(0) is obtained by the normalization condition:

Σ_{i=0}^{n} p(i) = p(0) + Σ_{i=1}^{n} Σ_{j=1}^{N} pj(i) = 1 ,   (7.39)

as pj(0) = 0, whereas p(0) ≠ 0. Above we have used the parameters (Sj, γj) to characterize the traffic streams. Alternatively we may also use (Aj, Zj), related to (Sj, βj) by the formulæ (5.22)–(5.25). Then (7.37) becomes:

pj(x) = (dj · Aj / (x · Zj)) · p(x − dj) − ((x − dj) · (1 − Zj) / (x · Zj)) · pj(x − dj) .   (7.40)

For Poisson arrivals we of course get (7.34). In practical evaluation of the formula we use normalization in each step, as described in Sec. 4.4.1. This results in a very accurate and effective algorithm. The number of operations and the memory requirements also become very small, as we only need to store the dj previous state probabilities of traffic stream j and the max{dj} previous values of the global state probabilities. The number of operations is linear in the number of channels and the number of traffic streams, and the algorithm is thus extremely effective.

Performance measures

By this algorithm we are able to obtain performance measures for each individual traffic stream.

Time congestion: Call attempts of stream j require dj idle channels and will be blocked with probability:

Ej = Σ_{i=n−dj+1}^{n} p(i) .   (7.41)

Traffic congestion: From the state probabilities pj(x) we get the total carried traffic of stream j:

Yj = Σ_{x=1}^{n} x · pj(x) .   (7.42)

Thus the traffic congestion of stream j becomes:

Cj = (Aj · dj − Yj) / (Aj · dj) .   (7.43)

The total carried traffic is

Y = Σ_{j=1}^{N} Yj ,   (7.44)

so the total traffic congestion becomes:

C = (A − Y) / A ,   (7.45)

where A is the total offered traffic measured in channels:

A = Σ_{j=1}^{N} dj · Aj .

Call congestion: This is obtained from the traffic congestion by using (5.49):

Bj = (1 + βj) · Cj / (1 + βj · Cj) .   (7.46)

The total call congestion cannot be obtained by this formula, as we do not have a global value of β. But from the individual carried traffic and the individual call congestion we may find the total number of offered and accepted calls for each stream, and from this we get the total call congestion.
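Recursion (7.40) together with the performance measures above can be sketched as follows (Python; names are mine, and class limits nj are omitted for brevity). On the two streams of Example 7.3.3 (cf. Example 7.6.1 below) it reproduces E1, C2, and B2:

```python
def bpp_loss(n, streams):
    # Generalized algorithm: streams = [(A_j, Z_j, d_j), ...], recursion (7.40)
    q = [0.0] * (n + 1)                      # relative global state values
    qj = [[0.0] * (n + 1) for _ in streams]  # per-stream contributions
    q[0] = 1.0
    for x in range(1, n + 1):
        for j, (A, Z, d) in enumerate(streams):
            if x >= d:
                qj[j][x] = (d * A * q[x - d]
                            - (x - d) * (1 - Z) * qj[j][x - d]) / (x * Z)
            q[x] += qj[j][x]
    s = sum(q)
    return [v / s for v in q], [[v / s for v in row] for row in qj]

# Example 7.3.3: Poisson (A=2, Z=1, d=1) and Engset (A=1, Z=0.75, d=2), n = 6
p, pj = bpp_loss(6, [(2.0, 1.0, 1), (1.0, 0.75, 2)])
E1 = p[6]                                  # time congestion of single-slot stream 1
Y2 = sum(x * pj[1][x] for x in range(7))   # carried traffic of stream 2, (7.42)
C2 = (2.0 - Y2) / 2.0                      # offered traffic A2*d2 = 2 channels, (7.43)
beta2 = (1 - 0.75) / 0.75
B2 = (1 + beta2) * C2 / (1 + beta2 * C2)   # call congestion via (7.46)
print(round(E1, 4), round(C2, 4), round(B2, 4))
```

A production version would also normalize in each step, as recommended above; the sketch normalizes once at the end, which is adequate for this small n.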

Example 7.6.1: Generalized algorithm
We evaluate Example 7.3.3 by the generalized algorithm. For the Poisson traffic (stream 1) we have d = 1, A = 2, and Z = 1. We thus get:

q1(x) = (2/x) · q(x − 1) ,  q1(0) = 0 ,  q(0) = 1 .

The total relative state probability is q(x) = q1(x) + q2(x). For the Engset traffic (stream 2) we have d = 2, A = 1, and Z = 0.75. We then get:

q2(x) = (2/(x · 0.75)) · q(x − 2) − ((x − 2)/x) · (1/3) · q2(x − 2) ,  q2(0) = q2(1) = 0 .

Table 7.11 shows the non-normalized relative state probabilities when we let state zero equal one. Table 7.12 shows the normalized state probabilities and the carried traffic of each stream in each state. In a computer program we would normalize the state probabilities after each iteration (increasing the number of channels by one) and calculate the aggregated carried traffic for each stream. This traffic value should of course also be normalized in each step. In this way we only need to store the previous di values and the carried traffic of each traffic stream. We get the following performance measures, which of course are the same as obtained by the convolution algorithm:

E1 = p(6) = 0.1219
E2 = p(5) + p(6) = 0.2894
C1 = (2 − 1.7562) / 2 = 0.1219
C2 = (2 − 1.6306) / 2 = 0.1847
B1 = (1 + 0) · 0.1219 / (1 + 0 · 0.1219) = 0.1219
B2 = (1 + 1/3) · 0.1847 / (1 + (1/3) · 0.1847) = 0.2320


State x   Poisson q1(x) = (2/x)·q(x−1)   Engset q2(x)   Total q(x)
0                0                            0             1
1                2                            0             2
2                2                           4/3           10/3
3               20/9                        16/9            4
4                2                            2             4
5               8/5                         16/9          152/45
6             152/135                        4/3          332/135
Total                                                    2723/135

Table 7.11: Example 7.6.1: relative state probabilities for Example 7.3.3 evaluated by the generalized algorithm.

State x   Poisson              Engset               Total
          p1(x)    x·p1(x)    p2(x)    x·p2(x)     p(x)     x·p(x)
0         0.0000   0.0000     0.0000   0.0000      0.0496   0.0000
1         0.0992   0.0992     0.0000   0.0000      0.0992   0.0992
2         0.0992   0.1983     0.0661   0.1322      0.1653   0.3305
3         0.1102   0.3305     0.0881   0.2644      0.1983   0.5949
4         0.0992   0.3966     0.0992   0.3966      0.1983   0.7932
5         0.0793   0.3966     0.0881   0.4407      0.1675   0.8373
6         0.0558   0.3349     0.0661   0.3966      0.1219   0.7315
Total              1.7562              1.6306      1.0000   3.3867

Table 7.12: Example 7.6.1: absolute state probabilities and carried traffic yi(x) = x · pi(x) for Example 7.3.3 evaluated by the generalized algorithm.

2

7.6.3 Batch Poisson arrival process

When we have more traffic streams, the state-based algorithm is modified by exploiting the analogy with the Pascal distribution. Inserting A (5.76) and Z (5.77) we get:

pj(x) = (dj · λj / (x · μj)) · p(x − dj) + ((x − dj)/x) · (1 − pj) · pj(x − dj) ,  0 ≤ x ≤ n ,   (7.47)

where pj, λj and μj are parameters of the batched Poisson process. Thus the state-based algorithm for BPP (Binomial, Poisson, Pascal) traffic is generalized to include the batched Poisson process in a simple way. This section is to be elaborated in further detail, in particular the performance measures.

7.7 Final remarks

The convolution algorithm for loss systems was first published in (Iversen, 1987 [45]). A similar approach to a less general model was published in two papers by Ross & Tsang (1990 [105]), (1990 [106]) without reference to this original paper from 1987, even though it was known by the authors. The generalized algorithm in Sec. 7.6.2 is new (Iversen, 2007 [50]) and includes Delbrouck's algorithm (Delbrouck, 1983 [23]), which is more complex to evaluate. Compared with all other algorithms the generalized algorithm requires much less memory and fewer operations to evaluate. By normalizing the state probabilities in each iteration we get a very accurate and simple algorithm. In principle, we may apply the generalized algorithm for BPP traffic to calculate the global state probabilities for (N − 1) traffic streams and then use the convolution algorithm to calculate the performance measures for the remaining traffic stream we want to evaluate. The convolution algorithm allows for minimum and maximum allocation of channels to each traffic stream, but it does not allow for restrictions based on global states. It also allows for arbitrary state-dependent arrival processes. The generalized algorithm does not keep account of the number of calls of the individual traffic streams, but it allows for restrictions based on global states, e.g. trunk reservation.
Updated 2010-03-23

Chapter 8

Dimensioning of telecom networks


Network planning includes designing, optimizing, and operating telecommunication networks. In this chapter we consider the traffic engineering aspects of network planning. In Sec. 8.1 we introduce traffic matrices and the fundamental double factor method (Kruithof's method) for updating traffic matrices according to forecasts. The traffic matrix contains the basic information for choosing the topology (Sec. 8.2) and the traffic routing (Sec. 8.3). In Sec. 8.4 we consider approximate calculation of end-to-end blocking probabilities, and describe the Erlang fixed-point method (reduced load method). Sec. 8.5 generalizes the convolution algorithm introduced in Chap. 7 to networks, with exact calculation of end-to-end blocking in virtual circuit switched networks with direct routing. The model allows for multi-slot BPP traffic with minimum and maximum allocation. The same model can be applied to hierarchical cellular wireless networks with overlapping cells and to optical WDM networks. In Sec. 8.6 we consider service-protection mechanisms. Finally, in Sec. 8.7 we consider optimization of telecommunication networks by applying Moe's principle.

8.1 Traffic matrices

To specify the traffic demand in an area with K exchanges we should know the K² traffic values Aij (i, j = 1, . . . , K), as given in the traffic matrix shown in Tab. 8.1. The traffic matrix assumes we know the location areas of the exchanges. Knowing the traffic matrix we have the following two interdependent tasks:

Decide on the topology of the network (which exchanges should be interconnected?)

Decide on the traffic routing (how do we exploit a given topology?)


FROM \ TO   1      ···   i      ···   j      ···   K      | Ai· = Σ_{k=1}^{K} Aik
1           A11    ···   A1i    ···   A1j    ···   A1K    | A1·
...
i           Ai1    ···   Aii    ···   Aij    ···   AiK    | Ai·
...
j           Aj1    ···   Aji    ···   Ajj    ···   AjK    | Aj·
...
K           AK1    ···   AKi    ···   AKj    ···   AKK    | AK·
A·j = Σ_{k=1}^{K} Akj :
            A·1    ···   A·i    ···   A·j    ···   A·K    | Σ_{i=1}^{K} Ai· = Σ_{j=1}^{K} A·j

The traffic matrix has the following elements:
Aij = the traffic from i to j.
Aii = the internal traffic in exchange i.
Ai· = the total outgoing (originating) traffic from i.
A·j = the total incoming (terminating) traffic to j.

Table 8.1: A traffic matrix. The total incoming traffic is equal to the total outgoing traffic.

8.1.1 Kruithof's double factor method

Let us assume we know the actual traffic matrix and that we have a forecast for future row sums O(i) (Originating) and column sums T(i) (Terminating), i.e. the total outgoing and incoming traffic for each exchange. This traffic prognosis may be obtained from subscriber forecasts for the individual exchanges. By means of Kruithof's double factor method (Kruithof, 1937 [77]) we are able to estimate the future individual values Aij of the traffic matrix. The procedure is to adjust the individual values Aij so that they agree with the new row/column sums:

    Aij ← Aij · S1/S0 ,                                             (8.1)

where S0 is the actual sum and S1 is the new sum of the row/column considered. If we start by adjusting Aij with respect to the new row sum Si, then the row sums will agree, but the column sums will not agree with the wanted values. Therefore, the next step is to adjust the obtained values Aij with respect to the column sums so that these agree, but this implies that the row sums no longer agree. By alternately adjusting row and column sums the values obtained will after a few iterations converge towards unique values. The procedure is best illustrated by the example given below.
Example 8.1.1: Application of Kruithof's double factor method
We consider a telecommunication network with two exchanges. The present traffic matrix is given as:

              1       2     Total
    1        10      20       30
    2        30      40       70
  Total      40      60      100

The prognosis for the total originating and terminating traffic for each exchange is:

              1       2     Total
    1                         45
    2                        105
  Total      50     100      150

The task is then to estimate the individual values of the matrix by means of the double factor method.

Iteration 1: Adjust the row sums. We multiply the first row by (45/30) and the second row by (105/70) and get:

              1       2     Total
    1        15      30       45
    2        45      60      105
  Total      60      90      150

The row sums are now correct, but the column sums are not.

Iteration 2: Adjust the column sums. We multiply the first column by (50/60) and the second column by (100/90):

              1        2      Total
    1      12.50    33.33     45.83
    2      37.50    66.67    104.17
  Total    50.00   100.00    150.00

We now have the correct column sums, whereas the row sums deviate a little. We continue by alternately adjusting the row and column sums:

Iteration 3:

              1        2      Total
    1      12.27    32.73     45.00
    2      37.80    67.20    105.00
  Total    50.07    99.93    150.00

Iteration 4:

              1        2      Total
    1      12.25    32.75     45.00
    2      37.75    67.25    105.00
  Total    50.00   100.00    150.00

After four iterations both the row and the column sums agree with two decimals.
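The alternating row/column scaling of the example is straightforward to implement. The following is a minimal sketch in Python; the function name `kruithof` and the fixed iteration count are our own choices, not from the text:

```python
def kruithof(matrix, row_targets, col_targets, iterations=20):
    """Kruithof's double factor method: alternately scale rows and
    columns so the sums converge to the forecast totals (Eq. 8.1)."""
    a = [row[:] for row in matrix]          # work on a copy
    for _ in range(iterations):
        # Adjust rows: multiply each row by (new sum S1 / current sum S0)
        for i, t in enumerate(row_targets):
            s = sum(a[i])
            a[i] = [x * t / s for x in a[i]]
        # Adjust columns in the same way
        for j, t in enumerate(col_targets):
            s = sum(row[j] for row in a)
            for row in a:
                row[j] *= t / s
    return a

# The matrix of Example 8.1.1
future = kruithof([[10, 20], [30, 40]], [45, 105], [50, 100])
```

With the data of Example 8.1.1 the iteration converges to A11 = 12.25, A12 = 32.75, A21 = 37.75, A22 = 67.25 (two decimals), in agreement with Iteration 4 above.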

There are other methods for estimating the future individual traffic values Aij, but Kruithof's double factor method has some important properties (Bear, 1988 [5]):

- Uniqueness. Only one solution exists for a given forecast.
- Reversibility. The resulting matrix can be reversed to the initial matrix by the same procedure.
- Transitivity. The resulting matrix is the same whether it is obtained in one step or via a series of intermediate transformations (for instance one 5-year forecast, or five 1-year forecasts).
- Invariance with regard to the numbering of exchanges. We may change the numbering of the exchanges without influencing the results.
- Fractionizing. The single exchanges can be split into sub-exchanges or be aggregated into larger exchanges without influencing the result. This property is not exactly fulfilled by Kruithof's double factor method, but the deviations are small.


8.2 Topologies

In Chap. 1 we have described the basic topologies: star network, mesh network, ring network, hierarchical network, and non-hierarchical network.

8.3 Routing principles

This is an extensive subject including, among other things, alternative traffic routing and load balancing. A detailed description of this subject is given in (Ash, 1998 [3]).

8.4 Approximate end-to-end calculation methods

If we assume the links of a network are independent, then it is easy to calculate the end-to-end blocking probability. By means of the classical formulae we calculate the blocking probability of each link. If we denote the blocking probability of link i by Ei, then we find the end-to-end blocking probability for a call attempt on route j as follows:

    Ej = 1 − Π_{i ∈ R} (1 − Ei) ,                                   (8.2)

where R is the set of links included in the route of the call. This value is a worst case, because the traffic is smoothed by the blocking on each link, and therefore experiences less congestion on the later links of a route. For small blocking probabilities we have:

    Ej ≈ Σ_{i ∈ R} Ei .                                             (8.3)
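The independence formula (8.2) and the approximation (8.3) are simple to evaluate numerically. A small Python sketch (the function names are our own, not from the text):

```python
from math import prod

def route_blocking(link_blocking):
    """End-to-end blocking on a route, assuming independent links (8.2)."""
    return 1 - prod(1 - e for e in link_blocking)

def route_blocking_approx(link_blocking):
    """First-order approximation (8.3), valid for small blocking values."""
    return sum(link_blocking)
```

For small per-link blocking the approximation slightly overestimates the exact independence value, since the cross terms of the product are neglected.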

8.4.1 Fix-point method

A call will usually occupy channels on several links, and in general the traffic on the individual links of a network will be correlated. The blocking probabilities experienced by a call attempt on the individual links will therefore also be correlated. Erlang's fix-point method is an attempt to take this into account.
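The fix-point idea can be sketched as repeated substitution: each link's blocking is computed by Erlang's B-formula for an offered load that is thinned by the blocking on the other links of every route passing through it. The data layout and function names below are illustrative assumptions, not a prescribed interface:

```python
from math import prod

def erlang_b(n, a):
    """Erlang's B-formula, computed by the standard recursion."""
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e

def erlang_fix_point(links, routes, iterations=100):
    """Reduced load (Erlang fix-point) approximation.

    links:  dict {link: number of channels}
    routes: list of (offered traffic, list of links used)
    Returns {link: blocking probability}.
    """
    E = {l: 0.0 for l in links}
    for _ in range(iterations):
        for l, n in links.items():
            # Load offered to link l, thinned by the blocking on the
            # other links of every route through l (independence assumption).
            a = sum(A * prod(1 - E[k] for k in r if k != l)
                    for A, r in routes if l in r)
            E[l] = erlang_b(n, a)
    return E
```

For a single route over a single link the method reduces to Erlang's B-formula; for longer routes the thinning lowers the load offered to each link, which is the essence of the reduced load idea.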


8.5 Exact end-to-end calculation methods

Circuit switched telecommunication networks with direct routing have the same complexity as queueing networks with more chains (Sec. 12.8 and Tab. 12.3). It is necessary to keep account of the number of busy channels on each link. Therefore, the maximum number of states becomes:

    Π_{i=1}^{K} (ni + 1) .                                          (8.4)

                         Route
    Link           1      2     · · ·     N      Number of channels
      1           d11    d12             d1N            n1
      2           d21    d22             d2N            n2
      ·
      K           dK1    dK2             dKN            nK

Table 8.2: In a circuit switched telecommunication network with direct routing, dij denotes the slot-size (bandwidth demand) of route j upon link i (cf. Tab. 12.3).

8.5.1 Convolution algorithm

The convolution algorithm described in Chap. 7 can be applied directly to networks with direct routing, because there is product form among the routes. The convolution becomes multi-dimensional, the dimension being the number of links in the network. The truncation of the state space becomes more complex, and the number of states increases considerably.

8.6 Load control and service protection

In a telecommunication network with many users competing for the same resources (multiple access) it is important to specify the service demands of the users and to ensure that the GoS is fulfilled under normal service conditions. In most systems it can be ensured that preferential subscribers (police, medical services, etc.) get higher priority than ordinary subscribers when they make call attempts. During normal traffic conditions we want to ensure that all subscribers for all types of calls (local, domestic, international) have approximately the same service level, e.g. 1 % blocking. During overload situations it should not happen that the call attempts of some groups of subscribers are completely blocked while other groups of subscribers at the same time experience low blocking. We aim at collective misery.

Historically, this has been fulfilled because of the decentralized structure and the application of limited accessibility (grading), which from a service protection point of view are still applicable and useful. Digital systems and networks have an increased complexity, and without preventive measures the carried traffic as a function of the offered traffic will typically have a form similar to the Aloha system (Fig. 3.6). To ensure that a system during overload continues to operate at maximum capacity, various strategies are introduced. In stored program controlled systems (exchanges) we may introduce call-gapping and allocate priorities to the tasks (Chap. 10). In telecommunication networks two strategies are common: trunk reservation and virtual channel protection.
[Figure: routes labelled "primary route = high usage route" (A to B direct), "single choice route", "service protecting route", and "last choice route" via the transit exchange T.]

Figure 8.1: Alternative traffic routing (cf. example 8.6.2). Traffic from A to B is partly carried on the direct route (primary route = high usage route), partly on the secondary route via the transit exchange T.

8.6.1 Trunk reservation

In hierarchical telecommunication networks with alternative routing we want to protect the primary traffic against overflow traffic. If we consider part of a network (Fig. 8.1), then the direct traffic A–T will compete with the overflow traffic from A–B for idle channels on the trunk group A–T. As the traffic A–B already has a direct route, we want to give the traffic A–T priority to the channels on the link A–T. This can be done by introducing trunk (channel) reservation. We allow the A–B traffic to access the A–T channels only if there are more than r channels idle on A–T (r = reservation parameter). In this way, the traffic A–T will get higher priority to the A–T channels. If all calls have the same mean holding time (1/μ1 = 1/μ2 = 1/μ) and PCT-I traffic with single-slot traffic, then we can easily set up a state transition diagram and find the blocking probabilities. If the individual traffic streams have different mean holding times, or if we consider Binomial & Pascal traffic, then we have to set up an N-dimensional state transition diagram which will be non-reversible. In some states, calls of a type having been accepted earlier in lower states may depart but not be accepted, and thus the process is non-reversible. We cannot apply the convolution algorithm developed in Sec. 7.4 to this case, but the generalized algorithm in Sec. 7.6.2 can easily be modified by letting the arrival rate of a stream be zero in the states where it is blocked, i.e. λi(x) = 0 when x ≥ n − ri.

An essential disadvantage of trunk reservation is that it is a local strategy which only considers one trunk group (link), not the total end-to-end connection. Furthermore, it is a one-way mechanism which protects one traffic stream against the other, but not vice versa. Therefore, it cannot be applied for mutual protection of connections and services in broadband networks.

Example 8.6.1: Guard channels
In a wireless mobile communication system we may ensure a lower blocking probability for hand-over calls than that experienced by new call attempts by reserving the last idle channel (called a guard channel) for hand-over calls.
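For equal mean holding times and single-slot PCT-I streams, the one-dimensional state transition diagram is easy to evaluate numerically. The sketch below assumes two streams: stream 1 is the protected traffic (e.g. the hand-over calls of the guard channel example), stream 2 is accepted only when fewer than n − r channels are busy. The function name and interface are our own:

```python
def trunk_reservation(n, r, a1, a2):
    """Time congestion for two PCT-I streams with trunk reservation.
    Stream 1 may use all n channels; stream 2 is accepted only when
    fewer than n - r channels are busy.  With equal mean holding times
    the total number of busy channels is a simple birth-death process."""
    q = [1.0]                                  # relative state probabilities
    for x in range(n):
        a = a1 + (a2 if x < n - r else 0.0)    # traffic accepted in state x
        q.append(q[-1] * a / (x + 1))
    total = sum(q)
    p = [v / total for v in q]
    b1 = p[n]                                  # stream 1 blocked only in state n
    b2 = sum(p[n - r:])                        # stream 2 blocked in states >= n - r
    return b1, b2
```

With r = 0 both streams see the ordinary Erlang B blocking of the total traffic; r = 1 corresponds to a single guard channel.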

8.6.2 Virtual channel protection

In a service-integrated system it is necessary to protect all services mutually against each other and to guarantee a certain grade-of-service. This can be obtained by (a) a certain minimum allocation of bandwidth which ensures a certain minimum service, and (b) a maximum allocation which both allows for the advantages of statistical multiplexing and ensures that a single service does not dominate. This strategy has the fundamental product form, and the state probabilities are insensitive to the service time distribution. Also, the GoS is guaranteed not only on a link basis, but end-to-end.

8.7 Moe's principle

Theorem 8.1 Moe's principle: the optimal resource allocation is obtained by a simultaneous balancing of marginal incomes and marginal costs over all sectors.

In this section we present the basic principles published by Moe in 1924. We consider a system with several sectors which consume resources (equipment) for producing items (traffic). The problem can be split into two parts:

a. Given that a limited amount of resources is available, how should we distribute these among the sectors?

b. How many resources should be allocated in total?

The principles are applicable in general to all kinds of production. In our case the resources correspond to cables and switching equipment, and the production consists of carried traffic. A sector may be a link to an exchange. The problem may be dimensioning of links between a certain exchange and its neighbouring exchanges to which there are direct connections. The problem then is:

a. How much traffic should be carried on each link, when a total fixed amount of traffic is carried?

b. How much traffic should be carried in total?

Question a is solved in Sec. 8.7.1 and question b in Sec. 8.7.2. We carry through the derivations for continuous variables because these are easier to work with. Similar derivations can be carried through for discrete variables, corresponding to a number of channels. This is Moe's principle (Jensen, 1950 [58]).

8.7.1 Balancing marginal costs

Let us assume that from a given exchange we have direct connections to k other exchanges. The cost of a connection to exchange i is assumed to be a linear function of the number of channels:

    Ci = c0i + ci · ni ,        i = 1, 2, . . . , k .               (8.5)

The total cost of cables then becomes:

    C(n1, n2, . . . , nk) = C0 + Σ_{i=1}^{k} ci ni ,                (8.6)

where C0 is a constant. The total carried traffic is a function of the number of channels:

    Y = f(n1, n2, . . . , nk) .                                     (8.7)

As we always operate with limited resources we will have:

    ∂f/∂ni = Di f > 0 .                                             (8.8)


In a pure loss system Di f corresponds to the improvement function, which is always positive for a finite number of channels because of the convexity of Erlang's B-formula. We want to minimize C for a given total carried traffic Y:

    min{C}    given    Y = f(n1, n2, . . . , nk) .                  (8.9)

By applying a Lagrange multiplier θ (shadow price), where we introduce G = C − θ f, this is equivalent to:

    min {G(n1, n2, . . . , nk)} = min {C(n1, n2, . . . , nk) − θ [f(n1, n2, . . . , nk) − Y]} .   (8.10)

A necessary condition for the minimum solution is:

    ∂G/∂ni = ci − θ ∂f/∂ni = ci − θ Di f = 0 ,    i = 1, 2, . . . , k ,   (8.11)

or

    D1 f / c1 = D2 f / c2 = · · · = Dk f / ck = 1/θ .               (8.12)

A necessary condition for the optimal solution is thus that the marginal increase of the carried traffic when increasing the number of channels (improvement function), divided by the cost of a channel, must be identical for all trunk groups (4.49). By means of second order derivatives it is possible to extend these necessary conditions to sufficient conditions, which is done in Moe's Principle (Jensen, 1950 [58]). The improvement functions we deal with will always fulfil these conditions. If we also have different incomes gi for the individual trunk groups (directions), then we have to include an additional weight factor, and in the result (8.12) we shall replace ci by ci/gi.

8.7.2 Optimum carried traffic

Let us consider the case where the carried traffic, which is a function of the number of channels (8.7), is Y. If we denote the revenue by R(Y) and the costs by C(Y) (8.6), then the profit becomes:

    P(Y) = R(Y) − C(Y) .                                            (8.13)

A necessary condition for optimal profit is:

    dP(Y)/dY = 0    ⇔    dR/dY = dC/dY ,                            (8.14)

i.e. the marginal income should be equal to the marginal cost. Using:

    P(n1, n2, . . . , nk) = R(f(n1, n2, . . . , nk)) − [ C0 + Σ_{i=1}^{k} ci ni ] ,   (8.15)

the optimal solution is obtained for:

    ∂P/∂ni = (dR/dY) · Di f − ci = 0 ,    i = 1, 2, . . . , k ,     (8.16)

which by using (8.12) gives:

    dR/dY = θ .                                                     (8.17)

The factor θ given by (8.12) is the ratio between the cost of one channel and the traffic which can be carried additionally if the link is extended by one channel. Thus we shall add channels to a link until the marginal income equals the marginal cost (4.51).
Example 8.7.1: Optimal capacity allocation
We consider two links (trunk groups) where the offered traffic is 3 erlang, respectively 15 erlang. The channels of the two systems have the same cost and there is a total of 25 channels available. How should we distribute the 25 channels among the two links? From (8.12) we notice that the improvement functions should have the same values for the two directions. Therefore we proceed using a table:

      A1 = 3 erlang              A2 = 15 erlang
    n1     F1,n(A1)            n2     F1,n(A2)
     3      0.4201             17      0.4048
     4      0.2882             18      0.3371
     5      0.1737             19      0.2715
     6      0.0909             20      0.2108
     7      0.0412             21      0.1573

For n1 = 5 and n2 = 20 we use all 25 channels. This results in a congestion of 11.0 %, respectively 4.6 %, i.e. higher congestion for the smaller trunk group.
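The table values are the improvement function F1,n(A) = A · (E1,n(A) − E1,n+1(A)) (cf. (4.49)), and because this function is decreasing in n (the convexity of Erlang's B-formula), the balancing in (8.12) can be carried out by greedy marginal allocation. A Python sketch with our own function names; costs are assumed equal as in the example:

```python
def erlang_b(n, a):
    """Erlang's B-formula E1,n(A) by the standard recursion."""
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e

def improvement(n, a):
    """F1,n(A): extra carried traffic from adding one channel to n."""
    return a * (erlang_b(n, a) - erlang_b(n + 1, a))

def allocate(offered, total_channels, costs=None):
    """Greedy marginal allocation: give the next channel to the link
    with the largest improvement per unit cost (Moe's principle)."""
    costs = costs or [1.0] * len(offered)
    n = [0] * len(offered)
    for _ in range(total_channels):
        i = max(range(len(offered)),
                key=lambda k: improvement(n[k], offered[k]) / costs[k])
        n[i] += 1
    return n
```

Here `allocate([3, 15], 25)` reproduces the split n1 = 5, n2 = 20 of the example.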

Example 8.7.2: Triangle optimization
This is a classical optimization of a triangle network using alternative traffic routing (Fig. 8.1). From A to B we have a traffic demand equal to A erlang. The traffic is partly carried on the direct route (primary route) from A to B, partly on an alternative route (secondary route) A–T–B, where T is a transit exchange. There are no other routing possibilities. The cost of a direct connection is cd, and the cost of a secondary connection is ct. How much traffic should be carried in each of the two directions? The route A–T–B already carries traffic to and from other destinations, and we denote the marginal utilization of a channel on this route by a. We assume it is independent of the additional traffic which is blocked from A–B. According to (8.12), the minimum conditions become:

    F1,n(A)/cd = a/ct .

Here, n is the number of channels in the primary route. This means that the costs should be the same when we route an additional call via the direct route and via the alternative route. If one route were cheaper than the other, then we would route more traffic in the cheaper direction.
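One reasonable reading of the balance condition is a sizing rule for the primary route: add direct channels as long as the marginal gain per unit cost on the direct route exceeds that of the transit route. The function names and the stopping rule below are our own illustrative choices:

```python
def erlang_b(n, a):
    """Erlang's B-formula by the standard recursion."""
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e

def improvement(n, a):
    """F1,n(A): extra traffic carried by channel number n + 1."""
    return a * (erlang_b(n, a) - erlang_b(n + 1, a))

def primary_route_size(a_offered, c_d, c_t, marginal_util):
    """Smallest n where F1,n(A)/c_d drops to (or below) a/c_t."""
    n = 0
    while improvement(n, a_offered) / c_d > marginal_util / c_t:
        n += 1
    return n
```

For example, with A = 3 erlang, equal channel costs and a marginal transit utilization of 0.25 erlang per channel, the rule gives a primary route of 5 channels.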

As the traffic values applied as a basis for dimensioning are obtained by traffic measurements, they are encumbered with unreliability due to a limited sample, limited measuring period, the measuring principle, etc. As shown in Chap. 13 the unreliability is approximately proportional to the measured traffic volume. By measuring the same time period for all links we get the highest uncertainty for small links (trunk groups), which is partly compensated by the above-mentioned overload sensitivity, which is smallest for small trunk groups. As a representative value we typically choose the measured mean value plus the standard deviation multiplied by a constant, e.g. 1.0. It should further be emphasized that we dimension the network for the traffic which is to be carried 1–2 years from now. The value used for dimensioning is thus additionally encumbered with a forecast uncertainty. We have not included the fact that part of the equipment may be out of operation because of technical errors.

ITU-T recommends that the traffic is measured during all busy hours of the year, and that we choose n so that, using the mean value A30 of the 30 largest observations and the mean value A5 of the 5 largest observations, we get the following blocking probabilities:

    En(A30) ≤ 0.01 ,
                                                                    (8.18)
    En(A5)  ≤ 0.07 .

The above service criteria can be applied directly to the individual trunk groups. In practice, we aim at a blocking probability from A-subscriber to B-subscriber which is the same for all types of calls. With stored program controlled exchanges the trend is a continuous supervision of the traffic on all expensive and international routes.

In conclusion, we may say that the traffic value used for dimensioning is encumbered with uncertainty. In large trunk groups the application of a non-representative traffic value may result in serious consequences for the grade-of-service level. During later years there has been an increasing interest in adaptive traffic controlled routing (traffic network management), which can be introduced in stored program controlled digital systems. By this technology we may in principle choose the optimal strategy for traffic routing during any traffic scenario.
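The criterion (8.18) translates into a small search: increase n until both blocking conditions hold. A sketch, assuming A30 and A5 are the measured mean values and using our own function names:

```python
def erlang_b(n, a):
    """Erlang's B-formula by the standard recursion."""
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e

def dimension(a30, a5):
    """Smallest n with E_n(A30) <= 0.01 and E_n(A5) <= 0.07 (Eq. 8.18)."""
    n = 1
    while erlang_b(n, a30) > 0.01 or erlang_b(n, a5) > 0.07:
        n += 1
    return n
```

By construction the returned n is the smallest number of channels fulfilling both criteria; n − 1 channels violate at least one of them.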

Chapter 9

Markovian queueing systems

In this chapter we consider traffic to a system with n identical servers, full accessibility, and a queue with an infinite number of waiting positions. When all n servers are busy, an arriving customer joins the queue and waits until a server becomes idle. No customers can be in queue when a server is idle (full accessibility). We consider the same two traffic models as in Chaps. 4 & 5.

1. Poisson arrival process (an infinite number of sources) and exponentially distributed service times (PCT-I). This is the most important queueing system, called Erlang's delay system. In this system the carried traffic will be equal to the offered traffic, as no customers are blocked. The probability of delay, mean queue length, mean waiting time, carried traffic per channel, and improvement functions are dealt with in Sec. 9.2. In Sec. 9.3 Moe's principle is applied for optimizing the system. The waiting time distribution is derived for the basic queueing discipline, First-Come First-Served (FCFS), in Sec. 9.4. In Sec. 9.5 we summarize the results for the important single-server system M/M/1.

2. A limited number of sources and exponentially distributed service times (PCT-II). This is Palm's machine repair model (the machine interference problem), which is dealt with in Sec. 9.6. This model is widely applied for dimensioning of computer systems, terminal systems, flexible manufacturing systems (FMS), etc. Palm's machine repair model is optimized in Sec. 9.7. The waiting time distribution for Palm's model with FCFS queueing discipline is derived in Sec. 9.8.

9.1 Erlang's delay system M/M/n

Let us consider a queueing system M/M/n with a Poisson arrival process (M), exponential service times (M), n servers, and an infinite number of waiting positions. The state of the system is defined as the total number of customers in the system (either being served or

waiting in the queue). We are interested in the steady-state probabilities of the system. By the procedure described in Sec. 4.4 we set up the state transition diagram shown in Fig. 9.1 (states 0, 1, . . . , n, n + 1, . . . ; the arrival rate is λ in every state, and the departure rate is i·μ in state i for i ≤ n and n·μ for i ≥ n).

Figure 9.1: State transition diagram of the M/M/n delay system having n servers and an unlimited number of waiting positions.

Assuming statistical equilibrium, the cut equations become:

    λ p(0)     = μ p(1) ,
    λ p(1)     = 2μ p(2) ,
        · · ·
    λ p(i)     = (i + 1)μ p(i + 1) ,
        · · ·                                                       (9.1)
    λ p(n − 1) = nμ p(n) ,
    λ p(n)     = nμ p(n + 1) ,
        · · ·
    λ p(n + j) = nμ p(n + j + 1) .
        · · ·
By normalization of the state probabilities we obtain p(0):

    1 = Σ_{i=0}^{∞} p(i) .

As A = λ/μ is the offered traffic, we get:

    p(i) = p(0) · A^i / i! ,                                    0 ≤ i ≤ n ,
                                                                    (9.2)
    p(i) = p(n) · (A/n)^{i−n} = p(0) · A^i / (n! · n^{i−n}) ,   i ≥ n ,
so that the normalization condition becomes:

    1 = p(0) · { 1 + A/1 + A^2/2! + · · · + (A^n/n!) · [ 1 + A/n + (A/n)^2 + · · · ] } .

The innermost brackets contain a geometric progression with quotient A/n. Statistical equilibrium is only obtained for:

    A < n .                                                         (9.3)

Otherwise, the queue will continue to increase towards infinity. We obtain:

    p(0) = 1 / { Σ_{i=0}^{n−1} A^i/i! + (A^n/n!) · n/(n − A) } ,    A < n ,   (9.4)
and equations (9.2) and (9.4) yield the steady-state probabilities p(i), i > 0.

Figure 9.2: Erlang's C-formula for the delay system M/M/n. The probability E2,n(A) for a positive waiting time is shown as a function of the offered traffic A for different values of the number of servers n.


9.2 Traffic characteristics of delay systems

For evaluation of the performance of the system, several characteristics have to be considered. They are expressed in terms of the steady-state probabilities.

9.2.1 Erlang's C-formula

The stationary Poisson arrival process is independent of the state of the system, and therefore the probability that an arbitrary arriving customer has to wait in the queue is equal to the proportion of time all servers are occupied (PASTA property: Poisson Arrivals See Time Averages). The waiting time is a random variable denoted by W. For an arbitrary arriving customer we have:

    E2,n(A) = p{W > 0} = Σ_{i=n}^{∞} p(i) / Σ_{i=0}^{∞} p(i) = p(n) · n/(n − A) .   (9.5)

Inserting the state probabilities we obtain Erlang's C-formula (1917):

    E2,n(A) = [ (A^n/n!) · n/(n − A) ]
              / [ 1 + A/1 + A^2/2! + · · · + A^{n−1}/(n−1)! + (A^n/n!) · n/(n − A) ] ,   A < n .   (9.6)

This probability of delay depends only upon A = λ/μ, not upon the parameters λ and μ individually. The formula has several names: Erlang's C-formula, Erlang's second formula, or Erlang's formula for waiting time systems. It has various notations in the literature:

    E2,n(A) = D = Dn(A) = p{W > 0} .

As customers are either served immediately or put into the queue, the probability that a customer is served immediately becomes:

    Sn = 1 − E2,n(A) = p(0) + p(1) + · · · + p(n − 1) .


The carried traffic Y equals the offered traffic A, as no customers are rejected and the arrival process is a Poisson process:

    Y = Σ_{i=1}^{n} i · p(i) + Σ_{i=n+1}^{∞} n · p(i)

      = A · { Σ_{i=1}^{n} p(i − 1) + Σ_{i=n+1}^{∞} p(i − 1) }       (9.7)

      = A · Σ_{i=0}^{∞} p(i)

      = A .

Here we have exploited the cut balance equation between state [i − 1] and state [i]. The queue length is a random variable L. The probability of having customers in the queue at a random point of time is:

    p{L > 0} = Σ_{i=n+1}^{∞} p(i) = p(n) · Σ_{j=1}^{∞} (A/n)^j ,

    p{L > 0} = A/(n − A) · p(n) = (A/n) · E2,n(A) ,                 (9.8)

where we have used (9.5).

9.2.2 Numerical evaluation

Erlang's C-formula (9.6) is similar to Erlang's B-formula (4.10) except for the factor n/(n − A) in the last term. As we have a very accurate recursive algorithm for numerical evaluation of Erlang's B-formula (4.29), we use the following relationship for obtaining numerical values of the C-formula:

    E2,n(A) = n · E1,n(A) / ( n − A · (1 − E1,n(A)) ) = E1,n(A) / (1 − y) ,    A < n ,   (9.9)

where y is the carried traffic per channel in the corresponding loss system (4.13):

    y = A · {1 − E1,n(A)} / n > 0 .

We notice that:

    E1,n(A) < E2,n(A) .
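Combining the recursion for Erlang's B-formula with (9.9) gives a numerically stable evaluation of the C-formula. A Python sketch (function names are our own):

```python
def erlang_b(n, a):
    """E1,n(A): Erlang's B-formula by the standard recursion (4.29)."""
    e = 1.0
    for i in range(1, n + 1):
        e = a * e / (i + a * e)
    return e

def erlang_c(n, a):
    """E2,n(A): probability of delay from Eq. (9.9); requires a < n."""
    e1 = erlang_b(n, a)
    return n * e1 / (n - a * (1 - e1))
```

For example, erlang_c(5, 3) ≈ 0.2362, and erlang_b(n, a) < erlang_c(n, a) as noted above.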

[Figure: utilization y per channel (axis 0.0–1.0) for Erlang's delay system; curves labelled E2(A).]
. . . . . . .. . . . . . . . . .. . . . .. .. . . . . . . .. ... .. .. ... . . . . .. ... ... ... ... . . . . . . . . . . . .. . .. . . . . . . .. .. .. . . . . . . . . .. . . . . .. .. .. .. .. . . . . .. .. .. ... ... . . . . . . . . .. . . . . . . . . .. . . . . . . . . . . .. .. .. . . . . . . .. .. .. ... .. ... . . . .. .. .. ... ... ... . . . . . . . . . . . . . . . .. .. .. .. .. . . . . . . . . . . . . . . . . .. . . . . . . . . .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . .. . .. . . . . . . . ... . . . . . . . . . .. ...... ..... . . . .. .. ...... ...... . . ... . ... . . . . . . . . . . . . .. . . . . . . . . . .. . . . . . . . .. .. . . . . . . . .. . . . . . . ... . .. . . . . .... ...... . . . . .... ....... . .... ........ .. . ................. .. . .................. ... ..... . . ......... . ...

0.5 0.2 0.1 0.05 0.02 0.01 0.005 0.002 0.001

12

16

20

24 28 Oered trac A

Figure 9.3: The average utilization per channel y for a fixed probability of delay E2,n(A) as a function of the number of channels n.

For A ≥ n, we have E2,n(A) = 1 as all customers are delayed. By using the general approach described in Sec. 4.4.1 we observe from the denominator of (9.6) that the first terms for state [0] to state [n−1] are the same as for Erlang's loss system. The last term, which includes all states from [n] to infinity, is obtained from the term of state [n−1] by multiplying by

$$\frac{A}{n} \cdot \frac{n}{n-A} = \frac{A}{n-A}\,.$$

So a direct recursion is obtained by using the recursion for the Erlang-B formula up to state [n−1] and then finding E2,n(A) by the final step:

$$E_{2,n}(A) = \frac{\dfrac{A}{n-A}\,E_{1,n-1}(A)}{1 + \dfrac{A}{n-A}\,E_{1,n-1}(A)}\,, \qquad (9.9)$$

$$E_{2,n}(A) = \frac{A\,E_{1,n-1}(A)}{n - A\,\{1 - E_{1,n-1}(A)\}}\,. \qquad (9.10)$$

Thus we use the same recursion as for the Erlang-B formula except for the last step. The two formulas (9.9) and (9.10) are of course equivalent, but the last one requires one iteration less. Erlang's C-formula may in an elegant way be expressed by the B-formula, as noticed by B. Sanders:

$$\frac{1}{E_{2,n}(A)} = \frac{1}{E_{1,n}(A)} - \frac{1}{E_{1,n-1}(A)}\,. \qquad (9.11)$$

Erlang's C-formula has been tabulated in many books and tables, i.a. in Moe's Principle (Jensen, 1950 [58]) and is shown in Fig. 9.2, Fig. 9.3, and Fig. 9.4. We notice that for a given value of E2,n(A), the utilization of each channel increases as the number of channels n increases (economy of scale).
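As a numerical sketch (the function names `erlang_b` and `erlang_c` are my own, not from the text), the Erlang-B recursion combined with the final step (9.10) can be implemented as:

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A) by the standard recursion
    E_{1,x}(A) = A*E_{1,x-1}(A) / (x + A*E_{1,x-1}(A)), E_{1,0}(A) = 1."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_c(A, n):
    """Erlang's C-formula E_{2,n}(A): run the B-recursion up to
    state [n-1], then apply the final step (9.10)."""
    if A >= n:
        return 1.0          # all customers are delayed
    Eb = erlang_b(A, n - 1)
    return A * Eb / (n - A * (1.0 - Eb))
```

The Sanders identity (9.11) provides a convenient consistency check of any implementation, since it holds exactly for the recursively computed values.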

9.2.3 Mean queue lengths

We distinguish between the queue length at an arbitrary point of time and the queue length when there are customers waiting in the queue.

Mean queue length at a random point of time

The queue length L at an arbitrary point of time is called the virtual queue length. This is the queue length experienced by an arbitrary customer as the PASTA property is valid due to the Poisson arrival process (time average = call average). We obtain the mean queue length Ln = E{L} at an arbitrary point of time from the state probabilities:

$$L_n = \sum_{i=0}^{n} 0 \cdot p(i) + \sum_{i=n+1}^{\infty} (i-n)\,p(i) = \sum_{i=n+1}^{\infty} (i-n)\,p(n)\left(\frac{A}{n}\right)^{i-n}\,.$$

Figure 9.4: Erlang's C-formula for the delay system M/M/n. The probability E2,n(A) for a positive waiting time is shown as a function of the offered traffic A/n per channel for different values of the number of servers n. This figure is a re-scaling of Fig. 9.2.

$$L_n = p(n) \sum_{i=1}^{\infty} i \left(\frac{A}{n}\right)^{i} = p(n)\,\frac{A}{n}\sum_{i=1}^{\infty} \frac{\mathrm{d}}{\mathrm{d}(A/n)}\left(\frac{A}{n}\right)^{i}\,.$$

As we have A/n ≤ c < 1, the series is uniformly convergent, and the differentiation operator may be put outside the summation:

$$L_n = p(n)\,\frac{A}{n}\,\frac{\mathrm{d}}{\mathrm{d}(A/n)}\left\{\frac{A/n}{1-(A/n)}\right\} = p(n)\,\frac{A/n}{\{1-(A/n)\}^{2}} = p(n)\,\frac{A}{n-A}\cdot\frac{n}{n-A}\,,$$

$$L_n = E_{2,n}(A)\,\frac{A}{n-A}\,. \qquad (9.12)$$

9.2. TRAFFIC CHARACTERISTICS OF DELAY SYSTEMS


The average queue length is the traffic carried by the queueing positions and therefore it is also called the waiting time traffic.
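As a cross-check, the closed form of (9.12) can be compared with a direct summation over truncated state probabilities. This is only an illustrative sketch; the function name and the truncation point are my own choices:

```python
import math

def mmn_state_probs(A, n, i_max=2000):
    """Truncated state probabilities of the M/M/n delay system:
    p(i) proportional to A^i/i! for i <= n and to p(n)*(A/n)^(i-n)
    for i > n; normalized over [0, i_max]."""
    q = [A ** i / math.factorial(i) for i in range(n + 1)]
    for i in range(n + 1, i_max + 1):
        q.append(q[n] * (A / n) ** (i - n))
    total = sum(q)
    return [x / total for x in q]

A, n = 8.0, 10
p = mmn_state_probs(A, n)
E2 = sum(p[n:])                                   # probability of delay
L_direct = sum((i - n) * p[i] for i in range(n + 1, len(p)))
L_formula = E2 * A / (n - A)                      # eq. (9.12)
```

The truncation error is negligible here because the geometric tail (A/n)^(i-n) decays rapidly for A < n.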

Mean queue length, given the queue is greater than zero

The time average is also in this case equal to the call average. The conditional mean queue length becomes:

$$L_{nq} = \frac{\sum_{i=n+1}^{\infty}(i-n)\,p(i)}{\sum_{i=n+1}^{\infty} p(i)} = \frac{p(n)\,\dfrac{A}{n-A}\,\dfrac{n}{n-A}}{p(n)\,\dfrac{A}{n-A}} = \frac{n}{n-A}\,. \qquad (9.13)$$

By applying (9.8) and (9.12), this is of course the same as:

$$L_{nq} = \frac{L_n}{p\{L > 0\}}\,,$$

where L is the random variable for the queue length.

9.2.4 Mean waiting times

Also here two items are of interest: the mean waiting time W for all customers, and the mean waiting time w for customers experiencing a positive waiting time. The first one is an indicator for the service level of the whole system, whereas the second one is of importance for the customers which are delayed. Time averages will be equal to call averages because of the PASTA property.

Mean waiting time W for all customers

Little's theorem tells that the average queue length is equal to the arrival intensity multiplied by the mean waiting time:

$$L_n = \lambda\,W_n\,, \qquad (9.14)$$

where Ln = Ln(A) and Wn = Wn(A). Inserting Ln from (9.12) we get:

$$W_n = \frac{L_n}{\lambda} = \frac{1}{\lambda}\,E_{2,n}(A)\,\frac{A}{n-A}\,.$$


CHAPTER 9. MARKOVIAN QUEUEING SYSTEMS

As A = λs, where s = 1/μ is the mean service time, we get:

$$W_n = E_{2,n}(A)\,\frac{s}{n-A}\,. \qquad (9.15)$$

Mean waiting time w for delayed customers

The total waiting time is constant and may either be averaged over all customers (Wn) or only over the customers which experience a positive waiting time wn (2.28):

$$W_n = w_n\,E_{2,n}(A)\,, \qquad (9.16)$$

$$w_n = \frac{s}{n-A}\,. \qquad (9.17)$$

Example 9.2.1: Mean waiting time w when A → 0
Notice that as A → 0, we get wn = s/n (9.17). If a customer experiences a waiting time (which seldom happens when A → 0), then this customer will be the only one in the queue. The customer must wait until a server becomes idle. This happens after an exponentially distributed time interval with mean value s/n. So wn never becomes less than s/n. 2
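The mean-value formulas above are straightforward to evaluate numerically. The following self-contained sketch (function names are my own) computes Ln, Wn and wn, and lets one verify Little's law (9.14) and relation (9.16):

```python
def erlang_c(A, n):
    """Erlang's C-formula via the Erlang-B recursion (valid for A < n)."""
    E = 1.0
    for x in range(1, n):
        E = A * E / (x + A * E)
    return A * E / (n - A * (1.0 - E))

def mmn_means(lam, s, n):
    """Mean values for M/M/n with offered traffic A = lam*s (erlang):
    mean queue length L_n (9.12), mean waiting time W_n for all
    customers (9.15), and w_n for delayed customers (9.17)."""
    A = lam * s
    E2 = erlang_c(A, n)
    L = E2 * A / (n - A)        # (9.12)
    W = E2 * s / (n - A)        # (9.15)
    w = s / (n - A)             # (9.17)
    return L, W, w
```

For example, with λ = 8 customers per time unit, s = 1 and n = 10 servers (A = 8 erlang), one finds w = s/(n−A) = 0.5 time units.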

9.2.5 Improvement functions for M/M/n

The marginal improvement when we add one server can be expressed in several ways:

The decrease in the proportion of total traffic (= the proportion of all customers) that experiences delay is given by:

$$F_{2,n}(A) = A\,\{E_{2,n}(A) - E_{2,n+1}(A)\}\,. \qquad (9.18)$$

The decrease in mean queue length (traffic carried by the waiting positions) becomes:

$$F_{L,n}(A) = L_n(A) - L_{n+1}(A)\,. \qquad (9.19)$$

The decrease in mean waiting time Wn(A) for all customers:

$$F_{W,n}(A) = W_n(A) - W_{n+1}(A) = \frac{1}{\lambda}\,F_{L,n}(A)\,, \qquad (9.20)$$

where we have used Little's law (9.14). If we choose the mean service time as time unit, then λ = A. We consider Wn(A) below.

Both (9.18) and (9.19) are tabulated in Moe's Principle (Jensen, 1950 [58]) and are simple to evaluate by a calculator or computer.
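A sketch of how the improvement functions (9.18)-(9.20) may be evaluated by computer (function names are my own; `erlang_c` is the standard Erlang-B-based evaluation, valid for A < n):

```python
def erlang_c(A, n):
    """Erlang's C-formula via the Erlang-B recursion (valid for A < n)."""
    E = 1.0
    for x in range(1, n):
        E = A * E / (x + A * E)
    return A * E / (n - A * (1.0 - E))

def F2(A, n):
    """(9.18): decrease in the traffic experiencing delay."""
    return A * (erlang_c(A, n) - erlang_c(A, n + 1))

def FL(A, n):
    """(9.19): decrease in mean queue length, with L_n = E_{2,n}*A/(n-A)."""
    return erlang_c(A, n) * A / (n - A) - erlang_c(A, n + 1) * A / (n + 1 - A)

def FW(A, n, lam):
    """(9.20): decrease in mean waiting time, F_W = F_L / lam."""
    return FL(A, n) / lam
```

By construction FW equals the direct difference Wn − Wn+1, and the improvements diminish as n grows (economy of scale in reverse).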

9.3 Moe's principle for delay systems

Moe first derived his principle for queueing systems. He studied the subscribers' waiting times for an operator at the manual exchanges of the Copenhagen Telephone Company. Let us consider k independent queueing systems. A customer being served by all k systems has the total average waiting time W = Σi Wi, where Wi is the mean waiting time of the ith system, which has ni servers and is offered the traffic Ai. The cost of a channel is ci, eventually plus a constant cost, which is included in the constant C0 below. Thus the total cost of channels becomes:

$$C = C_0 + \sum_{i=1}^{k} n_i\,c_i\,.$$

If the waiting time also is considered as a cost, then the total costs to be minimized become f = f(n1, n2, ..., nk). This is to be minimized as a function of the number of channels ni in the individual systems. The allocation of channels to the individual systems is determined by:

$$\min f(n_1, n_2, \ldots, n_k) = \min\left\{C_0 + \sum_i n_i\,c_i + \lambda\,\theta \sum_i W_i\right\}\,, \qquad (9.21)$$

where θ (theta) is Lagrange's multiplier (shadow price). As ni are integers, a necessary condition for minimum, which in this case can be shown also to be a sufficient condition, becomes:

$$0 < f(n_1, \ldots, n_i-1, \ldots, n_k) - f(n_1, \ldots, n_i, \ldots, n_k)\,,$$
$$0 \le f(n_1, \ldots, n_i, \ldots, n_k) - f(n_1, \ldots, n_i+1, \ldots, n_k)\,, \qquad (9.22)$$

which corresponds to:

$$\lambda\,\theta\,\{W_{n_i-1}(A_i) - W_{n_i}(A_i)\} > c_i\,, \qquad \lambda\,\theta\,\{W_{n_i}(A_i) - W_{n_i+1}(A_i)\} \le c_i\,, \qquad (9.23)$$

where Wni(Ai) is given by (9.15). Expressed by the improvement function for the waiting time FW,n(A) (9.20) the optimal solution becomes:

$$F_{W,n_i-1}(A) > \frac{c_i}{\lambda\,\theta} \ge F_{W,n_i}(A)\,, \qquad i = 1, 2, \ldots, k\,. \qquad (9.24)$$

The function FW,n(A) is tabulated in Moe's Principle (Jensen, 1950 [58]). Similar optimizations can be carried out for other improvement functions.
Example 9.3.1: Delay system
We consider two different M/M/n queueing systems. The first one has a mean service time of 100 s and the offered traffic is 20 erlang. The cost ratio c1/θ is equal to 0.01. The second system has a mean service time equal to 10 s and the offered traffic is 2 erlang. The cost ratio equals c2/θ = 0.1. A table of the improvement function FW,n(A) gives n1 = 32 channels and n2 = 5 channels. The mean waiting times are: W1 = 0.075 s, W2 = 0.199 s. This shows that a customer, who is served at both systems, experiences a total mean waiting time equal to 0.274 s, and that the system with fewer channels contributes more to the mean waiting time. 2
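The allocation rule (9.24) is easy to automate. The sketch below (function names and the starting point of the search are my own choices) reproduces the numbers of the example:

```python
def erlang_c(A, n):
    """Erlang's C-formula via the Erlang-B recursion (valid for A < n)."""
    E = 1.0
    for x in range(1, n):
        E = A * E / (x + A * E)
    return A * E / (n - A * (1.0 - E))

def mean_wait(A, n, s):
    """W_n by (9.15)."""
    return erlang_c(A, n) * s / (n - A)

def moe_channels(A, s, cost_ratio):
    """Smallest n fulfilling (9.24): F_{W,n-1} > c/(lam*theta) >= F_{W,n},
    where lam = A/s and cost_ratio = c/theta."""
    lam = A / s
    threshold = cost_ratio / lam
    n = int(A) + 1                     # smallest stable number of servers
    while mean_wait(A, n, s) - mean_wait(A, n + 1, s) > threshold:
        n += 1
    return n
```

With the data of the example this yields 32 and 5 channels with mean waiting times of about 0.075 s and 0.199 s, respectively.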

The cost of waiting is related to the cost ratio. By investing one monetary unit more in the above system, we reduce the costs by the same amount independent of in which queueing system we increase the investment (capacity). We should go on investing more as long as we make a profit. Moe's investigations during the 1920's showed that the mean waiting time for subscribers at small exchanges with few operators should be larger than the mean waiting time at larger exchanges with many operators.

9.4 Waiting time distribution for M/M/n, FCFS

Queueing systems, where the service discipline only depends upon the arrival times, all have the same mean waiting times. In this case the strategy has only influence upon the distribution of waiting times among the individual customers. The derivation of the waiting time distribution is simple in the case of an ordered queue, FCFS = First-Come First-Served. This discipline is also called FIFO, First-In First-Out. Customers arriving first to the system will be served first, but if there are multiple servers they may not necessarily leave the system first. So FIFO refers to the time for leaving the queue and initiating service. Let us consider an arbitrary customer. Upon arrival to the system, the customer is either served immediately or has to wait in the queue (9.6).

We now assume that the customer considered has to wait in the queue, i.e. the system may be in state [n + k], (k = 0, 1, 2, ...), where k is the number of occupied waiting positions just before the arrival of the customer. Our customer has to wait until k + 1 customers have completed their service before an idle server becomes accessible. When all n servers are working, the system completes customers with a constant rate nμ, i.e. the departure process is a Poisson process with this intensity. We exploit the relationship between the number representation and the interval representation (3.4): The probability p{W ≤ t} = F(t) of experiencing a positive waiting time less than or equal to t is equal to the probability that in a Poisson process with intensity (nμ) at least (k+1) customers depart during the interval t (3.21):

$$F(t \mid k) = \sum_{i=k+1}^{\infty} \frac{(n\mu t)^{i}}{i!}\,e^{-n\mu t}\,. \qquad (9.25)$$

The above was based on the assumption that our customer has to wait in the queue. The conditional probability that our customer, when arriving, observes all n servers busy and k waiting customers (k = 0, 1, 2, ...) is:

$$p_w(k) = \frac{p(n+k)}{\sum_{i=0}^{\infty} p(n+i)} = \frac{p(n)\,(A/n)^{k}}{p(n)\sum_{i=0}^{\infty}(A/n)^{i}} = \left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{k}\,, \qquad k = 0, 1, \ldots\,. \qquad (9.26)$$

This is a geometric distribution including the zero class (Tab. 3.1). The unconditional waiting time distribution then becomes:

$$F(t) = \sum_{k=0}^{\infty} p_w(k)\,F(t \mid k)\,, \qquad (9.27)$$

$$F(t) = \sum_{k=0}^{\infty}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{k} \sum_{i=k+1}^{\infty} \frac{(n\mu t)^{i}}{i!}\,e^{-n\mu t} = e^{-n\mu t} \sum_{i=1}^{\infty} \frac{(n\mu t)^{i}}{i!} \sum_{k=0}^{i-1}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{k}\,,$$

as we may interchange the two summations when all terms are positive probabilities. The inner summation is a geometric progression:

$$\sum_{k=0}^{i-1}\left(1-\frac{A}{n}\right)\left(\frac{A}{n}\right)^{k} = \left(1-\frac{A}{n}\right)\frac{1-(A/n)^{i}}{1-(A/n)} = 1-\left(\frac{A}{n}\right)^{i}\,.$$

Inserting this we obtain:

$$F(t) = e^{-n\mu t} \sum_{i=1}^{\infty} \frac{(n\mu t)^{i}}{i!}\left\{1-\left(\frac{A}{n}\right)^{i}\right\}$$

$$= e^{-n\mu t}\left\{\sum_{i=0}^{\infty} \frac{(n\mu t)^{i}}{i!} - \sum_{i=0}^{\infty} \frac{(n\mu t \cdot A/n)^{i}}{i!}\right\}$$

$$= e^{-n\mu t}\,e^{n\mu t} - e^{-n\mu t}\,e^{n\mu t\,A/n}\,,$$

$$F(t) = 1 - e^{-(n-A)\mu t} = 1 - e^{-(n\mu-\lambda)t}\,, \qquad n > A\,, \quad t > 0\,, \qquad (9.28)$$

i.e. an exponential distribution.

Apparently we have a paradox: when arriving at a system with all servers busy one may:

1. Count the number k of waiting customers ahead. The total waiting time will then be Erlang(k+1) distributed.

2. Close the eyes. Then the waiting time becomes exponentially distributed.

The interpretation of this is that a weighted sum of Erlang distributions with geometrically distributed weight factors is equivalent to an exponential distribution. In Fig. 9.6 the phase diagram for (9.27) is shown, and we notice immediately that it can be reduced to a single exponential distribution (Sec. 2.3.3 & Fig. 2.12). Formula (9.28) confirms that the mean waiting time wn for customers who have to wait in the queue becomes as shown in (9.17). The waiting time distribution for all customers (an arbitrary customer) becomes (2.27):

$$F_s(t) = 1 - E_{2,n}(A)\,e^{-(n-A)\mu t}\,, \qquad A < n\,, \quad t \ge 0\,, \qquad (9.29)$$

and the mean value of this distribution is Wn in agreement with (9.15). The results may be derived in an easier way by means of generating functions.
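The distribution (9.29) may be checked numerically: at t = 0 it gives the probability 1 − E2,n(A) of no delay, and the integral of its survival function gives Wn of (9.15). A sketch (function names are my own):

```python
import math

def erlang_c(A, n):
    """Erlang's C-formula via the Erlang-B recursion (valid for A < n)."""
    E = 1.0
    for x in range(1, n):
        E = A * E / (x + A * E)
    return A * E / (n - A * (1.0 - E))

def wait_cdf(t, A, n, s=1.0):
    """(9.29): waiting time distribution of an arbitrary customer,
    F_s(t) = 1 - E_{2,n}(A) * exp(-(n - A) * t / s), with mu = 1/s."""
    return 1.0 - erlang_c(A, n) * math.exp(-(n - A) * t / s)

A, n, s = 8.0, 10, 1.0
# Mean waiting time as the integral of the survival function,
# compared with W_n = E_{2,n}(A) * s / (n - A) from (9.15).
dt = 1e-4
W_num = sum((1.0 - wait_cdf(i * dt, A, n, s)) * dt for i in range(200_000))
W_formula = erlang_c(A, n) * s / (n - A)
```

The simple left Riemann sum suffices here because the survival function decays exponentially.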

Figure 9.5: Density function for the waiting time distribution for the queueing disciplines FCFS, LCFS, and SIRO (RANDOM). For all three cases the mean waiting time for delayed calls is 5 time units. The form factor is 2 for FCFS, 3.33 for LCFS, and 10 for SIRO. The number of servers is 10 and the offered traffic is 8 erlang. The mean service time is s = 10 time units.

9.5 Single server queueing system M/M/1

This is the system appearing most often in the literature. The state probabilities (9.2) are given by a geometric series:

$$p(i) = (1-A)\,A^{i}\,, \qquad i = 0, 1, 2, \ldots\,, \qquad (9.30)$$

as p(0) = 1−A. The mean value of the state probabilities is m1 = A/(1−A). The probability of delay becomes E2,1(A) = A.

Figure 9.6: The waiting time distribution for M/M/n-FCFS becomes exponentially distributed with intensity (nμ−λ). The phase diagram to the left corresponds to a weighted sum of Erlang-k distributions (Sec. 2.3.3) as the termination rate out of all phases is nμ(1 − A/n) = nμ − λ.

The mean queue length Ln (9.12) and the mean waiting time for all customers Wn (9.15) become:

$$L_1 = \frac{A^{2}}{1-A}\,, \qquad (9.31)$$

$$W_1 = \frac{A\,s}{1-A}\,. \qquad (9.32)$$
From this we observe that an increase in the offered traffic results in an increase of Ln by the third power, independent of whether the increase is due to an increased number of customers (λ) or an increased service time (s). The mean waiting time Wn increases by the third power of s, but only by the second power of λ. The mean waiting time wn for delayed customers increases with the second power of s, and the first power of λ. An increased load due to more customers is thus better than an increased load due to longer service times. Therefore, it is important that the service times of a system do not increase during overload.
Figure 9.7: State transition diagram for M/M/1.

9.5.1 Sojourn time for a single server

When there is only one server, the state probabilities (9.2) are given by a geometric series (9.30) for all i ≥ 0. Every customer spends an exponentially distributed time interval with intensity μ in every state. A customer who finds the system in state [i] shall stay in

the system an Erlang(i + 1) distributed time interval. Therefore, the sojourn time in the system (waiting time + service time), which also is called the response time, is exponentially distributed with intensity (μ − λ) (cf. Fig. 2.12):

$$F(t) = 1 - e^{-(\mu-\lambda)t}\,, \qquad \mu > \lambda\,, \quad t \ge 0\,. \qquad (9.33)$$

This is identical with the waiting time distribution of delayed customers. The mean sojourn time may be obtained directly using W1 from (9.32) and the mean service time s:

$$m_1 = W_1 + s = \frac{A\,s}{1-A} + s\,, \qquad (9.34)$$

$$m_1 = \frac{s}{1-A} = \frac{1}{\mu - \lambda}\,,$$

where μ = 1/s is the service rate. We notice that the mean sojourn time is equal to the mean waiting time for delayed customers (9.17). The mean sojourn time is by Little's law also equal to the mean value of the state probabilities divided by λ.

9.6 Palm's machine repair model

This model belongs to the class of cyclic queueing systems and corresponds to a pure delay system with a limited number of customers (cf. the Engset case for loss systems). The model was first considered by the Russian Gnedenko in 1933 and published in 1934. It became widely known when C. Palm published a paper in 1947 [93] in connection with a theoretical analysis of manpower allocation for servicing automatic machines. A number of S machines, which usually run automatically, are serviced by n repairmen. The machines may break down and then they have to be serviced by a repairman before running again. The problem is to adjust the number of repairmen to the number of machines so that the total costs are minimized (or the profit optimized). The machines may be textile machines which stop when they run out of thread; the repairmen then have to replace the empty spool of a machine with a full one. This Machine-Repair model or Machine Interference model was also considered by Feller (1950 [32]). The model corresponds to a simple closed queueing network and is successfully applied to solve traffic engineering problems in computer systems. By using Kendall's notation (Sec. 10.1) the queueing system is denoted by M/M/n/S/S, where S is the number of customers, and n is the number of servers. The model is widely applicable. In the Web, the machines correspond to clients whereas the repairmen correspond to servers. In computer terminal systems the machines correspond to terminals and a repairman corresponds to a computer managing the terminals. In a

computer system the machines may correspond to disc storages and the repairmen correspond to input/output (I/O) channels. In the following we will consider a computer terminal system as the background for the development of the theory.

9.6.1 Terminal systems

Time division is an aid in offering optimal service to a large group of customers using for example terminals connected to a mainframe computer. The individual user should feel that he is the only user of the computer (Fig. 9.8).
Figure 9.8: Palm's machine-repair model. A computer system with S terminals (an interactive system) corresponds to a waiting time system with a limited number of sources.

The individual terminal alternates between two states (Fig. 9.9): the user is thinking (working), or the user is waiting for a response from the computer. The time interval the user is thinking is a random variable Tt with mean value mt. The time interval, when the user is waiting for the response from the computer, is called the response time R. This includes both the time interval Tw (mean value mw), where the job is waiting for getting access to the computer, and the service time itself Ts (mean value ms). Tt + R is called the circulation time (Fig. 9.9). At the end of this time interval the terminal returns to the same state as it left at the beginning of the interval (recurrent event). In the following we are mainly interested in mean values, and the derivations are valid for all work-conserving queueing disciplines (Sec. 10.6.2).

Figure 9.9: The individual terminal may be in three dierent states. Either the user is working actively at the terminal (thinking), or he is waiting for response from the computer. The latter time interval (response time) is divided into two phases: a waiting phase and a service phase.

9.6.2 State probabilities – single server

We now consider a system with S terminals connected to one computer (n = 1). The thinking time of each thinking terminal is for now assumed to be exponentially distributed with intensity λ = 1/mt, and the service (execution) time at the computer is also assumed to be exponentially distributed with intensity µ = 1/ms. When there is a queue at the computer, the terminals have to wait for service; terminals being served or waiting in the queue have arrival intensity zero. State [i] is defined as the state where there are i terminals in the queueing system (Fig. 9.8), i.e. the computer is either idle (i = 0) or working (i > 0), and (i−1) terminals are waiting when i > 0. The queueing system can be modelled by a pure birth & death process, and the state transition diagram is shown in Fig. 9.10. Statistical equilibrium always exists (ergodic system). The arrival intensity decreases as the queue length increases and becomes zero when all terminals are inside the queueing system. The steady-state probabilities are found by applying cut equations to Fig. 9.10 and expressing all states in terms of state S:

    (S − i) λ · p(i) = µ · p(i+1),    i = 0, 1, . . . , S−1.    (9.35)

Figure 9.10: State transition diagram for the queueing system shown in Fig. 9.8. State [i] denotes the number of terminals being either served or waiting, i.e. S − i denotes the number of terminals thinking.

By the additional normalization constraint that the sum of all probabilities must be equal to one we find, introducing ϱ = µ/λ:

    p(S−i) = (ϱ^i / i!) · p(S) = (ϱ^i / i!) / ( Σ_{j=0}^{S} ϱ^j / j! ),    i = 0, 1, . . . , S,    (9.36)

    p(0) = E_{1,S}(ϱ).    (9.37)

This is the truncated Poisson distribution (4.9).

We may interpret the system as follows. A trunk group with S trunks (the terminals) is offered calls from the computer with exponentially distributed inter-arrival times (intensity µ). When all S trunks are busy (thinking), the computer is idle and the arrival intensity is zero, but we might just as well assume it still generates calls with intensity µ which are lost or overflow to another trunk group (the exponential distribution has no memory). The computer thus offers the traffic ϱ = µ/λ to S trunks, and we have the formula (9.37). Erlang's B-formula is valid for arbitrary holding times (Sec. 4.6.2) and therefore we have:

Theorem 9.1 The state probabilities of the machine-repair model (9.36)–(9.37) with one computer and S terminals are valid for arbitrary thinking time distributions when the service times of the computer are exponentially distributed. Only the mean thinking time is of importance.

The ratio ϱ = µ/λ between the mean time a terminal is thinking, 1/λ, and the mean time the computer serves a terminal, 1/µ, is called the service ratio. The service ratio corresponds to the offered traffic A in Erlang's B-formula. The state probabilities are thus determined by the number of terminals S and the service ratio ϱ. The numerical evaluation of (9.36) & (9.37) is of course as for Erlang's B-formula (4.29).
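The evaluation can be sketched in a few lines of code. Below is a minimal sketch (function names are ours) that computes E_{1,S}(ϱ) by the standard Erlang-B recursion (4.29) and from it the idle probability p(0) and the utilization of the computer; the numbers correspond to Example 9.6.1 below:

```python
def erlang_b(servers: int, offered: float) -> float:
    """Erlang-B blocking probability E_{1,n}(A) by the stable recursion
    E_0 = 1;  E_k = A*E_{k-1} / (k + A*E_{k-1})."""
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def machine_repair_idle(S: int, rho: float) -> float:
    """p(0) = E_{1,S}(rho): probability that the computer is idle in
    Palm's machine-repair model with one computer, S terminals and
    service ratio rho = m_t/m_s, cf. (9.37)."""
    return erlang_b(S, rho)

# Example 9.6.1: six discs (S = 6), service ratio rho = 4/0.8 = 5.
# The channel utilization is 1 - E_{1,6}(5), and with a mean reading
# time of 0.8 ms at most util/0.0008 requests complete per second.
util = 1.0 - machine_repair_idle(6, 5.0)
print(round(util, 4))        # 0.8082
print(round(util / 0.0008))  # 1010
```

The recursion runs in O(S) time and is numerically stable even for large S, which is why it is preferred over evaluating the truncated Poisson sum directly.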
Example 9.6.1: Information system
We consider an information system organized as follows. All information is kept on 6 discs, which are connected to the same input/output data terminal, a multiplexer channel. The average seek time (positioning of the seek arm) is 3 ms, and the average latency time to locate the file is 1 ms, corresponding to a rotation time of 2 ms. The time required for reading a file is exponentially distributed with mean value 0.8 ms. The disc storage is based on rotational position sensing, so that the channel is busy only during the reading. We want to find the maximum capacity of the system (number of requests per second).

The thinking time is 4 ms and the service time is 0.8 ms. The service ratio thus becomes ϱ = 5, and Erlang's B-formula gives the value:

    1 − p(0) = 1 − E_{1,6}(5) = 0.8082.

This corresponds to λ_max = 0.8082/0.0008 = 1010 requests per second. This utilization cannot be exceeded. □

9.6.3 Terminal states and traffic characteristics

The performance measures are easily obtained from the analogy with Erlang's classical loss system (9.37). Replacing p(0) by E_{1,S}(ϱ), the computer is working with probability {1 − E_{1,S}(ϱ)}. The average number of terminals being served by the computer (the utilization of the computer) is then given by:

    ns = 1 − E_{1,S}(ϱ).    (9.38)

The average number of thinking terminals corresponds to the traffic carried in Erlang's loss system:

    nt = (µ/λ) · {1 − E_{1,S}(ϱ)} = ϱ · {1 − E_{1,S}(ϱ)}.    (9.39)

The average number of waiting terminals becomes:

    nw = S − ns − nt    (9.40)

       = S − {1 − E_{1,S}(ϱ)} − ϱ · {1 − E_{1,S}(ϱ)}

       = S − {1 − E_{1,S}(ϱ)} · {1 + ϱ}.    (9.41)

If we consider a random terminal at a random point of time, we get:

    p{terminal served}   = ps = ns/S = {1 − E_{1,S}(ϱ)} / S,    (9.42)

    p{terminal thinking} = pt = nt/S = ϱ · {1 − E_{1,S}(ϱ)} / S,    (9.43)

    p{terminal waiting}  = pw = nw/S = 1 − {1 − E_{1,S}(ϱ)} · {1 + ϱ} / S.    (9.44)
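The measures (9.38)–(9.44) follow directly from a single Erlang-B evaluation. A small sketch (helper names ours), evaluated at ϱ = 30 and S = 31 where, as noted in Example 9.6.2 below, both the computer and each terminal wait 11.4 % of the time:

```python
def erlang_b(servers: int, offered: float) -> float:
    # Erlang-B recursion: E_0 = 1; E_k = A*E_{k-1}/(k + A*E_{k-1})
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def terminal_measures(S: int, rho: float) -> dict:
    """Mean numbers (9.38)-(9.41) and terminal-state probabilities
    (9.42)-(9.44) for the single-server machine-repair model."""
    E = erlang_b(S, rho)
    ns = 1.0 - E           # mean number served = computer utilization
    nt = rho * (1.0 - E)   # mean number thinking
    nw = S - ns - nt       # mean number waiting
    return {"ns": ns, "nt": nt, "nw": nw,
            "ps": ns / S, "pt": nt / S, "pw": nw / S}

m = terminal_measures(31, 30.0)
# With S = rho + 1 = 31, the waiting probability of a terminal equals
# the idle probability of the computer: both are E_{1,31}(30).
print(round(m["pw"], 3))   # 0.114
```

Note the identity used in the comment: for S = ϱ + 1, (9.44) reduces to pw = E_{1,S}(ϱ), so the terminal and computer loss coefficients coincide.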

We are also interested in the response time R, which has the mean value mr = mw + ms. By applying Little's theorem L = λW to the terminals, the waiting positions and the computer, respectively, we obtain (denoting the circulation rate of jobs by λ'):

    mt/nt = mw/nw = ms/ns = mr/(nw + ns) = 1/λ',    (9.45)

or:

    mr = {(nw + ns)/ns} · ms = {(S − nt)/ns} · ms.

Making use of (9.45), which gives nt · ms = ns · mt, and of (9.38) we get:

    mr = (S · ms)/ns − mt

       = (S · ms) / {1 − E_{1,S}(ϱ)} − mt.    (9.46)

Thus the mean response time is insensitive to the time distributions, as it is based on (9.38) and (9.45) (Little's law). However, E_{1,S}(ϱ) will depend on the types of distributions in the same way as the Erlang-B formula. If the service time of the computer is exponentially distributed (mean value ms = 1/µ), then E_{1,S}(ϱ) will be given by (9.37). Fig. 9.11 shows the response time as a function of the number of terminals in this case. If all time intervals are constant, the computer may work all the time, serving K terminals without any delay, when:

    K = (mt + ms)/ms = ϱ + 1.    (9.47)

K is a suitable parameter to describe the point of saturation of the system. The average waiting time for an arbitrary terminal is obtained from (9.46):

    mw = mr − ms.
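Formula (9.46) is easy to evaluate numerically. A sketch (function names ours) reproducing the behaviour plotted in Fig. 9.11, where mr approaches the straight line S·ms − mt well above the saturation point K:

```python
def erlang_b(servers: int, offered: float) -> float:
    # Erlang-B recursion: E_0 = 1; E_k = A*E_{k-1}/(k + A*E_{k-1})
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def mean_response_time(S: int, mt: float, ms: float) -> float:
    """Mean response time (9.46): mr = S*ms/(1 - E_{1,S}(rho)) - mt,
    with service ratio rho = mt/ms."""
    rho = mt / ms
    return S * ms / (1.0 - erlang_b(S, rho)) - mt

# rho = 30 as in Fig. 9.11 (mt = 30, ms = 1); for large S the curve
# approaches the asymptote S - 30.
for S in (10, 30, 50, 100):
    print(S, round(mean_response_time(S, 30.0, 1.0), 3))
```

For S = 100 the blocking probability E_{1,100}(30) is negligible, so the printed value is very close to the asymptote 100 − 30 = 70 mean service times.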

Example 9.6.2: Time-sharing computer
In a terminal system the computer is sometimes idle (waiting for the terminals), and the terminals sometimes wait for the computer. Few terminals result in a low utilization of the computer, whereas many connected terminals waste the time of the users. Fig. 9.12 shows the waiting-time traffic in erlang, both for the computer and for a single terminal. An appropriate weighting by costs and summation of the waiting times for both the computer and all terminals gives the total costs of waiting.

Figure 9.11: The actual average response time experienced by a terminal as a function of the number of terminals. The service factor is ϱ = 30 (time unit: mean service time 1/µ). The average response time converges to a straight line, cutting the x-axis at S = 30 terminals. The average virtual response time for a system with S terminals is equal to the actual average response time for a system with S + 1 terminals (the arrival theorem, Theorem 5.1).

For the example in Fig. 9.12 we obtain the minimum total delay costs at about 45 terminals when the cost of waiting for the computer is a hundred times the cost of one waiting terminal. At 31 terminals both the computer and each terminal spend 11.4 % of the time waiting. If the cost ratio is 31, then 31 is the optimal number of terminals. However, there are several other factors to be taken into consideration. □

Example 9.6.3: Traffic congestion
We may define the traffic congestion in the usual way (Sec. 1.9). The offered traffic is the traffic carried when there is no queue. The offered traffic per source is (5.10):

    a = ms/(mt + ms) = 1/(1 + ϱ).

The carried traffic per source is:

    y = ms/(mt + mw + ms).

The traffic congestion becomes:

    C = (a − y)/a = 1 − (mt + ms)/(mt + mw + ms) = mw/(mt + mw + ms) = pw.

In this case, with a finite number of sources, the traffic congestion becomes equal to the proportion of time spent waiting. For Erlang's waiting-time system the traffic congestion is zero because all offered traffic is carried. □
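The identity C = pw can be checked numerically by computing the two quantities along independent routes: C from the mean times via (9.46), and pw directly from (9.44). A quick sketch (helper names ours, single-server model assumed):

```python
def erlang_b(servers: int, offered: float) -> float:
    # Erlang-B recursion: E_0 = 1; E_k = A*E_{k-1}/(k + A*E_{k-1})
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def congestion_equals_pw(S: int, mt: float, ms: float):
    """Compare C = (a - y)/a with pw = nw/S for the machine-repair model."""
    rho = mt / ms
    E = erlang_b(S, rho)
    # Terminal-state waiting probability (9.44):
    pw = 1.0 - (1.0 - E) * (1.0 + rho) / S
    # Mean waiting time per circulation: mw = mr - ms, with mr from (9.46)
    mr = S * ms / (1.0 - E) - mt
    mw = mr - ms
    a = ms / (mt + ms)        # offered traffic per source
    y = ms / (mt + mw + ms)   # carried traffic per source
    C = (a - y) / a           # traffic congestion
    return C, pw

C, pw = congestion_equals_pw(31, 30.0, 1.0)
print(abs(C - pw) < 1e-12)   # True
```

The agreement is exact (up to floating-point rounding) because the circulation time mt + mw + ms equals S·ms/{1 − E_{1,S}(ϱ)}, so both expressions reduce to 1 − {1 − E_{1,S}(ϱ)}{1 + ϱ}/S.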

Figure 9.12: The waiting-time traffic (the proportion of time spent waiting) measured in erlang for the computer and for a single terminal, respectively, in an interactive queueing system (service factor ϱ = 30).

9.6.4 Machine-repair model with n servers

The above model is easily generalized to n computers. The state transition diagram is shown in Fig. 9.13.

Figure 9.13: State transition diagram for the machine-repair model with S terminals and n computers.

The steady-state probabilities become:

    p(i) = C(S, i) · (λ/µ)^i · p(0),    0 ≤ i ≤ n,

    p(i) = {(S−n)! / (S−i)!} · {λ/(nµ)}^{i−n} · p(n),    n ≤ i ≤ S,    (9.48)

where C(S, i) denotes the binomial coefficient, and where we have the normalization constraint:

    Σ_{i=0}^{S} p(i) = 1.    (9.49)

We can show that the state probabilities are insensitive to the thinking time distribution, as in the case with one computer (we get a state-dependent Poisson arrival process). An arbitrary terminal is, at a random point of time, in one of three possible states:

    ps = p{the terminal is served by a computer},
    pw = p{the terminal is waiting for service},
    pt = p{the terminal is thinking}.

We have:

    ps = (1/S) · { Σ_{i=0}^{n} i · p(i) + Σ_{i=n+1}^{S} n · p(i) },    (9.50)

    pt = ϱ · ps,    (9.51)

    pw = 1 − ps − pt.    (9.52)

The mean utilization of the computers becomes:

    a = ns/n = (S · ps)/n.    (9.53)

The mean waiting time for a terminal becomes:

    W = (pw/ps) · (1/µ).    (9.54)

Sometimes pw is called the loss coefficient of the terminals, and similarly (1 − a) is called the loss coefficient of the computers (Fig. 9.12).
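The state probabilities (9.48)–(9.49) and the measures (9.50)–(9.54) are conveniently computed by a birth & death recursion. A sketch (function name ours) that reproduces the table in Example 9.6.4 below; W is returned in units of the mean service time 1/µ:

```python
def machine_repair(S: int, n: int, rho: float):
    """Machine-repair model with S terminals, n computers and service
    ratio rho = mu/lambda.  Returns (ps, pw, pt, a, W), W in units of
    the mean service time 1/mu."""
    lam_over_mu = 1.0 / rho
    # Unnormalized state probabilities, i = number at the computers,
    # via the recursion behind (9.48):
    #   (S-i+1)*lambda * p(i-1) = min(i, n)*mu * p(i)
    q = [1.0]
    for i in range(1, S + 1):
        q.append(q[-1] * (S - i + 1) * lam_over_mu / min(i, n))
    norm = sum(q)                                 # (9.49)
    p = [x / norm for x in q]
    ps = sum(min(i, n) * p[i] for i in range(S + 1)) / S   # (9.50)
    pt = rho * ps                                          # (9.51)
    pw = 1.0 - ps - pt                                     # (9.52)
    a = S * ps / n                                         # (9.53)
    W = pw / ps                                            # (9.54) * (1/mu)
    return ps, pw, pt, a, W

ps, pw, pt, a, W = machine_repair(30, 1, 30.0)
# Book table (Example 9.6.4, n = 1): ps = 0.0289, pw = 0.1036,
# pt = 0.8675, W = 3.5805
print(ps, pw, pt, a, W)
```

For n = 1 the recursion reduces to the truncated Poisson distribution of (9.36), so p(0) equals E_{1,S}(ϱ) and ps = {1 − E_{1,S}(ϱ)}/S.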
Example 9.6.4: Numerical example of economy of scale
The following numerical example illustrates that we obtain the highest utilization for large values of n (and S). Let us consider a system with S/n = 30 and µ/λ = 30 for an increasing number of computers (in this case pt = a; W is in units of the mean service time):

    n     ps       pw       pt       a        W
    1     0.0289   0.1036   0.8675   0.8675   3.5805
    2     0.0300   0.0712   0.8989   0.8989   2.3754
    4     0.0307   0.0477   0.9215   0.9215   1.5542
    8     0.0313   0.0311   0.9377   0.9377   0.9945
    16    0.0316   0.0195   0.9489   0.9489   0.6155

□

9.7 Optimizing the machine-repair model

In this section we optimize the machine-repair model in the same way as Palm did in 1947. We have noticed that the model for a single repairman is identical with Erlang's loss system, which we optimized in Chap. 4. We will thus see that the same model can be optimized in several ways. We consider a terminal system with one computer and S terminals, and we want to find an optimal value of S. We assume the following structure of costs:

    ct = cost per terminal per time unit a terminal is thinking,
    cw = cost per terminal per time unit a terminal is waiting,
    cs = cost per terminal per time unit a terminal is served,
    ca = cost of the computer per time unit.

The cost of the computer is supposed to be independent of the utilization and is split uniformly among all terminals.

Figure 9.14: The machine-repair model. The total costs given in (9.58) are shown as a function of the number of terminals for a service ratio ϱ = 25 and a cost ratio r = 1/25 (cf. Fig. 4.7).

The outcome (product) of the process is a certain thinking time at the terminals (production time). The total costs c0 per time unit a terminal is thinking (producing) become:

    pt · c0 = pt · ct + ps · cs + pw · cw + (1/S) · ca.    (9.55)

We want to minimize c0. The service ratio ϱ = mt/ms is equal to pt/ps. Introducing the cost ratio r = cw/ca, we get:

    c0 = ct + (1/pt) · { pw · cw + ps · cs + (1/S) · ca }

       = ct + (1/ϱ) · cs + ca · { r · pw + (1/S) } / pt,    (9.56)

which is to be minimized as a function of S. Only the last term depends on the number of terminals, and we get:

    min_S {c0} = min_S { (r · pw + 1/S) / pt }    (9.57)

               = min_S { (r · (nw/S) + 1/S) / (nt/S) }

               = min_S { (r · nw + 1) / nt }

               = min_S { (r · [S − {1 − E_{1,S}(ϱ)}{1 + ϱ}] + 1) / (ϱ · {1 − E_{1,S}(ϱ)}) }

               = min_S { (r · S + 1) / (ϱ · {1 − E_{1,S}(ϱ)}) − r · (1 + 1/ϱ) },    (9.58)

where E_{1,S}(ϱ) is Erlang's B-formula (9.36). We notice that the minimum is independent of ct and cs, and that only the ratio r = cw/ca appears. The numerator corresponds to (4.47), whereas the denominator corresponds to the traffic carried in the corresponding loss system. Thus we minimize the cost per carried erlang in the corresponding loss system. An example is shown in Fig. 9.14. We notice that the result deviates from the result obtained by using Moe's principle for Erlang's loss system (Fig. 4.7), where we optimize the profit.
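Since S is an integer, the minimization (9.58) is conveniently done by direct search. A sketch (names ours) for the parameters of Fig. 9.14, ϱ = 25 and r = 1/25; expressions (9.57) and (9.58) must of course give the same value:

```python
def erlang_b(servers: int, offered: float) -> float:
    # Erlang-B recursion: E_0 = 1; E_k = A*E_{k-1}/(k + A*E_{k-1})
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def cost_term(S: int, rho: float, r: float) -> float:
    """The S-dependent part of c0, last line of (9.58)."""
    E = erlang_b(S, rho)
    return (r * S + 1.0) / (rho * (1.0 - E)) - r * (1.0 + 1.0 / rho)

def optimal_terminals(rho: float, r: float, s_max: int = 200) -> int:
    """Number of terminals minimizing the S-dependent cost term."""
    costs = {S: cost_term(S, rho, r) for S in range(1, s_max + 1)}
    return min(costs, key=costs.get)

S_opt = optimal_terminals(25.0, 1.0 / 25.0)
print(S_opt, round(cost_term(S_opt, 25.0, 1.0 / 25.0), 4))
```

For large S the term grows like r·S/ϱ, and for small S the denominator ϱ{1 − E_{1,S}(ϱ)} is small, so the minimum is always interior, as in Fig. 9.14.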

9.8 Waiting time distribution for M/M/n/S/S – FCFS

We consider a finite-source system with S sources and n channels. Both thinking times and service times are assumed to be exponentially distributed, with rates λ and µ, respectively. Due to the arrival theorem, an arriving call observes the state probabilities of a system with S − 1 sources. We renumber the states so that the state is defined as the number of thinking sources. We denote the state probabilities of a system with S − 1 sources by:

    p_{S−1}(i),    i = 0, 1, . . . , S−1.    (9.59)

Figure 9.15: The upper part shows the state transition diagram for the machine-repair model with S terminals and n computers, where the state of the system is defined as the number of thinking customers (cf. Fig. 9.13). The middle diagram shows the same model with S − 1 sources, i.e. the states seen by an arriving customer according to the arrival theorem. The lower part shows the subset of states from state [0] to state [S−n−1], which corresponds to the diagram of an Erlang loss system with S−n−1 channels.

The probability of delay pw, and the probability of immediate service px (pw + px = 1), become:
    pw = Σ_{i=0}^{S−n−1} p_{S−1}(i),    (9.60)

    px = Σ_{i=S−n}^{S−1} p_{S−1}(i).    (9.61)

We consider only delayed calls: an arriving call observes a system with S − 1 sources and will be delayed if it observes one of the states {0, 1, . . . , S−n−1} (9.60), where all servers are occupied. This part of the state transition diagram corresponds to an Erlang loss system with arrival rate nµ, service rate λ, i.e. an offered traffic A = nµ/λ, and S−n−1 servers. These probabilities may be calculated accurately as described in Sec. 4.4. Thus the conditional state probabilities are given by the truncated Poisson distribution (4.9):

    p_{S−1,w}(i) = (A^i / i!) / ( 1 + A + A²/2! + . . . + A^{S−1−n}/(S−1−n)! ),    i = 0, 1, . . . , S−1−n,    (9.62)

where A = nµ/λ. The state probabilities (9.62) are a subset of (9.59). In state [0] no customers arrive, as they all are waiting or being served. In state [1] all servers are busy and

S−n−2 customers are waiting. Thus the waiting time will be Erlang-(S−n−1) distributed. In state [S−n−1] all servers are busy but no one is waiting, so the waiting time becomes Erlang-1 distributed. In general, in state [i] (0 ≤ i ≤ S−1−n) the waiting time becomes Erlang-(S−n−i) distributed. The Erlang-k distribution with intensity nµ is (3.21):

    F_k(t) = ∫_0^t { (nµx)^{k−1} / (k−1)! } · nµ · e^{−nµx} dx

           = Σ_{j=k}^{∞} { (nµt)^j / j! } · e^{−nµt}

           = 1 − Σ_{j=0}^{k−1} { (nµt)^j / j! } · e^{−nµt}.    (9.63)

Thus for a given value of t we can calculate the distribution function F_k(t) by calculating the first k terms (0, 1, . . . , k−1) of a Poisson distribution with parameter nµt. For small mean values this can be done directly. For large mean values it can be done in a numerically stable way, as for example shown in Example 4.4.1. The mean value of this Erlang-k distribution is k/(nµ). The compound waiting time distribution for delayed customers is obtained by summation over all states:

    F_w(t) = Σ_{i=0}^{S−1−n} p_{S−1,w}(i) · F_{S−n−i}(t),    (9.64)

where p_{S−1,w}(i) is given by (9.62) and F_k(t) is given by (9.63). Both of these can be calculated accurately, and thus the waiting time distribution is obtained from a finite number of terms. The mean waiting time w for a delayed customer becomes:

    w = Σ_{i=0}^{S−n−1} p_{S−1,w}(i) · (S−n−i) / (nµ)

      = (S−n)/(nµ) − (1/(nµ)) · Σ_{i=0}^{S−n−1} i · p_{S−1,w}(i),

    w = { (S−n) − Y } / (nµ),    (9.65)

where Y is the traffic carried in the above Erlang loss system (9.62) with S−n−1 servers. The mean waiting time for all customers then becomes:

    W = pw · w,    (9.66)

where pw is given above (9.60).
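The whole computation (9.60), (9.65) and (9.66) can be sketched as follows (function names ours). Within the delay states, the top state probability p(S−n−1) equals the Erlang-B blocking E_{1,S−n−1}(A), and the two states above it follow from cut equations, exactly as in Example 9.8.1 below:

```python
def erlang_b(servers: int, offered: float) -> float:
    # Erlang-B recursion; erlang_b(S-n-1, A) = p_{S-1,w}(S-n-1)
    e = 1.0
    for k in range(1, servers + 1):
        e = offered * e / (k + offered * e)
    return e

def finite_source_waiting(S: int, n: int, mu: float, lam: float):
    """Probability of delay pw, mean wait w of delayed customers, and
    overall mean wait W for M/M/n/S/S-FCFS, via (9.60), (9.65), (9.66)."""
    A = n * mu / lam                 # offered traffic of the loss system
    E = erlang_b(S - n - 1, A)
    Y = A * (1.0 - E)                # carried traffic of the loss system
    w = (S - n - Y) / (n * mu)       # (9.65)
    # States [S-n], ..., [S-1] relative to the delay states (which sum
    # to one); cut equation: i*lam*p(i) = min(S-i, n)*mu*p(i-1).
    extra, q = 0.0, E
    for i in range(S - n, S):
        q *= min(S - i, n) * mu / (i * lam)
        extra += q
    pw = 1.0 / (1.0 + extra)         # normalized (9.60)
    return pw, w, pw * w             # (9.66)

pw, w, W = finite_source_waiting(60, 2, 1.0, 1.0 / 30.0)
# Example 9.8.1 gives w = 2.851280, pw = 0.833105, W = 2.375414
print(pw, w, W)
```

Note that only a single Erlang-B evaluation and a short recursion are needed; no summation over the full state space is required for the mean values.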

Example 9.8.1: Mean waiting times for (n, S, A) = (2, 60, 60)
We consider a system with n = 2 servers, S = 60 sources, and A = 60 erlang. We choose the mean service time as time unit, 1/μ = 1. Thus 1/λ = 30 [time units]. From (9.65) we get:

  w = {(60−2) − 60 · (1 − E_57(60))} / 2 = {(60−2) − 60 · (1 − 0.128376)} / 2 ,

  w = 2.851280 [mean service times] .

An arriving customer is either delayed or served immediately. Above we considered the states (0, 1, . . . , 57), where a customer is delayed. These state probabilities add to one, when p(57) = E_57(60) = 0.128376. We now find the states p(58) and p(59) expressed by the state probability p(57):

  p(58) = p(57) · 60/58 = 0.132803 ,

  p(59) = p(58) · 30/59 = 0.067527 .

Thus the state probabilities now add to 1.200330, and the normalized probabilities of delay before service, respectively immediate service, become:

  p_w = 1 / 1.200330 = 0.833105 ,

  p_x = 0.200330 / 1.200330 = 0.166895 .

The mean waiting time for all customers then becomes (9.66):

  W = 2.375414 [time units] ,

which is in agreement with Example 9.6.4. The circulation time becomes

  t_c = idle time + waiting time + service time = 30 + 2.375414 + 1 = 33.375414 [time units] ,

and the state probabilities (time averages) (p_s, p_w, p_t) become as given in Example 9.6.4. □
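The figures above can be reproduced with a few lines of code. Below is a small Python sketch (function and variable names are our own; the Erlang B recursion E_m = A·E_{m−1}/(m + A·E_{m−1}) is the standard one) that recomputes w, p_w and W for (n, S, A) = (2, 60, 60):

```python
def erlang_b(A, m):
    # Standard Erlang B recursion: E_0 = 1, E_m = A*E_{m-1}/(m + A*E_{m-1})
    E = 1.0
    for k in range(1, m + 1):
        E = A * E / (k + A * E)
    return E

n, S, A, mu = 2, 60, 60.0, 1.0       # Example 9.8.1, time unit = 1/mu
E57 = erlang_b(A, S - 1 - n)         # E_57(60) = p(57)
Y = A * (1.0 - E57)                  # carried traffic in the loss system, (9.65)
w = (S - n - Y) / (n * mu)           # mean waiting time for delayed customers

p58 = E57 * 60 / 58                  # states seen by non-delayed customers
p59 = p58 * 30 / 59
pw = 1.0 / (1.0 + p58 + p59)         # probability of delay
W = pw * w                           # (9.66): mean waiting time, all customers
print(w, pw, W)
```

The same few lines reproduce Example 9.8.2 when the parameters are replaced by (n, S, A) = (2, 10, 10).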

Example 9.8.2: Mean waiting times for (n, S, A) = (2, 10, 10)
We consider a system with n = 2 servers, S = 10 sources, and A = 10 erlang. We choose the mean service time as time unit, 1/μ = 1. Thus 1/λ = 5 [time units]. From (9.65) we get:

  w = {(10−2) − 10 · (1 − E_7(10))} / 2 = {(10−2) − 10 · (1 − 0.409041)} / 2 ,

  w = 1.045205 [mean service times] .

An arriving customer is either delayed or served immediately. Above we considered the states (0, 1, . . . , 7), where a customer is delayed. These state probabilities add to one, when p(7) = E_7(10) = 0.409041. We now find the states p(8) and p(9) expressed by the state probability p(7):

  p(8) = p(7) · 10/8 = 0.511301 ,

  p(9) = p(8) · 5/9 = 0.284056 .

Thus the state probabilities now add to 1.795358, and the normalized probabilities of delay before service, respectively immediate service, become:

  p_w = 1 / 1.795358 = 0.556992 ,

  p_x = 0.795358 / 1.795358 = 0.443008 .

The mean waiting time for all customers then becomes (9.66):

  W = 0.582171 [time units] .

The circulation time becomes

  t_c = idle time + waiting time + service time = 5 + 0.582171 + 1 = 6.582171 [time units] . □

2010-04-13

Chapter 10 Applied Queueing Theory


So far we have considered classical queueing systems, where all traffic processes are birth and death processes. They play a key role in queueing theory. The theory of loss systems has been successfully applied for many years within the field of telephony, whereas the theory of delay systems has been applied within the field of data and computer systems. To find a simple analytical solution we have to assume either a Poisson arrival process or exponentially distributed service times. In this chapter we mainly focus on the single server queue. In Sec. 10.1 we introduce Kendall's notation for queueing systems and describe queueing disciplines and priority systems. Sec. 10.2 mentions some general results and concepts, such as Little's law, work conservation, and the load function. The important Pollaczek-Khintchine formula for M/G/1 is derived in Sec. 10.3, where we also list some results for the busy period and the moments of waiting time distributions. State probabilities for a finite buffer system are obtained by Keilson's formula from the infinite buffer state probabilities. The first paper on queueing theory was published by Erlang in 1909 and dealt with queueing systems with constant service times, M/D/n. These are more complex than Markovian systems. In Sec. 10.4 we deal with this system in detail and derive state probabilities and the waiting time distribution for FCFS expressed by state probabilities. A system with Erlang-k arrival process, constant service time, and r servers is equivalent to a system with Poisson arrival process, constant service time, and k · r servers. In Sec. 10.5 we consider single-server systems with exponential service times and general renewal arrival processes. Sec. 10.6 considers several classes of customers with different priorities and different service time distributions. In Sec. 10.6.1 parameters of the individual arrival processes and the total arrival process are described. Kleinrock's conservation law is derived in Sec. 10.6.2.
Mean waiting times assuming non-preemptive disciplines are derived in Sec. 10.6.6. As a special case we find the mean waiting time for the shortest-job-first queueing discipline (Sec. 10.6.4). For the preemptive-resume queueing discipline mean waiting times are also derived in Sec. 10.6. Finally, we consider round robin and processor sharing queueing disciplines in Sec. 10.7.


10.1  Kendall's classification of queueing models

In this section we introduce a compact notation for queueing systems, called Kendall's notation.

10.1.1  Description of traffic and structure

D.G. Kendall (1951 [69]) introduced the following notation for queueing models:

  A/B/n

where A = arrival process, B = service time distribution, n = number of servers.

For traffic processes we use the following standard notations (cf. Sec. 2.4):

  M   : Markov. Exponential time intervals (Poisson arrival process, exponentially distributed service times).
  D   : Deterministic. Constant time intervals.
  E_k : Erlang-k distributed time intervals (E_1 = M).
  H_n : Hyper-exponential of order n distributed time intervals.
  Cox : Cox-distributed time intervals.
  Ph  : Phase-type distributed time intervals.
  GI  : General Independent time intervals, renewal arrival process.
  G   : General. Arbitrary distribution of time intervals (may include correlation).

Example 10.1.1: Ordinary queueing models
M/M/n: a pure delay system with Poisson arrival process, exponentially distributed service times, and n servers. This is the classical Erlang delay system (Chap. 9).
GI/G/1: a general delay system with only one server. □

The above notation is widely used in the literature. For a complete specification of a queueing system more information is required:

  A/B/n/K/S/X

where:

  K : the total capacity of the system, or only the number of waiting positions,
  S : the population size (number of customers),
  X : queueing discipline (Sec. 10.1.2).

K = n corresponds to a loss system, which is often denoted as A/B/n-Loss. A superscript b on A, respectively B, indicates group arrival (bulk arrival, batch arrival), respectively group service. An index c (clocked) may indicate that the system operates in discrete time. Full accessibility is usually assumed.

10.1.2  Queueing strategy: disciplines and organization

Customers waiting in a queue to be served can be selected for service according to many different principles. We first consider the three classical queueing disciplines:

FCFS: First Come First Served. This is also called an ordered queue, and it is the standard discipline in real life where the customers are human beings. It is also denoted FIFO: First In First Out. Note that FIFO refers to the queue only, not to the total system. If we have more than one server, then a customer with a short service time may overtake a customer with a long waiting time even if we have a FIFO queue.

LCFS: Last Come First Served. This corresponds to the stack principle. It is for instance used in storages, on shelves of shops, etc. This discipline is also denoted LIFO: Last In First Out.

SIRO: Service In Random Order. All customers waiting in the queue have the same probability of being chosen for service. This is also called RANDOM or RS (Random Selection).

The first two disciplines only take arrival times into consideration, while the third does not consider any criteria at all and thus does not require any memory (contrary to the first two). They can be implemented in simple technical systems. Within an electro-mechanical telephone exchange the queueing discipline SIRO was often used, as it corresponds (almost) to sequential hunting without homing.

The total waiting time for all customers, and thus the mean waiting time, is the same for the three above-mentioned disciplines. The queueing discipline only decides how the waiting time is distributed among the customers. In, for example, a stored-program-controlled system there may be more complicated queueing disciplines. In queueing theory we generally assume that the total offered traffic is independent of the queueing discipline.

We often try to reduce the total waiting time. This can be done by using the service time as criterion:

SJF: Shortest Job First (SJN = Shortest Job Next, SPF = Shortest Processing time First). This discipline assumes that we know the service time in advance, and it minimizes the total waiting time for all customers.

The above-mentioned disciplines take account of either the arrival times or the service times. A compromise between these is obtained by the following disciplines:

RR: Round Robin. A customer being served is given at most a fixed service time (time slice or slot). If the service is not completed during this interval, the customer returns to the queue, which is FCFS. When the time slice converges to zero we get:

PS: Processor Sharing. All customers share the service capacity equally.

FB: Foreground Background. This discipline attempts to implement SJF without knowing the service times in advance. The server offers service to the customer who so far has received the least amount of service. When all customers have obtained the same amount of service, FB becomes identical with PS.

The last-mentioned disciplines are dynamic, as the queueing discipline depends on the amount of time spent in the queue.

10.1.3  Priority of customers

In real life customers are often divided into N priority classes, where a customer belonging to class p has higher priority than a customer belonging to class p+1. We distinguish between two types of priority:

Non-preemptive = HOL: A new customer waits until a server becomes idle, even if the server is serving a customer of lower priority. Furthermore, it also waits until all customers of higher priority, and customers of the same priority that arrived earlier, have been served. This discipline is also called HOL = Head-Of-the-Line.

Preemptive: A customer being served that has lower priority than a newly arriving customer is interrupted. We distinguish between:

  Preemptive resume = PR: The service is continued from where it was interrupted,
  Preemptive without re-sampling: The service restarts from the beginning with the same service time, and

  Preemptive with re-sampling: The service starts again with a new service time.

The two latter disciplines are applied in, for example, manufacturing systems and reliability. Within a single class, we have the disciplines mentioned in Sec. 10.1.2. In the queueing literature we meet many other strategies and symbols. GD denotes an arbitrary queueing discipline (general discipline). The behavior of customers is also subject to modeling:

Balking refers to queueing systems where customers may, with a queue-length dependent probability, give up joining the queue.

Reneging or time-out refers to systems with impatient customers which abandon the queue without being served.

Jockeying refers to systems where the customers may jump from one (e.g. long) queue to another (e.g. shorter) queue to obtain faster service.

Thus, by combining all options, there are many possible models. In this chapter we shall only deal with the most important ones. We mainly consider systems with one server.

Example 10.1.2: Stored Program Controlled (SPC) switching system
In SPC systems the tasks of the processors may, for example, be divided into ten priority classes. The priority is updated, for example, every 5th millisecond. Error messages from a processor have the highest priority, whereas routine tasks of control have the lowest priority. Serving accepted calls has higher priority than detection of new call attempts. □

10.2  General results in queueing theory

As mentioned earlier there are many different queueing models, but unfortunately there are only few general results in queueing theory. The literature is very extensive, because many special cases are important in practice. In this section we look at the most important general results. Little's theorem presented in Sec. 3.3 is the most general result; it is valid for an arbitrary queueing system. The theorem is easy to apply and very useful in many cases.

Classical queueing models play a key role in queueing theory, because other systems often converge to these when the number of servers increases (Palm's theorem 3.1 in Sec. 3.7). The systems that deviate most from the classical models are the systems with a single server. However, these systems are also the simplest to deal with.

For waiting time systems we also distinguish between call averages and time averages. The virtual waiting time is the waiting time a customer would experience if the customer arrived at a random point of time (time average). The actual waiting time is the waiting time the real customers experience (call average). When we consider systems with FCFS queueing discipline and Poisson arrival processes, the virtual waiting time is equal to the actual waiting time due to the PASTA property (time averages are equal to call averages).

Figure 10.1: Load function U(t) for the single server queueing system GI/G/1.

10.2.1  Load function and work conservation

We introduce two concepts which are widely used in queueing theory.

Work conservation: A system is said to be work conserving if no server is idle when there is at least one job waiting, and the service times are independent of the service discipline. This is not always fulfilled in real systems. If the server is a human being, the service rate will often increase with the length of the queue, but after some time the server may become exhausted and the service rate decreases.

Load function: U(t) denotes the time it would require to serve the customers which are in the system at time t (Fig. 10.1). At an arrival epoch U(t) increases with a jump equal to the service time of the arriving customer, and between arrivals U(t) decreases with a slope depending on the number of working servers until it reaches zero, where it stays until the next arrival epoch. The mean value of the load function is denoted by U = E{U(t)}. In a GI/G/1 queueing system U(t) is independent of the queueing discipline, if the system is work conserving.

For FCFS queueing systems the waiting time of a customer is equal to the load function at the arrival epoch. If we denote the inter-arrival time T_{i+1} − T_i by a_i and the service time of customer i by s_i, then we have Lindley's equation:

  U_{i+1} = max{0, U_i + s_i − a_i} ,     (10.1)

where U_i is the value of the load function at time T_i.
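Lindley's equation is also the basis of a common simulation technique: iterating (10.1) over sampled inter-arrival and service times yields the FCFS waiting times of a GI/G/1 queue. Below is a small sketch (names are our own) which, for testing purposes only, uses Poisson arrivals and exponential service, so the sample mean can be compared with the known M/M/1 value W = A·s/(1−A):

```python
import random

random.seed(1)

def lindley_waits(n_cust, interarrival, service):
    # Lindley's recursion (10.1): U_{i+1} = max(0, U_i + s_i - a_i).
    # With FCFS, customer i waits exactly U at its arrival epoch.
    U, waits = 0.0, []
    for _ in range(n_cust):
        waits.append(U)
        U = max(0.0, U + service() - interarrival())
    return waits

lam, mu = 0.5, 1.0                       # an M/M/1 test case: A = lam/mu = 0.5
w = lindley_waits(200_000,
                  lambda: random.expovariate(lam),
                  lambda: random.expovariate(mu))
mean_w = sum(w) / len(w)
print(mean_w)                            # should be near A/(1-A) = 1.0
```

Replacing the two sampling functions by any other distributions gives a GI/G/1 simulation with no further changes.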

10.3  Pollaczek-Khintchine's formula for M/G/1

In general, the mean waiting time for M/G/1 is given by:

Theorem 10.1 Pollaczek-Khintchine's formula (1930-32):

  W = A ε s / (2 (1−A)) ,     (10.2)

  W = V / (1−A) ,     (10.3)

where

  V = A ε s / 2 = λ m_2 / 2 .     (10.4)

W is the mean waiting time for all customers, s is the mean service time, A is the offered traffic, and ε is the form factor of the holding time distribution (2.13).

The more regular the service process is, the smaller the mean waiting time will be. The corresponding result for the arrival process is studied in Sec. 10.5. In real telephone traffic the form factor will often be 4 to 6, in data traffic 10 to 100. Formula (10.2) is one of the most important results in queueing theory, and we will study it carefully. As a special case we have earlier derived the mean waiting time for M/M/1, where ε = 2 (Sec. 9.2.4). Later we consider M/D/1, where ε = 1 (Sec. 10.4).
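As a small numerical illustration of (10.2), the following sketch (the function name is our own) evaluates W for A = 0.8 and s = 1 with the two form factors just mentioned:

```python
def pk_mean_wait(A, s, eps):
    # Pollaczek-Khintchine (10.2): W = A * eps * s / (2 * (1 - A))
    assert 0 <= A < 1, "statistical equilibrium requires A < 1"
    return A * eps * s / (2.0 * (1.0 - A))

A, s = 0.8, 1.0
W_mm1 = pk_mean_wait(A, s, 2.0)   # exponential service: form factor eps = 2
W_md1 = pk_mean_wait(A, s, 1.0)   # constant service:    form factor eps = 1
print(W_mm1, W_md1)
```

The constant service time halves the mean waiting time, in agreement with the remark about regularity above.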

10.3.1  Derivation of Pollaczek-Khintchine's formula

We consider the queueing system M/G/1, and we wish to find the mean waiting time for an arbitrary customer. It is independent of the queueing discipline, and therefore we may in the following assume FCFS. Due to the Poisson arrival process (PASTA property) the actual waiting time of a customer is equal to the virtual waiting time. The mean waiting time W for an arbitrary customer can be split up into two parts:

1. The average time it takes for a customer under service to be completed. When the new customer we consider arrives at a random point of time, the mean residual service time is given by (2.34):

  m_{1,r} = ε s / 2 ,

where s and ε have the same meaning as in (10.2). When the arrival process is a Poisson process, the probability of finding a customer being served is equal to A, because for a single server system we always have p(0) = 1 − A (offered traffic = carried traffic).

The contribution to the mean waiting time from a customer under service therefore becomes:

  V = (1−A) · 0 + A · (ε s / 2) = λ m_2 / 2 .

2. The waiting time due to the customers already waiting in the queue (FCFS). On the average the queue length is L. By Little's theorem we have

  L = λ W ,

where L is the average number of customers in the queue at an arbitrary point of time, λ is the arrival intensity, and W is the mean waiting time we are looking for. For every customer in the queue we must on the average wait s time units. The mean waiting time due to the customers in the queue therefore becomes:

  L · s = λ W s = A W .

We thus have for the total waiting time:

  W = V + A W ,

  W = V / (1−A) = A ε s / (2 (1−A)) ,

which is Pollaczek-Khintchine's formula (10.2). W is the mean waiting time for all customers, whereas the mean waiting time for delayed customers, w, becomes (D = A = the probability of delay) (2.28):

  w = W / D = ε s / (2 (1−A)) .     (10.5)

The above derivation is correct, since the time average is equal to the call average when the arrival process is a Poisson process (PASTA property). It is interesting because it shows how ε enters into the formula.

10.3.2  Busy period for M/G/1

A busy period of a queueing system is the time interval from the instant all servers become busy until a server becomes idle again. For M/G/1 it is easy to calculate the mean value of a busy period. At the instant the queueing system becomes empty, it has lost its memory due to the Poisson arrival process. These instants are regeneration points (equilibrium points), and the next event occurs according to a Poisson process with intensity λ. We need only consider one cycle from the instant the server changes state from idle to busy until the next time it changes state from idle to busy. This cycle includes a busy period of duration T1 and an idle period of duration T0. Fig. 10.2 shows an example with constant service time.

Figure 10.2: Example of a sequence of events for the system M/D/1 with busy period T1 and idle period T0.

The proportion of time the system is busy then becomes:

  m_{T1} / m_{T0+T1} = m_{T1} / (m_{T0} + m_{T1}) = A = λ s .

From m_{T0} = 1/λ we get:

  m_{T1} = s / (1−A) .     (10.6)

During a busy period at least one customer is served.

10.3.3  Moments of M/G/1 waiting time distribution

If we only consider customers which are delayed, we are able to find the moments of the waiting time distribution for the classical queueing disciplines (Abate & Whitt, 1997 [1]).

FCFS: Denoting the i'th moment of the service time distribution by m_i, we can find the k'th moment of the waiting time distribution by the following recursion formula, where the mean service time is chosen as time unit (m_1 = s = 1):

  m_{k,F} = (A / (1−A)) · Σ_{j=1}^{k} (k choose j) · (m_{j+1} / (j+1)) · m_{k−j,F} ,   m_{0,F} = 1 .     (10.7)

LCFS: From the above moments m_{k,F} of the FCFS waiting time distribution we can find the moments m_{k,L} of the LCFS waiting time distribution. The first three moments become:

  m_{1,L} = m_{1,F} ,   m_{2,L} = m_{2,F} / (1−A) ,   m_{3,L} = (m_{3,F} + 3 m_{1,F} m_{2,F}) / (1−A)^2 .     (10.8)
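The recursion (10.7) is straightforward to program. In the sketch below (names are our own) we use exponential service times, where m_j = j!, and A = 0.5; the first two moments come out as A/(1−A) = 1 and 2A/(1−A)^2 = 4, the familiar M/M/1 FCFS waiting time moments, and (10.8) then gives the LCFS second moment:

```python
import math
from math import comb

def waiting_moments(A, m_service, kmax):
    # Recursion (10.7); time unit = mean service time, so m_service[1] = 1.
    # m_service[j] is the j'th moment of the service time distribution.
    mF = [1.0]                                    # m_{0,F} = 1
    for k in range(1, kmax + 1):
        mF.append(A / (1.0 - A) *
                  sum(comb(k, j) * m_service[j + 1] / (j + 1) * mF[k - j]
                      for j in range(1, k + 1)))
    return mF

A = 0.5
m_exp = [math.factorial(j) for j in range(5)]     # exponential: m_j = j!
mF = waiting_moments(A, m_exp, 3)                 # FCFS moments by (10.7)
m2L = mF[2] / (1.0 - A)                           # LCFS second moment, (10.8)
print(mF[1:], m2L)
```

For other service time distributions only the list of service time moments needs to be changed.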

10.3.4  Limited queue length: M/G/1/k

In real systems the queue length, for example the size of a buffer, is always finite. Arriving customers are blocked when the buffer is full. In the Internet, for example, this strategy is applied in routers and is called the drop-tail strategy. There exists a simple relation between the state probabilities p(i) (i = 0, 1, 2, . . .) of the infinite system M/G/1 and the state probabilities p_k(i) (i = 0, 1, 2, . . . , k) of M/G/1/k, where the total number of positions for customers is k, including the customer being served (Keilson, 1966 [67]):

  p_k(i) = p(i) / (1 − A Q_k) ,   i = 0, 1, . . . , k−1 ,     (10.9)

  p_k(k) = (1−A) Q_k / (1 − A Q_k) ,     (10.10)

where A < 1 is the offered traffic, and:

  Q_k = Σ_{j=k}^{∞} p(j) .     (10.11)

There exist algorithms for calculating p(i) for arbitrary holding time distributions (M/G/1) based on imbedded Markov chain analysis (Kendall, 1953 [70]), where the same approach is used for GI/M/1.

We notice that p(i) only exists for A < 1, but for a finite buffer we also obtain statistical equilibrium for A > 1. In that case we cannot use the approach described in this section. For M/M/1/k we can use the finite state transition diagram, and for M/D/1/k we describe a simple approach in Sec. 10.4.8, which is applicable for general holding time distributions.
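Keilson's relation is easily checked against a case where everything is known in closed form. For M/M/1 we have p(i) = (1−A)A^i, and (10.9)-(10.11) must then reproduce the exact M/M/1/k probabilities (1−A)A^i/(1−A^{k+1}). A Python sketch (names are our own):

```python
def mg1k_probs(p_inf, k, A):
    # Keilson (10.9)-(10.11): finite-buffer state probabilities from the
    # infinite-buffer ones; p_inf must cover at least states 0..k-1.
    Qk = 1.0 - sum(p_inf[:k])                    # Q_k = sum_{j>=k} p(j)
    pk = [p_inf[i] / (1.0 - A * Qk) for i in range(k)]   # (10.9)
    pk.append((1.0 - A) * Qk / (1.0 - A * Qk))           # (10.10)
    return pk

A, k = 0.75, 5
p_inf = [(1.0 - A) * A**i for i in range(k)]     # M/M/1: p(i) = (1-A)A^i
pk = mg1k_probs(p_inf, k, A)
exact = [(1.0 - A) * A**i / (1.0 - A**(k + 1)) for i in range(k + 1)]
print([round(x, 6) for x in pk])
```

For a general M/G/1 queue the list p_inf would instead come from an imbedded Markov chain calculation as mentioned above.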

10.4  Queueing systems with constant holding times

In this section we focus on the queueing system M/D/n, FCFS. Systems with constant service times have the particular property that the customers leave the servers in the same order in which they are accepted for service.

10.4.1  Historical remarks on M/D/n

The first paper at all on queueing theory was published by Erlang (1909 [29]). He dealt with a system with Poisson arrival process and constant service times. Intuitively, one would think that it is easier to deal with constant service times than with exponentially distributed service times, but this is definitely not the case. The exponential distribution is easy to deal with due to its lack of memory: the remaining lifetime has the same distribution as the total lifetime (Sec. 2.1.1), and therefore we can forget about the epoch (point of time) when the service time starts. Constant holding times require that we remember the exact starting time.

Erlang was the first to analyse M/D/n, FCFS (Brockmeyer & al., 1948 [12]):

  Erlang 1909: n = 1,
  Erlang 1917: n = 1, 2, 3 (errors for n > 1),
  Erlang 1920: n arbitrary (without proof, explicit solutions for n = 1, 2, 3).

Erlang derived the waiting time distribution, but did not consider the state probabilities. Fry (1928 [35]) also dealt with M/D/1 and derived the state probabilities (Fry's equations of state) by using Erlang's principle of statistical equilibrium, whereas Erlang himself applied more theoretical methods based on generating functions.

Crommelin (1932 [21], 1934 [22]), a British telephone engineer, presented a general solution to M/D/n. He generalized Fry's equations of state to an arbitrary n and derived the waiting time distribution, now named Crommelin's distribution.

Pollaczek (1930-34) presented a very general time-dependent solution for arbitrary service time distributions. Under the assumption of statistical equilibrium he was able to obtain explicit solutions for exponentially distributed and constant service times. Also Khintchine (1932 [71]) dealt with M/D/n and derived the waiting time distribution.

10.4.2  State probabilities of M/D/1

Under the assumption of statistical equilibrium we now derive the state probabilities for M/D/1 in a simple way. The arrival intensity is denoted by λ and the constant holding time by h. As we consider a pure waiting time system with a single server, we have:

  Offered traffic = Carried traffic = λ h < 1 ,

i.e.

  A = Y = λ h = 1 − p(0) ,     (10.12)

as in every state except state zero the carried traffic is equal to one erlang. To study this system, we consider two epochs (points of time) t and t + h at a distance of h. Every customer being served at epoch t (at most one) has left the server at epoch t + h. Customers arriving during the interval (t, t + h) are still in the system at epoch t + h (waiting or being served). The arrival process is a Poisson process, hence we have a Poisson distributed number of arrivals in the time interval (t, t + h) of duration h:

  p(j, h) = p{j calls within h} = ((λh)^j / j!) · e^{−λh} ,   j = 0, 1, 2, . . . .     (10.13)

The probability of being in a given state at epoch t + h is obtained from the state at epoch t by taking account of all arrivals and departures during (t, t + h). By looking at these epochs we obtain a Markov chain embedded in the original traffic process (Fig. 10.3). We obtain Fry's equations of state for n = 1 (Fry, 1928 [35]):

  p_{t+h}(i) = {p_t(0) + p_t(1)} · p(i, h) + Σ_{j=2}^{i+1} p_t(j) · p(i−j+1, h) ,   i = 0, 1, . . . .     (10.14)

Above (10.12) we found:

  p(0) = 1 − A ,

and under the assumption of statistical equilibrium, p_t(i) = p_{t+h}(i), we find by successively letting i = 0, 1, . . . :

  p(1) = (1−A) · (e^A − 1) ,

  p(2) = (1−A) · (e^{2A} − e^A (1+A)) ,

Figure 10.3: Illustration of Fry's equations of state for the queueing system M/D/1.

and in general:
  p(i) = (1−A) · Σ_{j=1}^{i} (−1)^{i−j} · e^{jA} · [ (jA)^{i−j}/(i−j)! + (jA)^{i−j−1}/(i−j−1)! ] ,   i = 2, 3, . . .     (10.15)

The second term inside the brackets is omitted for j = i, as (−1)! is infinite; the last term of the sum, corresponding to j = i, thus always equals e^{iA}. In principle p(0) can also be obtained by requiring that all state probabilities add to one, but this is not necessary in this case, where we already know p(0).
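For moderate i the explicit solution (10.15) can be evaluated directly. A small sketch (names are our own) which also reproduces the closed forms for p(1) and p(2) above:

```python
import math

def md1_state_prob(i, A):
    # M/D/1 state probabilities: p(0) = 1 - A, p(1) from (10.12)-(10.14),
    # and p(i) for i >= 2 by the explicit formula (10.15).
    if i == 0:
        return 1.0 - A
    if i == 1:
        return (1.0 - A) * (math.exp(A) - 1.0)
    s = 0.0
    for j in range(1, i + 1):
        term = (j * A) ** (i - j) / math.factorial(i - j)
        if j < i:                          # second term is omitted for j = i
            term += (j * A) ** (i - j - 1) / math.factorial(i - j - 1)
        s += (-1.0) ** (i - j) * math.exp(j * A) * term
    return (1.0 - A) * s

A = 0.5
p = [md1_state_prob(i, A) for i in range(15)]
print(round(p[1], 6), round(p[2], 6), round(sum(p), 6))
```

Because the sum in (10.15) is alternating, very large i eventually lose accuracy; for such cases the recursive scheme of Sec. 10.4.4 is preferable.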

10.4.3  Mean waiting times and busy period of M/D/1

For a Poisson arrival process the probability of delay D is equal to the probability of not being in state zero (PASTA property):

  D = A = 1 − p(0) .     (10.16)

W denotes the mean waiting time for all customers, and w denotes the mean waiting time for customers experiencing a positive waiting time. We have for any queueing system (2.28):

  w = W / D .     (10.17)

W and w are easily obtained by using Pollaczek-Khintchine's formula (10.2):

  W = A h / (2 (1−A)) ,     (10.18)

  w = h / (2 (1−A)) .     (10.19)


The mean value of a busy period was obtained for M/G/1 in (10.6) and is illustrated for constant service times in Fig. 10.2:

  m_{T1} = h / (1−A) .     (10.20)

The mean waiting time for delayed customers is thus half the busy period. It looks as if customers arrive at random during the busy period, but we know that no customers arrive during the last service time of a busy period. The distribution of the number of customers arriving during a busy period can be shown to be given by a Borel distribution:

  B(i) = ((iA)^{i−1} / i!) · e^{−iA} ,   i = 1, 2, . . .     (10.21)
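As a sanity check of (10.21), a sketch computed in logarithmic space to avoid overflow of (iA)^{i−1} and i!: the Borel probabilities must sum to one, and their mean, the mean number of customers served per busy period, must equal m_{T1}/h = 1/(1−A):

```python
import math

def borel_pmf(i, A):
    # (10.21): B(i) = (iA)^(i-1) / i! * exp(-iA), evaluated in log space
    return math.exp((i - 1) * math.log(i * A) - math.lgamma(i + 1) - i * A)

A = 0.5
probs = [borel_pmf(i, A) for i in range(1, 400)]
total = sum(probs)
mean = sum(i * b for i, b in zip(range(1, 400), probs))
print(round(total, 6), round(mean, 6))   # mean customers per busy period
```

For A = 0.5 the mean is 1/(1−A) = 2 customers per busy period, consistent with (10.20) since each customer holds the server for h time units.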

10.4.4  Waiting time distribution: M/D/1, FCFS

The waiting time distribution can be shown to be:

  p{W ≤ t} = 1 − (1−λ) · Σ_{j=1}^{∞} ({λ(j−τ)}^{T+j} / (T+j)!) · e^{−λ(j−τ)} ,     (10.22)

where h = 1 is chosen as time unit, t = T + τ, T is an integer, and 0 ≤ τ < 1. The graph of the waiting time distribution has an irregularity every time the waiting time exceeds an integral multiple of the constant holding time. An example is shown in Fig. 10.4.

Formula (10.22) is not suitable for numerical evaluation. It can be shown (Iversen, 1982 [44]) that the waiting time distribution can be written in a closed form, as given by Erlang in 1909:

  p{W ≤ t} = (1−λ) · Σ_{j=0}^{T} ({λ(j−t)}^j / j!) · e^{−λ(j−t)} ,     (10.23)

which is fit for numerical evaluation for small waiting times. For larger waiting times we are usually only interested in integral values of t. It can be shown (Iversen, 1982 [44]) that for an integral value of t we have:

  p{W ≤ t} = p(0) + p(1) + · · · + p(t) .     (10.24)

The state probabilities p(i) are calculated accurately by using a recursive formula based on Fry's equations of state (10.14):

  p(i+1) = (1 / p(0, h)) · [ p(i) − {p(0) + p(1)} · p(i, h) − Σ_{j=2}^{i} p(j) · p(i−j+1, h) ] .     (10.25)
Figure 10.4: The complementary waiting time distribution P(W > t) for all customers in the queueing systems M/M/1 and M/D/1 with ordered queue (FCFS) and A = 0.5. Time unit = mean service time. We notice that the mean waiting time for M/D/1 is only half of that for M/M/1.

For non-integral waiting times we are able to express the waiting time distribution in terms of integral waiting times. If we let h = 1, then by a binomial expansion (10.23) may be written in powers of τ, where

  t = T + τ ,   T integer, 0 ≤ τ < 1 .

We find:

  p{W ≤ T + τ} = e^{λτ} · Σ_{j=0}^{T} ((−λτ)^j / j!) · p{W ≤ T − j} ,     (10.26)

where p{W ≤ T − j} is given by (10.24). The numerical evaluation is very accurate when using (10.24), (10.25) and (10.26).
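The scheme (10.24)-(10.26) is straightforward to program. The sketch below (names are our own) computes the state probabilities by the recursion (10.25) and cross-checks the integral-value result (10.24) against Erlang's closed form (10.23) for A = 0.5:

```python
import math

lam = A = 0.5                        # time unit h = 1, so lam = A

def poi(j):                          # p(j, h): Poisson arrivals in h, (10.13)
    return A**j / math.factorial(j) * math.exp(-A)

p = [1.0 - A]                        # p(0), (10.12)
p.append(p[0] * (math.exp(A) - 1.0)) # p(1)
for i in range(1, 15):               # recursion (10.25)
    tail = sum(p[j] * poi(i - j + 1) for j in range(2, i + 1))
    p.append((p[i] - (p[0] + p[1]) * poi(i) - tail) / poi(0))

def W_cdf(t):                        # (10.24), integral t
    return sum(p[: t + 1])

def W_cdf_erlang(t):                 # Erlang's closed form (10.23)
    T = int(t)
    return (1.0 - lam) * sum((lam * (j - t))**j / math.factorial(j)
                             * math.exp(-lam * (j - t)) for j in range(T + 1))

def W_cdf_frac(T, tau):              # (10.26), t = T + tau, 0 <= tau < 1
    return math.exp(lam * tau) * sum((-lam * tau)**j / math.factorial(j)
                                     * W_cdf(T - j) for j in range(T + 1))

print(W_cdf(3), W_cdf_erlang(3.0), W_cdf_frac(3, 0.4))
```

Note that (10.25) amplifies rounding errors by roughly a factor e^A per step, so for very long queues the number of recursion steps should be kept within what the tail probabilities can support.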


10.4.5  State probabilities: M/D/n

When setting up Fry's equations of state (10.14) we obtain more combinations:

  p_{t+h}(i) = Σ_{j=0}^{n} p_t(j) · p(i, h) + Σ_{j=n+1}^{n+i} p_t(j) · p(n+i−j, h) .     (10.27)

On the assumption of statistical equilibrium (A < n) we can leave the absolute points of time out of account:

  p(i) = Σ_{j=0}^{n} p(j) · p(i, h) + Σ_{j=n+1}^{n+i} p(j) · p(n+i−j, h) ,   i = 0, 1, . . .     (10.28)

The system of equations (10.28) can only be solved directly by substitution if we know the first n state probabilities {p(0), p(1), . . . , p(n−1)}. In practice we may obtain numerical values by guessing an approximate set of values for {p(0), p(1), . . . , p(n−1)}, substituting these values into the recursion formula (10.28), and obtaining new values. After a few approximations we obtain the exact values. The explicit mathematical solution is obtained by means of generating functions (The Erlang book, [12] pp. 75-83).
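The substitution scheme can be programmed directly. The sketch below (names are our own) iterates (10.28) on a truncated state space, starting from a uniform guess and renormalizing each sweep; as checks, the n = 1 case must reproduce p(0) = 1 − A, and the mean number of busy servers must equal the offered traffic A:

```python
import math

def mdn_state_probs(n, A, N=60, sweeps=400):
    # Fixed-point iteration of (10.28) on states 0..N; p(j, h) is Poisson
    # with mean A = lam*h. Truncation at N must leave a negligible tail.
    poi = [A**j / math.factorial(j) * math.exp(-A) for j in range(N + 1)]
    p = [1.0 / (N + 1)] * (N + 1)              # arbitrary starting guess
    for _ in range(sweeps):
        new = [sum(p[j] for j in range(n + 1)) * poi[i] +
               sum(p[j] * poi[n + i - j]
                   for j in range(n + 1, min(n + i, N) + 1))
               for i in range(N + 1)]
        s = sum(new)
        p = [x / s for x in new]               # renormalize each sweep
    return p

p2 = mdn_state_probs(2, 1.0)                   # M/D/2 with A = 1 erlang
busy = sum(min(i, 2) * q for i, q in enumerate(p2))
p1 = mdn_state_probs(1, 0.5)                   # n = 1 check against Sec. 10.4.2
print(round(p2[0], 6), round(busy, 6), round(p1[0], 6))
```

The iteration is simply the power method applied to the embedded Markov chain, so it converges geometrically to the stationary distribution.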

10.4.6 Waiting time distribution: M/D/n, FCFS

The waiting time distribution is given by Crommelin's distribution:

    p\{W \le t\} = 1 - \sum_{i=0}^{n-1} \sum_{k=0}^{i} p(k) \sum_{j=1}^{\infty} \frac{\{A(j-τ)\}^{(T+j+1)n-1-i}}{\{(T+j+1)n-1-i\}!}\; e^{-A(j-τ)} ,        (10.29)

where A is the offered traffic and

    t = T\,h + τ ,    0 < τ ≤ h .        (10.30)

Formula (10.29) can be written in a closed form in analogy with (10.23):

    p\{W \le t\} = \sum_{i=0}^{n-1} \sum_{k=0}^{i} \sum_{j=0}^{T} p(k)\; \frac{\{A(j-t)\}^{jn+n-1-i}}{\{jn+n-1-i\}!}\; e^{-A(j-t)} .        (10.31)

For integral values of the waiting time t we have:

    p\{W \le t\} = \sum_{j=0}^{n(t+1)-1} p(j) .        (10.32)

10.4. QUEUEING SYSTEMS WITH CONSTANT HOLDING TIMES


For non-integral waiting times t = T + τ, T integer, 0 ≤ τ < 1, we are able to express the waiting time distribution in terms of integral waiting times as for M/D/1:

    p\{W \le t\} = p\{W \le T+τ\} = e^{λτ} \sum_{j=0}^{k} \frac{(-λτ)^j}{j!} \sum_{i=0}^{k-j} p(i) ,        (10.33)

where k = n(T+1)−1 and p(i) is the state probability (10.28).

The exact mean waiting time of all customers W is difficult to derive. An approximation was given by Molina:

    W \approx E_{2,n}(A) \cdot \frac{h}{n-A} \cdot \frac{n}{n+1} \cdot \frac{1-(A/n)^{n+1}}{1-(A/n)^{n}} .        (10.34)

For any queueing system with an infinite queue we have (2.28):

    w = \frac{W}{D} ,

where for all values of n:

    D = 1 - \sum_{j=0}^{n-1} p(j) .

10.4.7 Erlang-k arrival process: Ek/D/r

Let us consider a queueing system with n = r·k servers (r, k integers), a general arrival process GI, constant service time, and ordered (FCFS) queueing discipline. Customers arriving during idle periods choose servers in cyclic order 1, 2, ..., n−1, n, 1, 2, ... Then a certain server will serve just every n'th customer, as the customers, due to the constant service time, depart from the servers in the same order as they arrive at the servers. No customer can overtake another customer. A group of r servers made up of the servers

    x,\; x+k,\; x+2k,\; \dots,\; x+(r-1)\,k ,    0 < x ≤ k ,        (10.35)

will serve just every k'th customer. If we consider the servers (10.35) as a single group, they are equivalent to the queueing system GI^{k*}/D/r, where the arrival process GI^{k*} is the inter-arrival time distribution convolved with itself k times. The same goes for the other k−1 systems. The traffic in these k systems is mutually correlated, but if we only consider one system at a time, then it is a GI^{k*}/D/r, FCFS queueing system.


The assumption about cyclic hunting of the servers is not necessary within the individual systems (10.35). State probabilities and mean waiting times are independent of the queueing discipline, which is of importance for the waiting time distribution only. If we let the arrival process GI be a Poisson process, then GI^{k*} becomes an Erlang-k arrival process. We thus find that the following systems are equivalent with respect to the waiting time distribution:

    M/D/r·k, FCFS  ≡  Ek/D/r, FCFS .

Ek/D/r may therefore be dealt with by tables for M/D/n.


Example 10.4.1: Regular arrival processes
In general we know that for a given traffic per server the mean waiting time decreases when the number of servers increases (economy of scale, convexity). For the same reason the mean waiting time decreases when the arrival process becomes more regular. This is seen directly from the above decomposition, where the arrival process for Ek/D/r becomes more regular for increasing k (r constant). For A = 0.9 erlang per server (L = mean queue length) we find:

    E4/E1/2:  L = 4.5174 ,
    E4/E2/2:  L = 2.6607 ,
    E4/E3/2:  L = 2.0493 ,
    E4/D/2:   L = 0.8100 .                                                □

10.4.8 Finite queue system: M/D/1/k

In real systems we always have a finite queue. In computer systems the size of the storage is finite, and in ATM systems we have finite buffers. The same goes for waiting positions in FMS (Flexible Manufacturing Systems).

As mentioned in Sec. 10.3.4, the state probabilities p_k(i) of the finite buffer system are obtained from the state probabilities p(i) of the infinite buffer system by using (10.9) & (10.10). Integral waiting times are obtained from the state probabilities, and non-integral waiting times from integral waiting times as shown above (Sec. 10.4.4). For the infinite buffer system the state probabilities only exist when the offered traffic is less than the capacity (A < n). For a finite buffer system the state probabilities also exist for A > n, but then we cannot obtain them by the above-mentioned method.

For M/D/1/k the finite buffer state probabilities p_k(i) can be obtained for any offered traffic in the following way. In a system with one server and (k−1) queueing positions we have (k+1) states (0, 1, ..., k). Fry's balance equations (10.14) apply for the state probabilities p_k(i), i = 0, 1, ..., k−2,


yielding k−1 linear equations between the states {p_k(0), p_k(1), ..., p_k(k−1)}. But it is not possible to write down simple time-independent equations for states k−1 and k. However, the first (k−1) equations (10.14), together with the normalization requirement

    \sum_{j=0}^{k} p_k(j) = 1        (10.36)

and the fact that the offered traffic equals the carried traffic plus the rejected traffic (PASTA property):

    A = \{1 - p_k(0)\} + A\; p_k(k) ,        (10.37)

result in (k+1) independent linear equations, which are easy to solve numerically. The two approaches yield of course the same result. The first method is only valid for A < 1, whereas the second is valid for any offered traffic.
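The traffic balance (10.37) can be illustrated by a small event-driven simulation of M/D/1/k; this is a sketch with names of our own choosing, where k counts all positions including the server, the service time is 1, and the arrival rate is A:

```python
import random

def simulate_md1k(A, k, n_arr=100_000, seed=42):
    """Event-driven simulation of M/D/1/k: Poisson arrivals of rate A,
    constant service time 1, at most k customers in the system.
    Returns (time-average probability of an empty system,
             fraction of arrivals blocked)."""
    rng = random.Random(seed)
    t, n = 0.0, 0                                 # clock, number in system
    next_arr, next_dep = rng.expovariate(A), float('inf')
    empty_time, blocked, arrivals = 0.0, 0, 0
    while arrivals < n_arr:
        t_next = min(next_arr, next_dep)
        if n == 0:                                # n is constant between events
            empty_time += t_next - t
        t = t_next
        if next_arr <= next_dep:                  # arrival epoch
            arrivals += 1
            if n == k:
                blocked += 1                      # PASTA: arrival sees state k
            else:
                n += 1
                if n == 1:
                    next_dep = t + 1.0            # server starts a unit job
            next_arr = t + rng.expovariate(A)
        else:                                     # departure epoch
            n -= 1
            next_dep = t + 1.0 if n > 0 else float('inf')
    return empty_time / t, blocked / arrivals
```

With the estimates p_k(0) and p_k(k) from the simulation, the two sides of (10.37) agree within statistical accuracy, e.g. for A = 0.8 and k = 4.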

Example 10.4.2: Leaky Bucket
Leaky Bucket is a mechanism for control of the cell (packet) arrival process from a user (source) in an ATM system. The mechanism corresponds to a queueing system with constant service time (cell size) and a finite buffer. If the arrival process is a Poisson process, then we have an M/D/1/k system. The size of the leak corresponds to the long-term average acceptable arrival intensity, whereas the size of the bucket describes the excess (burst) allowed. The mechanism operates as a virtual queueing system, where the cells either are accepted immediately or are rejected according to the value of a counter, which is the integral value of the load function (Fig. 10.1). In a contract between the user and the network an agreement is made on the size of the leak and the size of the bucket. On this basis the network is able to guarantee a certain grade-of-service.  □

10.5 Single server queueing system: GI/G/1

In Sec. 10.3 we showed that the mean waiting time for all customers in the queueing system M/G/1 is given by the Pollaczek-Khintchine formula:

    W = \frac{A\,s}{2(1-A)}\; ε        (10.38)

where ε is the form factor of the holding time distribution. We have earlier analyzed the following cases:

M/M/1 (Sec. 9.2.4), ε = 2:

    W = \frac{A\,s}{1-A} ,    Erlang 1917.        (10.39)

M/D/1 (Sec. 10.4.3), ε = 1:

    W = \frac{A\,s}{2(1-A)} ,    Erlang 1909.        (10.40)

It shows that the more regular the holding time distribution, the smaller the waiting time traffic becomes. (For loss systems with limited accessibility it is the opposite: the bigger the form factor, the less congestion.) In systems with non-Poisson arrivals, moments of higher order will also influence the mean waiting time.

10.5.1 General results

We have till now assumed that the arrival process is a Poisson process. For other arrival processes it is seldom possible to find an exact expression for the mean waiting time, except in the case where the holding times are exponentially distributed. In general we may require that either the arrival process or the service process is Markovian. Till now there are no general accurate formulae for e.g. M/G/n.

For GI/G/1 it is possible to give theoretical upper limits for the mean waiting time. Denoting the variance of the inter-arrival times by v_a and the variance of the holding time distribution by v_d, Kingman's inequality (1961) gives an upper limit for the mean waiting time:

    GI/G/1:    W \le \frac{A\,s}{2(1-A)} \cdot \frac{v_a+v_d}{s^2} .        (10.41)

This formula shows that it is the stochastic variations that result in waiting times. Formula (10.41) gives the upper theoretical boundary. A realistic estimate of the actual mean waiting time is obtained by Marchal's approximation (Marchal, 1976 [86]):

    W \approx \frac{A\,s}{2(1-A)} \cdot \frac{v_a+v_d}{s^2} \cdot \frac{s^2+v_d}{a^2+v_d} ,        (10.42)

where a is the mean inter-arrival time (A = s/a). The approximation is a scaling of Kingman's inequality so that it agrees with the Pollaczek-Khintchine formula for the case M/G/1.
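Marchal's approximation (10.42) translates into a one-line function (a sketch; the name is ours). For Poisson arrivals we have v_a = a², and the last factor then cancels the second one down to the form factor ε, so the Pollaczek-Khintchine values are reproduced exactly:

```python
def marchal_wait(a, va, s, vd):
    """Marchal's approximation (10.42) of the GI/G/1 mean waiting time.
    a, va: mean and variance of the inter-arrival times;
    s, vd: mean and variance of the holding times. Requires A = s/a < 1."""
    A = s / a
    return (A * s / (2.0 * (1.0 - A))) * ((va + vd) / s**2) \
           * ((s**2 + vd) / (a**2 + vd))
```

For M/M/1 with s = 1 and A = 0.5 (a = 2, v_a = 4, v_d = 1) it gives W = 1 = As/(1−A), and for M/D/1 (v_d = 0) the exact W = As/(2(1−A)) = 0.5.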

10.5.2 State probabilities: GI/M/1

As an example of a non-Poisson arrival process we shall analyse the queueing system GI/M/1, where the inter-arrival times follow a general distribution given by the density function f(t). Service times are exponentially distributed with rate μ.


If the system is considered at an arbitrary point of time, then the state probabilities will not be described by a Markov process, because the probability of an arrival depends on the time interval since the last arrival. The PASTA property is not valid. However, if the system is considered immediately before (or after) an arrival epoch, then there will be independence in the traffic process, since the inter-arrival times are stochastically independent and the holding times are exponentially distributed. The arrival epochs are equilibrium points (regeneration points, Sec. 3.2.2), and we consider the so-called embedded Markov chain.

The probability that we immediately before an arrival epoch observe the system in state j is denoted by π(j). In statistical equilibrium it can be shown that we have the following result (D.G. Kendall, 1953 [70]):

    π(i) = (1-α)\,α^i ,    i = 0, 1, 2, \dots        (10.43)

where α is the positive real root satisfying the equation:

    α = \int_0^{\infty} e^{-μ(1-α)t}\; f(t)\; dt .        (10.44)

The steady state probabilities can be obtained by considering two successive arrival epochs t1 and t2 (similar to Fry's state equations, Sec. 10.4.5). As the departure process is a Poisson process with the constant intensity μ when there are customers in the system, the probability p(j) that j customers complete service between two arrival epochs can be expressed by the number of events in a Poisson process during a stochastic interval (the inter-arrival time). We can set up the following state equations:

    π_{t2}(0) = \sum_{j=0}^{\infty} π_{t1}(j) \left\{ 1 - \sum_{i=0}^{j} p(i) \right\} ,

    π_{t2}(1) = \sum_{j=0}^{\infty} π_{t1}(j)\; p(j) ,        (10.45)
      ...
    π_{t2}(i) = \sum_{j=i-1}^{\infty} π_{t1}(j)\; p(j-i+1) .

The normalization condition is as usual:

    \sum_{i=0}^{\infty} π_{t1}(i) = \sum_{j=0}^{\infty} π_{t2}(j) = 1 .        (10.46)


It can be shown that the above-mentioned geometric distribution is the only solution to this system of equations (Kendall, 1953 [70]). In principle, the queueing system GI/M/n can be solved in the same way. The state probabilities p(j) become more complicated, since the departure rate depends on the number of busy channels. Notice that π(i) is not the probability of finding the system in state i at an arbitrary point of time (time average), but the probability of finding the system in state i immediately before an arrival (call average).

10.5.3 Characteristics of GI/M/1

The probability of immediate service becomes:

    p\{\text{immediate}\} = π(0) = 1 - α .        (10.47)

The corresponding probability of being delayed becomes:

    D = p\{\text{delay}\} = α .        (10.48)

The average number of busy servers at a random point of time (time average) is equal to the carried traffic (= the offered traffic, A < 1).

The average number of waiting customers immediately before the arrival of a customer is obtained via the state probabilities:

    L_1 = \sum_{i=1}^{\infty} (1-α)\,α^i\,(i-1) = \frac{α^2}{1-α} .        (10.49)

The average number of customers in the system before an arrival epoch is:

    L_2 = \sum_{i=0}^{\infty} (1-α)\,α^i\; i = \frac{α}{1-α} .        (10.50)

The average waiting time for all customers then becomes:

    W = \frac{1}{μ} \cdot \frac{α}{1-α} .        (10.51)


The average queue length taken over the whole time axis (the virtual queue length) therefore becomes (Little's theorem):

    L = A\, \frac{α}{1-α} .        (10.52)

The mean waiting time for customers who experience a positive waiting time becomes:

    w = \frac{W}{D} = \frac{1}{μ} \cdot \frac{1}{1-α} .        (10.53)

Example 10.5.1: Mean waiting times GI/M/1
For M/M/1 we find α = α_m = A. For D/M/1 α = α_d is obtained from the equation:

    α_d = e^{-(1-α_d)/A} ,

where α_d must be within (0, 1). It can be shown that 0 < α_d < α_m < 1. Thus the queueing system D/M/1 will always have a smaller mean waiting time than M/M/1. For A = 0.5 erlang we find the following mean waiting times for all customers (10.51):

    M/M/1:  α = 0.5 ,     W = 1 ,       w = 2 .
    D/M/1:  α = 0.2032 ,  W = 0.2550 ,  w = 1.2550 .

where the mean holding time is used as the time unit (μ = 1). The mean waiting time is thus far from proportional to the form factor of the distribution of the inter-arrival times.  □
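For D/M/1 the root α of (10.44) can be found by simple fixed-point iteration; this is a sketch (names are ours) with the mean holding time as time unit, and the iteration converges for the traffic values considered here since the derivative at the root is α/A < 1:

```python
import math

def alpha_dm1(A, iters=200):
    """Solve alpha = exp(-(1 - alpha)/A), the D/M/1 case of (10.44)
    with mu = 1, by fixed-point iteration."""
    alpha = 0.5
    for _ in range(iters):
        alpha = math.exp(-(1.0 - alpha) / A)
    return alpha

alpha = alpha_dm1(0.5)
W = alpha / (1.0 - alpha)        # (10.51) with mu = 1
w = 1.0 / (1.0 - alpha)          # (10.53)
```

For A = 0.5 this reproduces α = 0.2032 and W = 0.2550 from Example 10.5.1.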

10.5.4 Waiting time distribution: GI/M/1, FCFS

When a customer arrives at the queueing system, the number of customers in the system is geometrically distributed, and the customer therefore, under the assumption that he gets a positive waiting time, has to wait a geometrically distributed number of exponential phases. This results in an exponentially distributed waiting time with the parameter given in (10.53), when the queueing discipline is FCFS (Sec. 9.4 and Fig. 2.12).

10.6 Priority queueing systems: M/G/1

The time period a customer is waiting usually means an inconvenience or expense to the customer. By different strategies for organizing the queue, the waiting times can be distributed among the customers according to our preferences.


10.6.1 Combination of several classes of customers

The customers are divided into N classes (traffic streams). Customers of class i are assumed to arrive according to a Poisson process with intensity λ_i [customers per time unit], and the mean service time is s_i [time units]. The offered traffic is A_i = λ_i s_i. The second moment of the service time distribution is denoted by m_{2i}.

Instead of considering the individual arrival processes, we may consider the total arrival process, which also is a Poisson arrival process, with intensity:

    λ = \sum_{i=1}^{N} λ_i .        (10.54)

The resulting service time distribution then becomes a weighted sum of the service time distributions of the individual classes (Sec. 2.3.2: combination in parallel). The total mean service time becomes (2.59):

    s = \sum_{i=1}^{N} \frac{λ_i}{λ}\; s_i ,        (10.55)

and the total second moment is (2.58):

    m_2 = \sum_{i=1}^{N} \frac{λ_i}{λ}\; m_{2i} .        (10.56)

The total offered traffic becomes:

    A = \sum_{i=1}^{N} A_i = \sum_{i=1}^{N} λ_i\, s_i = λ\, s .        (10.57)

The remaining mean service time at a random point of time becomes (10.4):

    V_{1,N} = \frac{1}{2}\,λ\, m_2 = \frac{1}{2}\, A\,\frac{m_2}{s} = \frac{1}{2}\, A\; \frac{\sum_{i=1}^{N} λ_i\, m_{2i}}{\sum_{i=1}^{N} λ_i\, s_i} ,        (10.58)

    V_{1,N} = \sum_{i=1}^{N} \frac{1}{2}\,λ_i\, m_{2i} ,        (10.59)

    V_{1,N} = \sum_{i=1}^{N} V_i ,        (10.60)

where index (1, N) indicates that we include all streams from 1 to N.
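Equations (10.54)–(10.59) translate directly into a few lines (a sketch; names are ours). Using the traffic figures that also appear in Example 10.6.1 below (λ1 = 1, s1 = 0.1 constant so m21 = s1²; λ2 = 0.5, s2 = 1.6 exponential so m22 = 2 s2²) gives V = 1.285 s:

```python
def combine_streams(streams):
    """Parameters of the combined arrival process, eq. (10.54)-(10.59).
    streams: list of (lam_i, s_i, m2_i) = arrival rate, mean service
    time, second moment. Returns (lam, s, m2, V) where V = V_{1,N}."""
    lam = sum(li for li, si, m2i in streams)               # (10.54)
    s = sum(li * si for li, si, m2i in streams) / lam      # (10.55)
    m2 = sum(li * m2i for li, si, m2i in streams) / lam    # (10.56)
    V = 0.5 * lam * m2                                     # (10.58)/(10.59)
    return lam, s, m2, V
```

Note that λ·s = Σ λ_i s_i recovers the total offered traffic A of (10.57).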


10.6.2 Kleinrock's conservation law

We now consider a system with several classes of customers. We assume that the queueing discipline is independent of the service time. This excludes the preemptive-resume queueing discipline, as the probability of preemption increases with the service time. The waiting time is composed of a contribution V from the remaining service time of a customer being served, if any, and a contribution from the customers waiting in the queue. The mean waiting time becomes:

    W = V_{1,N} + \sum_{i=1}^{N} L_i\, s_i .

L_i is the average queue length for customers of type i. By applying Little's law we get:

    W = V_{1,N} + \sum_{i=1}^{N} λ_i\, W_i\, s_i = V_{1,N} + \sum_{i=1}^{N} A_i\, W_i .        (10.61)

We may also combine all customer classes into one and apply the Pollaczek-Khintchine formula to get the same mean waiting time (10.5):

    W = V_{1,N} + A\, W .        (10.62)

Under these general assumptions we get Kleinrock's conservation law (Kleinrock, 1964 [73]):

Theorem 10.2 (Kleinrock's conservation law):

    \sum_{i=1}^{N} A_i\, W_i = A\, W = \frac{A\; V_{1,N}}{1-A} = \text{constant.}        (10.63)

The average waiting time for all classes, weighted by the traffic (load) of the class, is independent of the queueing discipline. For the total traffic process we have the Pollaczek-Khintchine formula. We may thus give a small proportion of the traffic a very low mean waiting time without increasing the average waiting time of the remaining customers very much. By various strategies we may allocate waiting times to individual customers according to our preferences.

10.6.3 Non-preemptive queueing discipline

In the following we look at M/G/1 priority queueing systems, where customers are divided into N priority classes such that a customer with priority p has higher priority than customers with priority p+1. In a non-preemptive system a service in progress is not interrupted.


The customers in class p are assumed to have the mean service time s_p and the arrival intensity λ_p. In Sec. 10.6.1 we derived parameters for the total traffic process. The total average waiting time W_p of a class p customer is made up of the following three contributions:

a) The residual service time V_{1,N} for the customer under service.

b) The waiting time due to the customers in the queue with priority p or higher, which already are in the queues (Little's theorem):

    \sum_{i=1}^{p} s_i\, (λ_i\, W_i) .

c) The waiting time due to customers with higher priority, which overtake the customer we consider while it is waiting:

    \sum_{i=1}^{p-1} s_i\, L_i' = \sum_{i=1}^{p-1} s_i\, λ_i\, W_p .

In total we get:

    W_p = V_{1,N} + \sum_{i=1}^{p} s_i\, λ_i\, W_i + \sum_{i=1}^{p-1} s_i\, λ_i\, W_p .        (10.64)

For customers of class one, highest priority, we get under the assumption of FCFS:

    W_1 = V_{1,N} + L_1\, s_1 = V_{1,N} + A_1\, W_1 ,        (10.65)

    W_1 = \frac{V_{1,N}}{1-A_1} .        (10.66)

V_{1,N} is the residual service time for the customer being served when the customer we consider arrives (10.59):

    V_{1,N} = \sum_{i=1}^{N} \frac{λ_i\, m_{2i}}{2} ,        (10.67)

where m_{2i} is the second moment of the service time distribution of the i'th class.

For class two customers we find (10.64):

    W_2 = V_{1,N} + L_1\, s_1 + L_2\, s_2 + s_1\, λ_1\, W_2 .

Inserting W_1 (10.65), we get:

    W_2 = W_1 + A_2\, W_2 + A_1\, W_2 ,

    W_2 = \frac{W_1}{1-(A_1+A_2)} ,        (10.68)

    W_2 = \frac{V_{1,N}}{\{1-A_1\}\,\{1-(A_1+A_2)\}} .        (10.69)

In general we find (Cobham, 1954 [15]):

    W_p = \frac{V_{1,N}}{\{1-A_{0,p-1}\}\,\{1-A_{0,p}\}} ,        (10.70)

where:

    A_{0,p} = \sum_{i=0}^{p} A_i ,    A_0 = 0 .        (10.71)

The structure of formula (10.70) can be interpreted directly. All customers, no matter which class they belong to, wait until the service in progress is completed {V_{1,N}}. Furthermore, the waiting time is due to already arrived customers having at least the same priority {A_{0,p}}, and to customers with higher priority arriving during the waiting time {A_{0,p-1}}.
Example 10.6.1: SPC-system
We consider a computer which serves two types of customers. The first type has a constant service time of 0.1 second, and the arrival intensity is 1 customer/second. The other type has an exponentially distributed service time with mean value 1.6 second, and the arrival intensity is 0.5 customer/second. The load from the two types of customers is then A1 = 0.1 erlang, respectively A2 = 0.8 erlang. From (10.67) we find:

    V = \frac{1}{2}\,(0.1)^2 + \frac{0.5}{2}\cdot 2\,(1.6)^2 = 1.2850 s .

Without any priority the mean waiting time becomes, by using the Pollaczek-Khintchine formula (10.2):

    W = \frac{1.2850}{1-(0.8+0.1)} = 12.85 s .

By non-preemptive priority we find:

Type one highest priority:

    W_1 = \frac{1.285}{1-0.1} = 1.43 s ,    W_2 = \frac{W_1}{1-(A_1+A_2)} = 14.28 s .

Type two highest priority:

    W_2 = 6.43 s ,    W_1 = 64.25 s .

This shows that we can upgrade type one customers almost without influencing type two. However, the inverse is not the case. The constant in the conservation law (10.63) becomes the same without priority (Pollaczek-Khintchine formula) as with non-preemptive priority:

    0.9 · 12.85 = 0.1 · 1.43 + 0.8 · 14.28 = 0.8 · 6.43 + 0.1 · 64.25 = 11.57 .    □
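The figures of Example 10.6.1 can be reproduced directly from (10.67) and (10.70); this is a sketch with names of our own choosing:

```python
def cobham_wait(classes):
    """Non-preemptive priority M/G/1, eq. (10.70). classes: list of
    (lam_i, s_i, m2_i) in priority order (class 1 = highest priority).
    Returns [W_1, ..., W_N]."""
    V = sum(li * m2i / 2.0 for li, si, m2i in classes)     # (10.67)
    W, cum = [], 0.0
    for li, si, m2i in classes:
        prev, cum = cum, cum + li * si                     # A_{0,p-1}, A_{0,p}
        W.append(V / ((1.0 - prev) * (1.0 - cum)))
    return W
```

With class one (constant 0.1 s) on top this yields W ≈ [1.43, 14.28] s; with the order reversed it yields W ≈ [6.43, 64.25] s, and both orderings give the same traffic-weighted sum, as required by the conservation law (10.63).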

10.6.4 SJF queueing discipline: M/G/1

By the SJF queueing discipline, the shorter the service time of a customer is, the higher is its priority. The SJF discipline results in the lowest possible total waiting time. By introducing an infinite number of priority classes (0, Δt), (Δt, 2Δt), (2Δt, 3Δt), ..., we obtain from formula (10.70) that a customer with the service time t has the mean waiting time W_t (Phipps, 1956):

    W_t = \frac{V_{0,\infty}}{(1-A_{0,t})^2} ,        (10.72)

where A_{0,t} is the load from the customers with service time less than or equal to t. When Δt is small, A_{0,t} ≈ A_{0,t+Δt}.

If these different priority classes have different costs per time unit when they wait, so that class j customers have the mean service time s_j and pay c_j per time unit when they wait, then the optimal strategy (minimum cost) is to assign priorities 1, 2, ... according to increasing ratio s_j/c_j.
Example 10.6.2: M/M/1 with SJF queue discipline
We consider exponentially distributed holding times with mean value 1/μ, which is chosen as time unit (M/M/1). Even though there are few very long service times, they contribute significantly to the total traffic (Fig. 2.3). The contribution to the total traffic A from the customers with service time less than or equal to t is obtained from (2.31) multiplied by A = λ:

    A_{0,t} = A\,\{1 - e^{-t}\,(t+1)\} .

Inserting this in (10.72) we find W_t as illustrated in Fig. 10.5, where the FCFS strategy (same mean waiting time as LCFS and SIRO) is shown for comparison as a function of the actual holding time.


Figure 10.5: The mean waiting time W_t as a function of the actual service time t in an M/M/1 system for the SJF and FCFS disciplines, respectively (offered traffic A = 0.9 erlang; the mean service time is chosen as time unit). Notice that for SJF the minimum average waiting time is 0.9 time units, because an eventual job being served must first be finished. The maximum mean waiting time is 90 time units. In comparison with FCFS, by using SJF 93.6% of the jobs get a shorter mean waiting time. This corresponds to jobs with a service time less than 2.747 mean service times (time units). The offered traffic may be greater than one erlang, but then only the shorter jobs get a finite waiting time.


The mean waiting time for all customers is less for SJF than for FCFS, but this is not obvious from the figure. The mean waiting time for SJF becomes:

    W_{SJF} = \int_0^{\infty} W_t\; f(t)\; dt
            = \int_0^{\infty} \frac{V_{0,\infty}}{(1-A_{0,t})^2}\; f(t)\; dt
            = \int_0^{\infty} \frac{A\, e^{-t}}{\{1-A\,(1-e^{-t}(t+1))\}^2}\; dt ,

which is not elementary to calculate.    □
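The integral is, however, easy to evaluate numerically; a sketch (names are ours) for the M/M/1 case of Example 10.6.2, where V_{0,∞} = A and f(t) = e^{-t}:

```python
import math

def w_sjf_mm1(A, upper=60.0, steps=200_000):
    """Trapezoidal evaluation of the W_SJF integral for M/M/1 with
    mean service time 1 (so V_{0,inf} = A and f(t) = exp(-t))."""
    def g(t):
        return A * math.exp(-t) \
               / (1.0 - A * (1.0 - math.exp(-t) * (t + 1.0)))**2
    h = upper / steps
    total = 0.5 * (g(0.0) + g(upper))
    total += sum(g(i * h) for i in range(1, steps))
    return h * total
```

For A = 0.9 this gives roughly W_SJF ≈ 3.2 time units, well below the FCFS value W = A/(1−A) = 9, even though the longest jobs under SJF wait up to 90 time units on the average.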

10.6.5 M/M/n with non-preemptive priority

We may generalize the above to Erlang's classical waiting time system M/M/n with non-preemptive queueing discipline, when all classes of customers have the same exponentially distributed service time distribution with mean value s = 1/μ. Denoting the arrival intensity for class i by λ_i, we have the mean waiting time W_p for class p:

    W_p = V_{1,N} + \sum_{i=1}^{p} \frac{s}{n}\; L_i + W_p \sum_{i=1}^{p-1} \frac{s}{n}\; λ_i ,

    W_p = E_{2,n}(A)\;\frac{s}{n} + \sum_{i=1}^{p} \frac{s}{n}\; λ_i\, W_i + W_p \sum_{i=1}^{p-1} \frac{s}{n}\; λ_i .

A is the total offered traffic for all classes. The probability E_{2,n}(A) of waiting time is given by Erlang's C-formula, and when all servers are busy, customers complete service with the mean inter-departure time s/n. For the highest priority class p = 1 we find:

    W_1 = E_{2,n}(A)\;\frac{s}{n} + A_1\,\frac{1}{n}\; W_1 ,

    W_1 = E_{2,n}(A)\;\frac{s}{n-A_1} .        (10.73)

For p = 2 we find in a similar way:

    W_2 = E_{2,n}(A)\;\frac{s}{n} + A_1\,\frac{1}{n}\; W_1 + A_2\,\frac{1}{n}\; W_2 + A_1\,\frac{1}{n}\; W_2

        = W_1 + A_2\,\frac{1}{n}\; W_2 + A_1\,\frac{1}{n}\; W_2 ,

    W_2 = \frac{n\, s\; E_{2,n}(A)}{\{n-A_1\}\,\{n-(A_1+A_2)\}} .        (10.74)

In general we find (Cobham, 1954 [15]):

    W_p = \frac{n\, s\; E_{2,n}(A)}{\{n-A_{0,p-1}\}\,\{n-A_{0,p}\}} .        (10.75)
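A sketch of (10.75) (names are ours), with E_{2,n}(A) computed from the standard Erlang-B recursion for numerical stability:

```python
def erlang_c(n, A):
    """Erlang's C-formula E_{2,n}(A) via the Erlang-B recursion
    B_i = A*B_{i-1} / (i + A*B_{i-1}) and C = n*B / (n - A*(1-B))."""
    B = 1.0
    for i in range(1, n + 1):
        B = A * B / (i + A * B)
    return n * B / (n - A * (1.0 - B))

def wait_priority_mmn(n, s, A_list):
    """Mean waiting times (10.75) for M/M/n with non-preemptive
    priorities; A_list = offered traffic per class in priority order."""
    E = erlang_c(n, sum(A_list))
    W, cum = [], 0.0
    for Ap in A_list:
        prev, cum = cum, cum + Ap                # A_{0,p-1}, A_{0,p}
        W.append(n * s * E / ((n - prev) * (n - cum)))
    return W
```

For n = 1 and a single class, (10.75) reduces to Erlang's M/M/1 result W = A·s/(1−A).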

10.6.6 Preemptive-resume queueing discipline

We now assume that a customer being served is interrupted by the arrival of a customer with higher priority. Later on, the service continues from where it was interrupted. This situation is typical for computer systems. For a customer with priority p, the customers with lower priority do not exist. The mean waiting time W_p for a customer in class p consists of two contributions.

a) Waiting time due to customers with higher or same priority, who are already in the queueing system. This is the waiting time experienced by a customer in a system without priority where only the first p classes exist:

    \frac{V_{1,p}}{1-A_{0,p}} ,    where    V_{1,p} = \sum_{i=1}^{p} \frac{λ_i\, m_{2i}}{2}        (10.76)

is the expected remaining service time due to customers with higher or same priority, and A_{0,p} is given by (10.71).

b) Waiting time due to the customers with higher priority who arrive during the waiting time or service time and interrupt the customer considered:

    (W_p + s_p) \sum_{i=1}^{p-1} s_i\, λ_i = (W_p + s_p)\; A_{0,p-1} .

We thus get:

    W_p = \frac{V_{1,p}}{1-A_{0,p}} + (W_p + s_p)\; A_{0,p-1} .

This can be rewritten as follows:

    W_p\,(1-A_{0,p-1}) = \frac{V_{1,p}}{\{1-A_{0,p}\}} + s_p\; A_{0,p-1} ,

resulting in:

    W_p = \frac{V_{1,p}}{(1-A_{0,p-1})\,(1-A_{0,p})} + \frac{A_{0,p-1}}{1-A_{0,p-1}}\; s_p .        (10.77)

For the highest priority customers we get the Pollaczek-Khintchine formula for this class alone, as they are not disturbed by lower priorities (V_{1,1} = V_1):

    W_1 = \frac{V_1}{1-A_1} .        (10.78)

The total response time becomes:

    T_p = W_p + s_p .

In a similar way as in Sec. 10.6.4 we may write down the formula for the average waiting time for the SJF queueing discipline with preemptive resume.

Example 10.6.3: SPC-system (Example 10.6.1 continued)
We now assume the computer system is working with the discipline preemptive-resume and find:

Type one highest priority:

    W_1 = \frac{\frac{1}{2}\,(0.1)^2}{1-0.1} = 0.0056 s ,

    W_2 = \frac{1.2850}{(1-0.1)(1-0.9)} + \frac{0.1}{1-0.1}\cdot 1.6 = 14.46 s .

Type two highest priority:

    W_2 = \frac{\frac{0.5}{2}\cdot 2\,(1.6)^2}{1-0.8} + 0 = 6.40 s ,

    W_1 = \frac{1.2850}{(1-0.8)(1-0.9)} + \frac{0.8}{1-0.8}\cdot 0.1 = 64.65 s .

This shows that by upgrading type one to the highest priority, we can give these customers a very short waiting time without disturbing type two customers, but the inverse is not the case. The conservation law is only valid for preemptive queueing systems if the preempted service times are exponentially distributed. In the case of a general service time distribution (G) a job may be preempted several times, and therefore the remaining service time will not be given by V.    □
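Formula (10.77) reproduces these figures directly; a sketch with names of our own choosing:

```python
def preemptive_resume_wait(classes):
    """Preemptive-resume M/G/1 priorities, eq. (10.77). classes: list
    of (lam_i, s_i, m2_i), class 1 = highest priority. Returns [W_1, ...]."""
    W, V, cum = [], 0.0, 0.0
    for li, si, m2i in classes:
        prev, cum = cum, cum + li * si       # A_{0,p-1}, A_{0,p}
        V += li * m2i / 2.0                  # V_{1,p}, eq. (10.76)
        W.append(V / ((1.0 - prev) * (1.0 - cum))
                 + prev / (1.0 - prev) * si)
    return W
```

With the data of Example 10.6.1 this gives W ≈ [0.0056, 14.46] s when type one has highest priority, and W ≈ [6.40, 64.65] s with the order reversed.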


10.6.7 M/M/n with preemptive-resume priority

For M/M/n the case of preemptive resume is more difficult to deal with. All customers must have the same mean service time. Mean waiting times can be obtained by first considering class one alone (9.15), then considering classes one and two together, which implies the waiting time for class two, etc. The conservation law is valid when all customers have the same exponentially distributed service time.

10.7 Fair Queueing: Round Robin, Processor-Sharing

The Round Robin (RR) queueing model (Fig. 10.6) is a model for a time-sharing computer system, where we want a fast response time for short jobs. This queueing discipline is also called fair queueing because the available resources are equally distributed among the jobs (customers) in the system.
Figure 10.6: Round robin queueing system. New jobs and non-completed jobs (fed back with probability 1−p) enter a FCFS queue in front of the CPU. A task is allocated a time slice s (at most) every time it is served. If the task is not finished during this time slice, it is returned to the FCFS queue, where it waits on equal terms with new tasks. If we let s decrease to zero we obtain the PS (Processor Sharing) queueing discipline.

New jobs are placed in a FCFS queue, where they wait until they obtain service limited to one time slice (slot) s, which is the same for all jobs. If a job is not completed within a time slice, the service is interrupted, and the job is placed at the end of the FCFS queue. This continues until the required total service time is obtained. We assume that the queue is unlimited, and that new jobs arrive according to a Poisson process (arrival rate λ). The service time distribution can be a general distribution with mean value s.

The size of the time slice can vary. If it becomes infinite, all jobs will be completed the first time, and we have an M/G/1 queueing system with FCFS discipline. If we let the time slice


decrease to zero, then we get the PS = Processor-Sharing model, which has a number of important analytical properties. The Processor-Sharing model can be interpreted as a queueing system where all jobs are served continuously by the server (time sharing). If there are x jobs in the system, each of them obtains the fraction 1/x of the capacity of the computer. So there is no real queue, as all jobs are served all the time, eventually at a lower rate. In the next chapter we deal with processor-sharing systems in more detail.

The state transition diagrams are identical for the classical M/M/1 system and for the M/M/1-PS system, and thus the performance measures based on state probabilities are identical for the two systems. When the offered traffic A = λ·s is less than one, the steady state probabilities are given by (9.30):

    p(i) = (1-A)\, A^i ,    i = 0, 1, \dots ,        (10.79)

i.e. a geometric distribution with mean value A/(1−A). The mean sojourn time (average response time = time in system) for jobs with duration t becomes:

    R_t = \frac{t}{1-A} .        (10.80)

If this job was alone in the system, then its holding time would be t. Even if there is no queue, we may talk about an average virtual delay for jobs with duration t:

    W_t = R_t - t = \frac{A}{1-A}\; t .        (10.81)

The corresponding mean values for a random job (mean service time s) become:

    R = \frac{s}{1-A} ,        (10.82)

    W = \frac{A}{1-A}\; s .        (10.83)

This shows that we obtain the same mean values as for M/M/1 (Sec. 9.2.4). But the actual mean waiting time becomes proportional to the duration of the job, which is often a desirable property. We do not assume any knowledge in advance about the duration of the job. The mean waiting time becomes proportional to the mean service time. The proportionality should not be understood in the way that two jobs of the same duration have the same waiting time; it is only valid on the average. In comparison with the results we obtained earlier for M/G/1 (the Pollaczek-Khintchine formula (10.2)) the results may surprise our intuition.

A very useful property of the Processor-Sharing model is that the departure process is a Poisson process like the arrival process, i.e. we have a reversible system. The Processor-Sharing model is very useful for analyzing time-sharing systems and for modeling queueing networks (Chap. 12). In Chap. ?? we study reversible systems in more detail.

Chapter 11

Multi-service queueing systems


In this chapter we consider queueing systems with more than one type of customers. It is analogous to Chap. 7, where we considered loss systems with more types of customers and noticed that the product form was maintained between streams, so that the convolution algorithm could be applied. We are only interested in reversible systems, where the departure process is of the same type as the arrival process, in our case Poisson processes. Then we may combine several queueing systems into a network of queueing systems. In queueing terminology, customers of the same type (class, service, stream) belong to a specific chain, and a queueing system is a node in a queueing network, which will be dealt with in Chap. 12.

In this chapter customers in some way share the available capacity, and therefore they are served all the time, but they may obtain less capacity than requested, resulting in an increased service time. The sojourn time is not split up into separate waiting time and service time as in previous chapters. Thus in this chapter and the next chapter on queueing networks we use the definitions:

Waiting time W is defined as the total sojourn time, including the service time.

Queue length L is defined as the total number of customers (served & waiting).

As an example we may think of the time required to transfer a file in the Internet. If the available bandwidth is at least equal to the bandwidth requested, then the mean service time s_j for a customer of type j is defined as the mean transfer (sojourn) time. If the available bandwidth is less than the bandwidth requested, then the mean transfer time W_j will be bigger than s_j, and the increase

    ΔW_j = W_j − s_j ,                                          (11.1)

is defined as the mean virtual waiting time. We thus introduce the virtual waiting time as the increase in service time due to limited capacity. In a similar way we define the mean virtual queue length as

    ΔL_j = L_j − A_j ,                                          (11.2)

where A_j is the offered traffic of type j.


The systems considered in this chapter are reversible, but do not have product form. In Sec. 11.1 we consider single-server systems with multiple services. The derivations are very simple and worked out in detail for two services, and then generalized to more services. In Sec. 11.2 we consider systems with more servers and multiple services. As in Sec. 7.3.3 we choose a Basic Bandwidth Unit (BBU) and split the available bandwidth into n BBUs. The BBU is a common name for a channel, a slot, a server, etc. The smaller the basic bandwidth unit, i.e. the finer the granularity, the more accurately we may model the traffic of different services, but the bigger the state space becomes. Finally, in Sec. 11.3 we consider waiting time systems with more servers and multiple multi-rate services. In service-integrated systems the bandwidth requested depends on the type of service. The approach is new and very simple. It allows for very general results, including all classical Markovian loss and delay models, and it is applicable to digital broadband systems, for example Internet traffic.
[Figure 11.1 about here]

Figure 11.1: A ∑_{j=1}^{2} M_j/M_j/1 queueing system with two classes of customers (offered traffic A_1 = λ_1/μ_1 and A_2 = λ_2/μ_2).

11.1 Reversible multi-chain single-server systems

In Fig. 11.1 we consider a single-server queueing system with N = 2 streams of customers, i.e. two chains. Customers belonging to chain j arrive to the node according to a Poisson arrival process with intensity λ_j (j = 1, 2). State [x_1, x_2] is defined as a state with x_1 chain-1 customers and x_2 chain-2 customers. By the notation ∑_{j=1}^{N} M_j/M_j/1 we indicate that we have N different PCT-1 arrival processes (chains) with individual values of arrival rates and mean service times. In the following we use index i for the state space and index j for the service (traffic stream). If the number of servers were infinite, then we would get the state transition diagram shown in Fig. 11.2, and the state probabilities would be given by (7.13). However, the capacity is limited to one server, so somehow we have to reduce the service rates in all states where more than one server is requested.

11.1.1 Reduction factors for single-server system

So far we have only one server (n = 1), which is shared by all customers. In state (x_1, x_2) we reduce the service rate of chain-1 customers by a factor g_1(x_1, x_2) so that the customers

[Figure 11.2 about here]

Figure 11.2: State transition diagram for the system in Fig. 11.1 with two classes (chains) of customers and an infinite number of servers (cf. Chap. 7).
[Figure 11.3 about here]

Figure 11.3: State transition diagram for the system in Fig. 11.1 with two types (chains) of customers and a single server. In state (x_1, x_2) the requested service rate x_j μ_j for type j is reduced by a factor g_j(x_1, x_2) (j = 1, 2) as compared with Fig. 11.2. As for example g_2(x_1 − 1, x_2) and g_2(x_1, x_2) are different, the system does not have product form.


do not get x_1 servers, but only x_1 g_1(x_1, x_2) servers, which in general will be a non-integer number of servers. So the service rate is reduced from x_1 μ_1 to g_1(x_1, x_2) x_1 μ_1. In a similar way, chain-2 customers are reduced by a factor g_2(x_1, x_2). The result is shown in Fig. 11.3. The aim is to obtain a reversible multi-dimensional system. For x_1 + x_2 ≤ n the system is similar to the models in Chap. 7. For x_1 + x_2 > n we construct a reversible system using all n servers. The reduction factors g_j(x_1, x_2) can be specified for various parts of the state transition diagram as follows.

1. Non-feasible states: x_1 < 0 and/or x_2 < 0:

       g_j(x_1, x_2) = 0 ,   j = 1, 2.                          (11.3)

   The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formulæ derived below (11.9, 11.10) are correctly initiated.

2. States with demand less than or equal to capacity: {x_j ≥ 0, j = 1, 2} and {0 < x_1 + x_2 ≤ 1}:

       g_j(x_1, x_2) = 1 ,   j = 1, 2.                          (11.4)

   Every call gets the capacity required, and there is no reduction of service rates.

3. States with only one service:

       x_2 = 0 and x_1 ≥ 1 :   g_1(x_1, 0) = 1/x_1 ,  x_1 ≥ 1 , (11.5)
       x_1 = 0 and x_2 ≥ 1 :   g_2(0, x_2) = 1/x_2 ,  x_2 ≥ 1 . (11.6)

   Along the axes we have a classical M/M/1 system with only one type of customers, and we assume the customers share the capacity equally, as they all are identical. The state transition diagram is as for M/M/1-PS (PS = processor sharing).

4. States with demand bigger than capacity: {x_j > 0, j = 1, 2} and {x_1 + x_2 > 1}. These are states with both types of customers, requiring more servers than available. If possible, we want to choose g_j(x_1, x_2) so that:

   Flow balance: The state transition diagram is constructed to be reversible. We consider the four states consisting of (x_1, x_2) and its neighboring states below (Fig. 11.3):

       {(x_1 − 1, x_2 − 1), (x_1, x_2 − 1), (x_1, x_2), (x_1 − 1, x_2)} .

   By applying the Kolmogorov cycle requirement for reversibility (Sec. 7.2), we get after canceling out the arrival and service rates (Fig. 11.3):

       g_2(x_1, x_2) · g_1(x_1, x_2 − 1) = g_1(x_1, x_2) · g_2(x_1 − 1, x_2) .   (11.7)


Normalization: All capacity is used. This requirement implies for n = 1 server:

    x_1 g_1(x_1, x_2) + x_2 g_2(x_1, x_2) = 1 ,   x_1 + x_2 ≥ 1 .   (11.8)

In state [x_1, x_2] we would like to use x_1 + x_2 servers, but this is reduced to one server by the reduction factors (11.8). We have two independent equations (11.7) and (11.8) with two unknown reduction factors. Assume that we know the reduction factors g_1(x_1, x_2 − 1) and g_2(x_1 − 1, x_2); then we are able to find a unique solution for the reduction factors g_1(x_1, x_2) and g_2(x_1, x_2) (Fig. 11.3). Solving the equations, we get:

    g_1(x_1, x_2) = g_1(x_1, x_2 − 1) / [x_1 g_1(x_1, x_2 − 1) + x_2 g_2(x_1 − 1, x_2)]
                  = 1 / [x_1 + x_2 · g_2(x_1 − 1, x_2)/g_1(x_1, x_2 − 1)] ,          (11.9)

    g_2(x_1, x_2) = g_2(x_1 − 1, x_2) / [x_1 g_1(x_1, x_2 − 1) + x_2 g_2(x_1 − 1, x_2)]
                  = 1 / [x_1 · g_1(x_1, x_2 − 1)/g_2(x_1 − 1, x_2) + x_2] .          (11.10)

From the initial values specified above (11.3–11.6), we may by these recursion formulæ calculate all reduction factors. From g_1(1, 0) and g_2(0, 1) we calculate g_1(1, 1) and g_2(1, 1). Then we may calculate g_1(2, 1) and g_2(2, 1), and in this way we horizontally calculate all reduction factors g_1(x_1, 1) and g_2(x_1, 1). From these we may then calculate all reduction factors g_1(x_1, 2) and g_2(x_1, 2), and so on. Alternatively, we may use the recursion vertically or diagonally. We notice that the reduction factors are independent of the traffic parameters. Using the known initial values we find a simple unique solution:

    g_j(x_1, x_2) = 1/(x_1 + x_2) ,   x_1 + x_2 ≥ 1 ,   j = 1, 2 .   (11.11)
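The recursion (11.9)–(11.10) is straightforward to program. The following sketch (the function name and data layout are mine) starts from the initial values (11.3)–(11.6) and confirms numerically that the recursion reproduces the closed form g_j(x_1, x_2) = 1/(x_1 + x_2) of (11.11):

```python
# Single-server (n = 1) reduction factors by the recursion (11.9)-(11.10).

def reduction_factors(xmax):
    """Return dicts g1, g2 with g_j(x1, x2) for 0 <= x1, x2 <= xmax."""
    g1, g2 = {}, {}
    for x1 in range(xmax + 1):
        for x2 in range(xmax + 1):
            if x1 == 0 and x2 == 0:
                continue                       # no customers, no reduction
            if x2 == 0:                        # (11.5); g2 undefined -> 0 (11.3)
                g1[x1, 0], g2[x1, 0] = 1.0 / x1, 0.0
            elif x1 == 0:                      # (11.6); g1 undefined -> 0 (11.3)
                g1[0, x2], g2[0, x2] = 0.0, 1.0 / x2
            else:                              # (11.9)-(11.10)
                denom = x1 * g1[x1, x2 - 1] + x2 * g2[x1 - 1, x2]
                g1[x1, x2] = g1[x1, x2 - 1] / denom
                g2[x1, x2] = g2[x1 - 1, x2] / denom
    return g1, g2

g1, g2 = reduction_factors(6)
print(g1[3, 2], g2[3, 2])   # both 1/(3 + 2) = 0.2
```

Note that the recursion only reads g_1 from states with x_1 ≥ 1 and g_2 from states with x_2 ≥ 1, so the zero values assigned to the undefined factors on the axes are never used.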

Thus the two chains (services) are reduced by the same factor, and all customers share the capacity equally. The reversible state transition diagram is shown in Fig. 11.4. It is easy to extend the above derivation of reduction factors to a system with N traffic streams. The state of the system is given by:

    x = (x_1, x_2, . . . , x_{j−1}, x_j, x_{j+1}, . . . , x_N) .   (11.12)

where x_i denotes the number of channels occupied by stream i, which for single-rate traffic is equal to the number of connections. For states {∑_{j=1}^{N} x_j > n, x_j ≥ 0} we get for n = 1 server a simple unique expression for the reduction factors, which is a generalization of (11.11):

    g_j(x) = 1 / ∑_{i=1}^{N} x_i ,   j = 1, 2, . . . , N .   (11.13)

Thus all customers share the single server equally. In the following section we show that this unique state transition diagram can be interpreted as corresponding to various queueing strategies.

[Figure 11.4 about here]

Figure 11.4: State transition diagram for a multi-dimensional single-server system which is reversible. The system does not have product form.

11.1.2 Single-server Processor Sharing (PS) system

The above result corresponds to a Processor Sharing (PS, Sec. 10.7) system. All (x_1 + x_2) customers share the server equally, and the capacity of the system is constant (one server). The total service rate μ_{x_1,x_2} in state [x_1, x_2] becomes (Fig. 11.4):

    μ_{x_1,x_2} = x_1 μ_1/(x_1 + x_2) + x_2 μ_2/(x_1 + x_2) = (x_1 μ_1 + x_2 μ_2)/(x_1 + x_2) .   (11.14)

The total service rate is state-dependent when classes of customers have different service rates. The number of customers served per time unit depends on the mix of the customers currently being served. For a system with N traffic streams the total service intensity in state x is:

    μ_x = ∑_{j=1}^{N} x_j μ_j / ∑_{j=1}^{N} x_j ,   j = 1, 2, . . . , N .   (11.15)

This model is reversible and valid for individual arbitrary service time distributions, and the system will be insensitive to the service time distributions. This property is called the magic property of processor sharing and was originally dealt with by Kleinrock (1964 [73]). In Sec. 10.7 we had only one type of customers and a one-dimensional state transition diagram. Now we have N types of customers, and to define the state of the system in a unique way we need an N-dimensional state transition diagram.
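The state-dependent rate (11.15) is simply the customer-weighted mean of the per-class service rates; a one-line sketch (the function name is illustrative):

```python
# Illustration of (11.15): total service intensity in state x.

def total_service_rate(x, mu):
    """mu_x = sum(x_j * mu_j) / sum(x_j) for state x = (x_1, ..., x_N)."""
    return sum(xj * mj for xj, mj in zip(x, mu)) / sum(x)

# Two classes with mu = (1, 4): in state (3, 1) the single PS server
# completes customers at rate (3*1 + 1*4)/4 = 1.75 per time unit.
print(total_service_rate((3, 1), (1.0, 4.0)))
```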


Theorem 11.1 The ∑_j M_j/G_j/1–PS single-server system with processor sharing (PS) is reversible and insensitive to the service time distributions, and each class may have an individual mean service time.

11.1.3 Non-sharing single-server system

Let us assume that the server is occupied by one customer at a time, i.e. there is no sharing of the capacity. Then, for Poisson arrival processes and classical queueing systems with disciplines such as FCFS, LCFS, and SIRO, the customer being served in state x, i.e. the next customer departing, will be a random one of the x customers in the system. From the state transition diagram for two services (Fig. 11.4) we see that the customer being served is of type one, respectively type two, with the following probabilities:

    p{type-1 served} = [x_1 μ_1/(x_1 + x_2)] / [(x_1 μ_1 + x_2 μ_2)/(x_1 + x_2)]
                     = x_1 μ_1 / (x_1 μ_1 + x_2 μ_2) ,          (11.16)

    p{type-2 served} = [x_2 μ_2/(x_1 + x_2)] / [(x_1 μ_1 + x_2 μ_2)/(x_1 + x_2)]
                     = x_2 μ_2 / (x_1 μ_1 + x_2 μ_2) .          (11.17)

We see that this is only a random one of the x_1 + x_2 customers when μ_1 = μ_2. Thus the two classes must have the same mean service time for the state transition diagram to describe an M/M/1 non-sharing system. In all other cases (μ_1 ≠ μ_2), the customer being served will not be a random one among the (x_1 + x_2) customers in the system. It is also obvious that the system is only reversible when the service times are exponentially distributed, as the inter-departure time distribution during saturation periods equals the service time distribution. This interpretation corresponds to a classical M/M/1 system with total arrival rate λ = λ_1 + λ_2 and mean service time μ^{−1} = μ_1^{−1} = μ_2^{−1}. By superposition of Poisson processes it is obvious that this is also valid for N traffic streams. We thus have:

Theorem 11.2 The non-sharing ∑_j M_j/M/1 system (FCFS, LCFS, SIRO) is only reversible if all customers have the same exponentially distributed service time with the same mean service time.

11.1.4 Single-server LCFS-PR system

The state transition diagram in Fig. 11.4 can also be interpreted as the state transition diagram of a ∑_j M_j/G_j/1–LCFS-PR (preemptive resume, non-sharing) system. It is obvious


that this system is reversible, because the process follows exactly the same path in the state transition diagram away from state zero due to arriving customers as back to state zero due to departing customers. Thus we always have local balance. The latest arriving customer in state (x_1, x_2) belongs with probability x_j/(x_1 + x_2) to class j (j = 1, 2). This is valid for any number N of services.

Theorem 11.3 The ∑_j M_j/G_j/1–LCFS-PR single-server system with LCFS-PR is reversible and insensitive to the service time distributions, and the services may have individual mean service times.

11.1.5 Summary for reversible single-server systems

The multi-dimensional state transition diagram for single-server systems can be interpreted in the same way as for two services. In conclusion, for a single-server queueing system with N classes of customers to be reversible, the state transition diagram must be as shown in Fig. 11.4 in N dimensions. For this diagram we have the following interpretations:

• ∑_{j=1}^{N} M_j/G_j/1–PS,

• ∑_{j=1}^{N} M_j/M/1 non-sharing with the same exponential service time for all customers, or

• ∑_{j=1}^{N} M_j/G_j/1–LCFS-PR (non-sharing).

These systems are also called symmetric queueing systems. Reversibility implies that the departure processes of all classes are identical with the arrival processes. In principle we may introduce new interpretations. Due to reversibility, the departure process of each chain will be of the same type as the arrival process, i.e. a Poisson process. This is of course also valid for a system with one type of customers, as we may split the Poisson arrival process up into more Poisson arrival processes and thus get a reversible multi-dimensional system.

11.1.6 State probabilities for multi-service single-server system

All three single-server systems mentioned above are interpretations of the same state transition diagram and thus have the same state probabilities and mean performance measures. Part of the state transition diagram for two services is given by Fig. 11.4. The diagram is reversible, since the flow clockwise equals the flow counter-clockwise. Hence, there is local balance, and all state probabilities can be expressed by state zero. For two services we find:

    p(x_1, x_2) = p(0, 0) · (x_1 + x_2)! · (A_1^{x_1}/x_1!) · (A_2^{x_2}/x_2!) .   (11.18)


In comparison with the multi-dimensional Erlang-B formula (7.10) we now have the additional factor (x_1 + x_2)!. The product form between classes is lost, because the state probability cannot be written as the product of the state probabilities of two independent systems:

    p(x_1, x_2) ≠ p_1(x_1) · p_2(x_2) .

This absence of product form will later complicate the evaluation of queueing networks, as the state space of a node becomes very large and cannot be aggregated. We find p(0, 0) by normalization:

    ∑_{x_1=0}^{∞} ∑_{x_2=0}^{∞} p(x_1, x_2) = 1 .               (11.19)

Using the binomial expansion we find the aggregated state probabilities:

    p(x_1 + x_2 = x) = p(0, 0) · (A_1 + A_2)^x = (1 − A) A^x ,  (11.20)

where A = A_1 + A_2. The state probability p(0, 0) = 1 − A is obtained explicitly without need of normalization. This is identical with the state probabilities of M/M/1 with the offered traffic A = A_1 + A_2 (9.30). If there are N different traffic streams, the state probabilities become:

    p(x_1, x_2, . . . , x_N) = p(0) · (x_1 + x_2 + · · · + x_N)! · (A_1^{x_1}/x_1!) (A_2^{x_2}/x_2!) · · · (A_N^{x_N}/x_N!) ,   (11.21)

    p(x) = p(0) · (∑_{j=1}^{N} x_j)! · ∏_{j=1}^{N} (A_j^{x_j}/x_j!) ,   (11.22)

where p(x) = p(x_1, x_2, . . . , x_N). This can be expressed by the polynomial distribution (2.94), writing the multinomial coefficient as (x_1 + x_2 + · · · + x_N choose x_1, x_2, . . . , x_N):

    p(x) = p(0) · ∏_{j=1}^{N} A_j^{x_j} · (x_1 + x_2 + · · · + x_N choose x_1, x_2, . . . , x_N) .   (11.23)

For an unlimited number of queueing positions the global state probabilities of the total number of customers become:

    p(x) = p{x_1 + x_2 + · · · + x_N = x} .                     (11.24)

By the polynomial expansion we observe that the state probabilities are identical with the state probabilities of the M/M/1 system:

    p(x) = p(0) · (A_1 + A_2 + · · · + A_N)^x = (1 − A) A^x .   (11.25)
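The detailed state probability (11.18) and its aggregation (11.20) can be checked numerically; the sketch below (the function name is illustrative) sums (11.18) over all states with x_1 + x_2 = x and recovers the geometric M/M/1 distribution:

```python
from math import factorial

# Numerical check of (11.18) and (11.20); requires A = A1 + A2 < 1,
# so that p(0,0) = 1 - A.

def p_state(x1, x2, A1, A2):
    """Detailed state probability (11.18)."""
    A = A1 + A2
    return ((1 - A) * factorial(x1 + x2)
            * A1**x1 / factorial(x1)
            * A2**x2 / factorial(x2))

# Aggregating over all states with x1 + x2 = x reproduces the
# geometric distribution p(x) = (1 - A) * A**x of (11.20).
A1, A2, x = 0.25, 0.35, 4
aggregated = sum(p_state(x1, x - x1, A1, A2) for x1 in range(x + 1))
print(aggregated, (1 - A1 - A2) * (A1 + A2)**x)
```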


11.1.7 Generalized algorithm for state probabilities

To evaluate the global state probabilities we may use the following trivial algorithm for N traffic classes. In general we would first find relative state probabilities q(x), typically putting q(0) equal to one, and then normalize; here p(0) = 1 − A is known explicitly, so we obtain the absolute probabilities directly:

    p(x) = 0 ,                     x < 0 ,
    p(0) = 1 − A ,                 x = 0 ,
    p(x) = ∑_{j=1}^{N} p_j(x) ,    x = 1, 2, . . . ,            (11.26)

where

    p_j(x) = 0 ,                   x < 1 ,
    p_j(x) = A_j · p(x − 1) ,      x = 1, 2, . . . .            (11.27)

In this case the normalization of the global states is very simple, as we have p(0) = 1 − A.

11.1.8 Performance measures

The mean queue length L, which includes all customers in the system (partly being served, partly waiting), becomes as for M/M/1. This is a geometric distribution (11.25) with the mean value:

    L = A/(1 − A) ,

where the total offered traffic is A = A_1 + A_2 + · · · + A_N. In state x the average number of class-j calls is x · p_j(x). The mean queue length for stream j (including all customers) becomes:

    L_j = ∑_{x=0}^{∞} x · p_j(x) = (A_j/A) · L ,   or   L_j/A_j = L/A ,   where   L = ∑_{j=1}^{N} L_j .   (11.28)

The mean sojourn time for type-j customers becomes by Little's law:

    W_j = L_j/λ_j = (L/A) · s_j ,   or   W_j/s_j = L/A .        (11.29)

The mean queue length L includes both waiting and served customers. As the carried traffic is equal to the offered traffic, the increase in L due to the limited capacity is ΔL = L − A. For stream j the increase is ΔL_j = L_j − A_j. In the same way we have for the waiting times ΔW = W − s and ΔW_j = W_j − s_j. Subtracting one on both sides of (11.28) and (11.29) we get:

    ΔL_j/A_j = ΔW_j/s_j = ΔL/A = ΔW/s = constant.              (11.30)

ΔL_j and ΔW_j correspond to the usual definitions of queue length and waiting time in non-sharing queueing systems. For a given stream the mean waiting time is proportional to the mean service time of this stream. This is the most important property of processor sharing.

11.2 Reversible multi-{chain & server} systems

We now consider a system with n servers and an infinite queue. Each customer requests only one server (BBU, channel) to be served. The state of the system is defined by

    x = (x_1, x_2, . . . , x_j, . . . , x_N) ,

where x_j is the number of type-j customers in the system. Customers of type j arrive according to a Poisson arrival process with intensity λ_j, and the service time is exponentially distributed with intensity μ_j (mean value 1/μ_j) (j = 1, 2, . . . , N). If the number of servers were infinite, then we would get the state transition diagram shown in Fig. 11.2. However, the capacity is limited to n servers, so we have to reduce the service rates in all states requiring more than n servers (overload). In the following we deal with the general case of N services. The principles are the same as for the above single-server system.

11.2.1 Reduction factors for multi-server systems

The service rate in state x = (x_1, x_2, . . . , x_j, . . . , x_N) is for type-j customers reduced by a factor g_j(x). The reduction factors g_j(x) are chosen so that we maintain reversibility and utilize all the capacity when needed. They can be specified for various parts of the state transition diagram as follows.

1. Non-feasible states: x_j < 0 for at least one value j ∈ {1, 2, . . . , N}:

       g_j(x) = 0 ,   j = 1, 2, . . . , N .                     (11.31)

   The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formula derived below (11.35) is initiated in a correct way.



2. States with demand less than or equal to capacity: {x_j ≥ 0 ∀ j} and {0 ≤ ∑_{j=1}^{N} x_j ≤ n}:

       g_j(x) = 1 ,   j = 1, 2, . . . , N .                     (11.32)

   Every call gets the capacity required, and there is no reduction of the requested service rate.

3. States with only one type of customers: x_i = 0 ∀ i ≠ j and x_j ≥ n:

       g_j(x) = n/x_j .                                         (11.33)

   Along the axes we have a classical M/M/n system with only one type of service, and we assume that the calls share the capacity equally, as they all are identical.

4. States with more types of customers, in total requiring more than n channels: x_j ≥ 0 ∀ j and x = ∑_{j=1}^{N} x_j > n.

   Flow balance: the state transition diagram is required to be reversible. We consider four neighboring states in a square below state (x_1, . . . , x_j, . . . , x_k, . . . , x_N), keeping the number of connections constant except for services j and k (Fig. 11.3):

       (x_1, . . . , x_j − 1, . . . , x_k, . . . , x_N)
       (x_1, . . . , x_j − 1, . . . , x_k − 1, . . . , x_N)
       (x_1, . . . , x_j, . . . , x_k, . . . , x_N)
       (x_1, . . . , x_j, . . . , x_k − 1, . . . , x_N)

   A necessary and sufficient condition for reversibility (Kingman 1969) is that all two-dimensional flow paths are in equilibrium. By applying the Kolmogorov cycle requirement for reversibility (Sec. 7.2) for any pair of services, we get a balance equation after reduction (Fig. 11.3). In total we may choose the pair (j, k) in

       (N choose 2) = N(N − 1)/2

   ways, and for each pair we have a balance equation. We assume that we know the reduction factors for the states x − 1_j below state x, where x − 1_j = (x_1, x_2, . . . , x_{j−1}, x_j − 1, x_{j+1}, . . . , x_N). To find the N reduction factors in state x = (x_1, x_2, . . . , x_N) we need N independent equations. We may choose Kolmogorov cycles in the two-dimensional planes {1, j} (j = 2, 3, . . . , N), and this gives us N − 1 independent equations. We get the following flow balance equations for j = 1, 2, . . . , N:

       g_1(x) · g_j(x − 1_1) = g_j(x) · g_1(x − 1_j) ,

or

    g_j(x) = g_1(x) · g_{1,j}(x) ,   where   g_{1,j}(x) = g_j(x − 1_1) / g_1(x − 1_j) .   (11.34)

We notice that g_{1,1}(x) = 1.

Normalization: We obtain one more equation by requiring that we use the total capacity n:

    n = ∑_{j=1}^{N} x_j g_j(x) = ∑_{j=1}^{N} x_j g_1(x) g_{1,j}(x) .

From this we get g_1(x), and from (11.34) we then find all the other reduction factors in state x:

    g_1(x) = n / ∑_{j=1}^{N} x_j g_{1,j}(x) ,
    g_j(x) = g_1(x) · g_{1,j}(x) ,   j = 2, 3, . . . , N .      (11.35)

We know the reduction factors for all states x up to and including global state n, i.e. states where x = ∑_{i=1}^{N} x_i ≤ n, and we also know all reduction factors for states where only one type is present. We can then recursively calculate all other reduction factors, find the relative state probabilities, and finally obtain the detailed state probabilities by normalization. For two traffic streams and a single server we of course get the reduction factors given in (11.9) and (11.10). As seen above, the reduction factors are independent of the traffic processes, and the approach includes Engset traffic, Pascal traffic, and any state-dependent Poisson arrival process. Using the above initialization values it can easily be shown that we get the following unique solution:

    g_j(x) = 1 ,     0 ≤ x ≤ n ,
    g_j(x) = n/x ,   n ≤ x ,                                    (11.36)

where x = ∑_{j=1}^{N} x_j , x_j ≥ 0 .
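For two classes the recursion (11.34)–(11.35) is easy to check against the closed form (11.36). The sketch below (the helper name and data layout are mine) initializes with (11.31)–(11.33) and verifies g_j(x) = n/x above saturation:

```python
# Two-class sketch of the recursion (11.34)-(11.35) for n servers,
# checked against the closed form (11.36): g_j(x) = min(1, n/x).

def reduction_factors(n, xmax):
    """Return g[x1, x2] = (g1, g2) for all states with 0 <= x1, x2 <= xmax."""
    g = {}
    for x1 in range(xmax + 1):
        for x2 in range(xmax + 1):
            x = x1 + x2
            if x == 0:
                continue
            if x <= n:                       # (11.32): no reduction
                g[x1, x2] = (1.0, 1.0)
            elif x2 == 0:                    # (11.33); g2 undefined -> 0 (11.31)
                g[x1, 0] = (n / x1, 0.0)
            elif x1 == 0:
                g[0, x2] = (0.0, n / x2)
            else:                            # (11.34)-(11.35) with j = 2
                g12 = g[x1 - 1, x2][1] / g[x1, x2 - 1][0]
                g1 = n / (x1 + x2 * g12)
                g[x1, x2] = (g1, g1 * g12)
    return g

n = 3
g = reduction_factors(n, 8)
print(g[4, 2])   # both factors equal n/(x1 + x2) = 3/6 = 0.5
```

As in the single-server case, the zero values assigned to undefined factors on the axes are never read by the recursion.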


Thus during overload all customers are reduced by the same factor, and the customers share the capacity equally. In Fig. 11.5 we consider a multi-server queueing system with N = 2 traffic streams (chains). We notice that the diagram is reversible. In the following section we show that this unique state transition diagram may be interpreted as corresponding to different strategies.

11.2.2 Generalized processor sharing (GPS) system

The state transition diagram in Fig. 11.5 can be interpreted as follows. In states [x_1, x_2] below saturation (x_1 + x_2 ≤ n) every user occupies one server. Above saturation all users share the available capacity equally. The state transition diagram in Fig. 11.5 is reversible. It is insensitive to the service time distribution, and each service may have an individual mean service time. This model is called the GPS (Generalized Processor Sharing) model. For states x_1 + x_2 > n, traffic stream one wants a total service rate x_1 μ_1, and traffic stream two wants a service rate x_2 μ_2, but the service rates of both streams are reduced by the same factor n/(x_1 + x_2).

Theorem 11.4 The Σ_{j=1}^N Mj/Gj/n–GPS multi-server system with generalized processor sharing (GPS) is reversible and insensitive to the service time distributions, and each class may have an individual mean service time.

11.2.3 Non-sharing multi-{chain & server} system

We consider M/M/n non-sharing systems. A customer being served always occupies exactly one server; a customer is either waiting or being served. To maintain reversibility for x1 + x2 > n we have to require that all services have the same mean service time 1/μj = 1/μ, which furthermore must be exponentially distributed. Otherwise, the next departing customer will not be a random one among the customers in the system (Fig. 11.5). The proof is the same as for the single-server case in Sec. 11.1.3. This corresponds to an M/M/n system with total arrival rate λ = Σj λj and service rate μ. The state probabilities are given by (9.2) and (9.4), and the state transition diagram is reversible. The system M/M/∞ may be considered as a special case of M/M/n, and this has already been dealt with in connection with classical waiting systems (Chap. 9).

Theorem 11.5 The Σ_{j=1}^N Mj/M/n system (FCFS, LCFS, SIRO) is only reversible if all customers have the same mean service time, and this service time must be exponentially distributed.

11.2. REVERSIBLE MULTI-{CHAIN & SERVER} SYSTEMS


11.2.4 Symmetric queueing systems

For multiple servers the non-sharing system Σ_{j=1}^N Mj/Gj/n–LCFS-PR will in general not be reversible, because the last arriving customer may not be the first to finish service when several servers work in parallel. If all streams have the same mean holding time this system is included in Theorem 11.5. Otherwise, it is only reversible for single-server systems (Sec. 11.1.4). In conclusion, multi-server queueing systems with more classes of customers will only be reversible when the system is one of the following queueing systems:

• Σ_{j=1}^N Mj/Gj/n–GPS, which includes Σ_{j=1}^N Mj/Gj/1–PS,

• Σ_{j=1}^N Mj/Mj/n non-sharing with the same exponential service time for all customers, which includes the single-server system,

• Σ_{j=1}^N Mj/Gj/1–LCFS-PR, which is only valid for single-server systems.

These systems are all reversible, and they are also called symmetric queueing systems. Reversibility implies that the departure processes of all classes are Poisson processes, like the arrival processes. For the classical non-sharing M/M/n system we have a reversible system which is not insensitive.

11.2.5 State probabilities

State probability p(0, 0, . . . , 0) is obtained by normalization. For n = 1 we of course get (11.22). We let:

    p(x) = Σ_{Σ x_i = x} p(x1, x2, . . . , xj, . . . , xN) .

For a node with N services and n servers we exploit local balance and get the following detailed state probabilities:

    p(x1, x2, . . . , xN) / p(0, 0, . . . , 0)
      = (A1^{x1}/x1!) (A2^{x2}/x2!) · · · (AN^{xN}/xN!) ,                                          Σ_{j=1}^N xj ≤ n,
      = (A1^{x1}/x1!) (A2^{x2}/x2!) · · · (AN^{xN}/xN!) · (x1+x2+· · ·+xN)! / (n! n^{(x1+x2+···+xN)−n}) ,   Σ_{j=1}^N xj ≥ n.   (11.37)

By the multinomial theorem (2.96) we get (9.2) for the global state probabilities, where A = A1 + A2 + · · · + AN and x = Σ_j xj:

    p(x)/p(0) = A^x/x! ,               0 ≤ x ≤ n,
    p(x)/p(0) = A^x/(n! n^{x−n}) ,     x ≥ n.    (11.38)
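The aggregation step behind (11.38) can be checked numerically: summing the detailed probabilities (11.37) over all states with the same total number of customers reproduces the M/M/n probabilities with A = A1 + · · · + AN. A sketch for N = 2, working with unnormalized (relative) probabilities; the helper names are mine:

```python
from math import factorial

def detailed_q(x1, x2, A1, A2, n):
    """Relative detailed state probability (11.37), q(0, 0) = 1."""
    q = A1**x1 / factorial(x1) * A2**x2 / factorial(x2)
    x = x1 + x2
    if x > n:
        q *= factorial(x) / (factorial(n) * n**(x - n))
    return q

def global_q(x, A, n):
    """Relative global state probability (11.38), q(0) = 1."""
    if x <= n:
        return A**x / factorial(x)
    return A**x / (factorial(n) * n**(x - n))

A1, A2, n = 1.2, 0.7, 3
for x in range(12):
    agg = sum(detailed_q(x1, x - x1, A1, A2, n) for x1 in range(x + 1))
    assert abs(agg - global_q(x, A1 + A2, n)) < 1e-12
```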


Figure 11.5: State transition diagram for a reversible multi-dimensional Σ_j Mj/Mj/n system. The detailed states shown correspond to global states below and above global state n.


11.2.6 Generalized algorithm for state probabilities

We now consider a system with n servers and N traffic streams. The global relative state probabilities are obtained by the recursion:

    q(x) = 0 ,                     x < 0,
    q(x) = 1 ,                     x = 0,
    q(x) = Σ_{j=1}^N qj(x) ,       x = 1, 2, . . . ,    (11.39)

where

    qj(x) = (Aj/x) q(x−1) ,        x ≤ n,
    qj(x) = (Aj/n) q(x−1) ,        x > n.    (11.40)

Here pj(x) is the contribution of stream j to global state x:

    pj(x) = Σ_{Σ x_i = x} (xj/x) p(x1, x2, . . . , xj, . . . , xN) .    (11.41)

State probability p(0) is obtained by the normalization:

    Q = Σ_{i=0}^∞ q(i) = Σ_{i=0}^∞ Σ_{j=1}^N qj(i) .    (11.42)

By normalizing all relative state probabilities qj(x) and q(x) by Q we get the true state probabilities pj(x) and p(x). To get a numerically robust algorithm, the normalization should be carried out in each step (increase of x) as described in Sec. 4.4.1. The algorithm is a modification of the generalized algorithm in Sec. 7.6.2 for single-slot Poisson traffic in loss systems.
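The recursion (11.39)–(11.40) may be sketched as follows, truncating the infinite state space at some x_max instead of normalizing in each step (the function and variable names are mine):

```python
from math import factorial

def state_probs(A, n, x_max):
    """State probabilities by the recursion (11.39)-(11.40),
    truncated at x_max; A is the list of offered traffics A_j."""
    N = len(A)
    qj = [[0.0] * N for _ in range(x_max + 1)]   # qj[x][j] = q_j(x)
    q = [1.0] + [0.0] * x_max                    # relative q(0) = 1
    for x in range(1, x_max + 1):
        for j in range(N):
            qj[x][j] = A[j] * q[x - 1] / min(x, n)   # (11.40)
        q[x] = sum(qj[x])                            # (11.39)
    Q = sum(q)                                       # (11.42), truncated
    return ([v / Q for v in q],
            [[v / Q for v in row] for row in qj])

A, n, x_max = [1.0, 0.8], 3, 60
p, pj = state_probs(A, n, x_max)
At = sum(A)
# The aggregated probabilities agree with M/M/n for A = 1.8, cf. (11.38):
rel = [At**x / factorial(x) if x <= n else
       At**x / (factorial(n) * n**(x - n)) for x in range(x_max + 1)]
Q = sum(rel)
for x in range(x_max + 1):
    assert abs(p[x] - rel[x] / Q) < 1e-9
for x in range(1, x_max + 1):
    # each stream contributes in proportion to its offered traffic:
    assert abs(pj[x][0] - (A[0] / At) * p[x]) < 1e-9
```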

11.2.7 Performance measures

These are derived in the same way as for the single-server system in Sec. 11.1.8. The total mean queue length L, which includes all customers in the system (partly being served, partly waiting), becomes as for M/M/n:

    L = Σ_{x=0}^∞ x p(x) .

In state x the average number of class-j calls is x pj(x). The mean queue length for stream j (including all customers) becomes:

    Lj = Σ_{x=0}^∞ x pj(x) = (Aj/A) L ,    or    Lj/Aj = L/A ,    where  L = Σ_{j=1}^N Lj .    (11.43)

The mean sojourn time for type-j customers becomes by Little's law:

    Wj = Lj/λj = (L/A) sj ,    or    Wj/sj = L/A .    (11.44)

The mean queue length L includes both waiting and served customers. As the carried traffic is equal to the offered traffic, the increase in L due to the limited capacity is ΔL = L − A. For stream j the increase is ΔLj = Lj − Aj. In the same way we have for the sojourn times ΔW = W − s and ΔWj = Wj − sj. Subtracting one on both sides of (11.43) and (11.44) we get:

    ΔLj/Aj = ΔWj/sj = ΔL/A = ΔW/s = constant.    (11.45)

ΔLj and ΔWj correspond to the usual definitions of queue length and waiting time in non-sharing queueing systems. For a given stream the mean waiting time is proportional to the mean service time of this stream. This is the most important property of processor sharing.
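The measures (11.43)–(11.45) follow directly from the global state probabilities. A numerical sketch for two streams sharing an M/M/n-type node (the traffic values and mean service times are illustrative, not from the text; truncation error is ignored):

```python
from math import factorial

A1, A2, n = 1.0, 0.8, 3          # offered traffics and servers
s1, s2 = 1.0, 2.0                # mean service times s_j = 1/mu_j
A = A1 + A2

rel = [A**x / factorial(x) if x <= n else
       A**x / (factorial(n) * n**(x - n)) for x in range(200)]
norm = sum(rel)
p = [r / norm for r in rel]

L = sum(x * px for x, px in enumerate(p))    # total mean queue length
L1, L2 = A1 / A * L, A2 / A * L              # (11.43)
W1, W2 = L1 / (A1 / s1), L2 / (A2 / s2)      # Little's law -> (11.44)

# (11.45): the relative increases are the same constant for all streams:
c = (L - A) / A
assert abs((L1 - A1) / A1 - c) < 1e-9
assert abs((W1 - s1) / s1 - c) < 1e-9
assert abs((W2 - s2) / s2 - c) < 1e-9
```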

11.3 Reversible multi-{rate & chain & server} systems

We now consider a queueing system with n servers which is offered N multi-rate traffic streams. Traffic stream j has constant arrival rate λj, service rate μj (mean service time 1/μj), and requires dj simultaneous channels for full service. If the demand is bigger than the capacity, then the service rate is reduced by a state-dependent reduction factor. Whereas the systems with single-rate traffic considered above were simple, this system is more complex because the reduction factors become much more complex.


11.3.1 Reduction factors

These are derived in a similar way as for single-slot traffic (Secs. 11.1.1 and 11.2.1). The service rate in state x = (x1, x2, . . . , xj, . . . , xN), where xj denotes the number of channels occupied by stream j (a multiple of dj), is for type-j customers reduced by a factor gj(x). The reduction factors gj(x) are chosen so that we maintain reversibility and use all the capacity needed. They can be specified for the various parts of the state transition diagram as follows. In the following, x − dj denotes the state where xj is reduced by dj, i.e. one type-j call has left.

1. Non-feasible states: xj < 0 for at least one value j ∈ {1, 2, . . . , N}:

    gj(x) = 0 ,    j = 1, 2, . . . , N .    (11.46)

The reduction factors are undefined for these states, which have probability zero. By choosing the value zero, the recursion formula derived below (11.7) is initiated in a correct way.

2. States with demand less than capacity: {xj ≥ 0 ∀ j} and {0 ≤ Σ_{j=1}^N xj ≤ n}:

    gj(x) = 1 ,    j = 1, 2, . . . , N .    (11.47)

Every call gets the capacity required, and there is no reduction of the requested service rate.

3. States with one type of customers only: {xi = 0 ∀ i ≠ j} and {xj ≥ n}:

    gj(x) = n/xj ,    j = 1, 2, . . . , N .    (11.48)

Along the axes we have the classical M/M/n system with only one service, and we assume that the calls share the capacity equally, as they are all identical.

4. States with more types of customers, in total requiring more than n channels: xj ≥ 0 ∀ j and x = Σ_{j=1}^N xj > n.

Flow balance: The state transition diagram must be reversible. We consider four neighboring states in a square below state (. . . , xj, . . . , xk, . . .), keeping all dimensions except j and k constant (Fig. 11.6):

    (x1, . . . , xj − dj, . . . , xk, . . . , xN)        (x1, . . . , xj, . . . , xk, . . . , xN)
    (x1, . . . , xj − dj, . . . , xk − dk, . . . , xN)   (x1, . . . , xj, . . . , xk − dk, . . . , xN)

We then apply the Kolmogorov cycle requirement for reversibility (Sec. 7.2) for any pair of services. A necessary and sufficient condition for reversibility (Kingman 1969) is that all two-dimensional flow paths are in equilibrium. In total we may choose N(N−1)/2 different cycles and thus different balance equations. We assume that we know the reduction factors for the states x − dj below state x. To find the N reduction factors in state x = {x1, x2, . . . , xN} we need N independent equations. Thus we may choose Kolmogorov cycles for the two-dimensional planes {1, j} (j = 2, . . . , N), which yields N − 1 independent equations. We furthermore have the normalization equation requiring that the total capacity used is n. We get the following flow balance equations for j = 1, 2, . . . , N:

    g1(x) gj(x − d1) = gj(x) g1(x − dj)

or

    gj(x) = g1(x) g1,j(x) ,    where    g1,j(x) = gj(x − d1)/g1(x − dj) .    (11.49)

We notice that g1,1(x) = 1.

Normalization: The capacity normalization equation is:

    n = Σ_{j=1}^N xj gj(x) = Σ_{j=1}^N xj g1(x) g1,j(x) .

From this we get g1(x), and from (11.49) we then find all the other reduction factors in state x:

    g1(x) = n / Σ_{j=1}^N xj g1,j(x) ,
    gj(x) = g1(x) g1,j(x) ,    j = 2, 3, . . . , N .

As we know all reduction factors up to global state n, where x = Σ_{i=1}^N xi ≤ n, and all reduction factors for states where only one service is active, we can recursively calculate all reduction factors. This is equivalent to calculating the relative state probabilities, and thus by global normalization the detailed state probabilities.
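For N = 2 streams this recursion is easy to implement with memoization; states are measured in occupied channels, so x1 is a multiple of d1 and x2 of d2. The sketch below (exact rational arithmetic; function names are mine, not from the text) reproduces the single-slot solution (11.36) as a special case:

```python
from fractions import Fraction
from functools import lru_cache

def reduction_factors(n, d1, d2):
    """Return g(x1, x2) -> (g1, g2) by the recursion of Sec. 11.3.1."""
    @lru_cache(maxsize=None)
    def g(x1, x2):
        if x1 < 0 or x2 < 0:
            return Fraction(0), Fraction(0)           # (11.46)
        if x1 + x2 <= n:
            return Fraction(1), Fraction(1)           # (11.47)
        if x2 == 0:
            return Fraction(n, x1), Fraction(0)       # (11.48)
        if x1 == 0:
            return Fraction(0), Fraction(n, x2)       # (11.48)
        g12 = g(x1 - d1, x2)[1] / g(x1, x2 - d2)[0]   # (11.49)
        g1 = Fraction(n) / (x1 + x2 * g12)            # normalization
        return g1, g1 * g12
    return g

# Multi-rate case n = 4, d = (1, 2): the capacity is always fully used,
g = reduction_factors(4, 1, 2)
g1, g2 = g(3, 4)
assert (g1, g2) == (Fraction(52, 79), Fraction(40, 79))
assert 3 * g1 + 4 * g2 == 4
# and for single-slot traffic all factors reduce to n/x as in (11.36):
g = reduction_factors(3, 1, 1)
assert g(2, 4) == (Fraction(3, 6), Fraction(3, 6))
```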


For two traffic streams and single-slot traffic we get the reduction factors given in (11.9) and (11.10). As mentioned above, the reduction factors are independent of the traffic processes and the traffic parameters, and the approach includes Engset traffic, Pascal traffic, and any state-dependent Poisson arrival process. In Fig. 11.6 we consider a multi-server queueing system with N = 2 types of customers (chains).
Figure 11.6: State transition diagram for a system with two types (chains) of customers with multi-rate traffic and n servers. In state (x1, x2) the requested service rate xj μj for type j is reduced by a factor gj(x1, x2). As for example g2(x1 − d1, x2) and g2(x1, x2) will be different, the system does not have product form.

11.3.2 Generalized algorithm for state probabilities

We now consider a multi-rate system with n servers and N trac streams. The initialization values of pj (x) are {pj (x) = 0, x < dj }. This is a simple general recursion formula covering all classical Markovian queueing models.

Recursion

If we know all (relative) probabilities of the global states {0, 1, . . . , x − 1}, then we find the relative state probabilities for state x from the following, where we may everywhere replace q by p:

    qj,y(x) = (1/x) (dj Aj) q(x − dj)

    qj(x) = (1/min{x, n}) { (dj/x) (dj Aj) q(x − dj) + Σ_{i=1}^N ((x − di)/x) (di Ai) qj(x − di) }

    qj,l(x) = qj(x) − qj,y(x)

    qy(x) = Σ_{j=1}^N qj,y(x)

    ql(x) = Σ_{j=1}^N qj,l(x)

    q(x) = Σ_{j=1}^N qj(x) = qy(x) + ql(x)

The new absolute state probabilities are obtained by normalizing all the previous state probabilities 0, 1, . . . , (x − 1) and the new relative state probability of state x, dividing by 1 + q(x). Initialization poses no problem: for x ≤ n we let qj(x) = qj,y(x), as no customers lack capacity below saturation.

Proof of recursion

A simple draft proof of the above equation for qj(x) is as follows. We rewrite the formula as:

    qj(x) · min{x, n} = (dj/x) (dj Aj) q(x − dj) + Σ_{i=1}^N ((x − di)/x) (di Ai) qj(x − di)

The left-hand side is the flow down from state x due to departures (the service rate is min{x, n}). The first term on the right-hand side is the new contribution to qj(x) due to the arrival of a new type-j call. The arrival rate is with our definitions dj Aj (choosing μj = 1). A new call of type j adds dj slots, so that the ratio of type-j slots in state x is dj/x. The slots of type j already present when a call arrives are accounted for by the second term: the type-j slots already existing in state x − di are given by qj(x − di). If a call of type i arrives, then these x − di slots are transferred to the new state x, so that the ratio of the old type-j slots in state x becomes (x − di)/x.
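Summing the balance equation over j, the cross terms collapse (since Σj qj(x − di) = q(x − di)), and the global probabilities satisfy the one-dimensional recursion min{x, n} q(x) = Σj dj Aj q(x − dj), a delay-system analogue of the Kaufman–Roberts recursion for loss systems. A sketch of this aggregate recursion (the truncation point and names are mine):

```python
from math import factorial

def global_probs(d, A, n, x_max):
    """q(x) from min(x, n) q(x) = sum_j d_j A_j q(x - d_j), q(0) = 1;
    stable when sum_j d_j A_j < n."""
    q = [1.0] + [0.0] * x_max
    for x in range(1, x_max + 1):
        q[x] = sum(dj * Aj * q[x - dj]
                   for dj, Aj in zip(d, A) if x >= dj) / min(x, n)
    s = sum(q)
    return [v / s for v in q]

# With all d_j = 1 the result must coincide with M/M/n (A = 1.8, n = 3):
p = global_probs([1, 1], [1.0, 0.8], 3, 80)
rel = [1.8**x / factorial(x) if x <= 3 else
       1.8**x / (factorial(3) * 3**(x - 3)) for x in range(81)]
s = sum(rel)
assert all(abs(p[x] - rel[x] / s) < 1e-9 for x in range(81))

# A genuine multi-rate case, d = (1, 2), sum_j d_j A_j = 1.3 < n = 3:
p = global_probs([1, 2], [0.5, 0.4], 3, 400)
assert abs(sum(p) - 1.0) < 1e-9
```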

Performance measures

The carried traffic in state x for type j is yj(x) = x qj,y(x), and the total carried traffic of type j is:

    yj = Σ_{i=0}^{n+k} yj(i) .

The queue length in state x for type j is lj(x) = x qj,l(x), and the total queue length of traffic type j is:

    lj = Σ_{i=0}^{n+k} lj(i) .

(Here n + k denotes the truncation point of the state space.) When we are in state x, the mean number of channels serving type-j calls is:

    nj,y(x) = x pj,y(x)/p(x) ,

and the mean queue length of type-j calls measured in [channels] is:

    nj,l(x) = x pj,l(x)/p(x) .

Of course we have:

    Σ_{j=1}^N {nj,y(x) + nj,l(x)} = x .

Figure 11.7: State transition diagram for a reversible multidimensional system with n servers and a finite number of sources.

11.4 Finite source models

From the state transition diagram it is obvious that the above results can be generalized to services with a finite number of sources, as the reduction factors only depend on the bandwidth demand. In Fig. 11.7 we show the state transition diagram for a system with two finite-source traffic streams. In Chap. 12 we consider closed queueing networks, where the nodes are the queueing models described in this chapter. There we include a finite number of users in each chain by truncating the Poisson case, not by the finite-source case as in Fig. 11.7. For Engset traffic, where stream j has Sj sources and the arrival rate of an idle source of type j is γj, we get a good approximation if we replace Aj by (assuming μj = 1):

    (Sj − {nj,y(x − dj) + nj,l(x − dj)}) γj .

It remains to be investigated whether this is exact.


The initialization values are pj(x) = 0 for x < 1. The state probabilities should be normalized in each step, and the state space may be truncated.


Chapter 12 Queueing networks

Many systems behave in such a way that a job obtains service from several successive nodes, i.e. once it has obtained service at one node, it goes on to the next node. The total service demand is composed of service demands at several nodes. Hence, the system is a network of queues, a queueing network, where each individual queue is called a node. Examples of queueing networks are telecommunication systems, computer systems, packet switching networks, and Flexible Manufacturing Systems (FMS). The terms job, customer, source, message and others are used synonymously. In queueing networks we define the queue length in a node as the total number of jobs in the node, including delayed and served jobs. In the same way we define the waiting time as the total sojourn time, including both delay and service time. This is because the nodes in general operate as generalized processor sharing nodes, and not as classical non-sharing queueing systems (cf. Chap. 11). The aim of this chapter is to introduce the basic theory of queueing networks, illustrated by applications. Usually, the theory is considered rather complicated, which is mainly due to the large number of parameters. In this chapter we shall give a simple introduction to general analytical queueing network models based on product forms. We also describe the convolution algorithm and the MVA algorithm, illustrating the theory with examples. The theory of queueing networks is similar to the theory of multi-dimensional loss systems (Chap. 7). In Chap. 8 we considered multi-dimensional loss systems, whereas in this chapter we are looking at networks of queueing systems.


12.1 Introduction to queueing networks

Queueing networks are classified as closed and open queueing networks. In closed queueing networks the number of customers is fixed, whereas in open queueing networks the number of customers varies. Erlang's classical waiting system, M/M/n, is an example of an open queueing network with one node, whereas Palm's machine/repair model with S terminals is a closed network with two nodes. If there is more than one type of customers, a network can be a mixed open and closed network. Since the departure process from one node is the arrival process at another node, we shall pay special attention to the departure process, in particular to when it can be modeled as a Poisson process. This was analyzed in Chap. 11, and we review the results in the section on symmetric queueing systems (Sec. 12.2).

The state of a queueing network is defined as the simultaneous distribution of the number of customers in each node. If K denotes the total number of nodes, then the state is described by a vector p(x1, x2, . . . , xK), where xk is the number of customers in node k (k = 1, 2, . . . , K). Frequently, the state space is very large and it is difficult to calculate the state probabilities by solving the node balance equations. If every node is a reversible (symmetric) queueing system, for example in a Jackson network (Sec. 12.3), then we have product form. The state probabilities of networks with product form can be aggregated, and detailed performance measures can be obtained by using the convolution algorithm (Sec. 12.5.1) or the MVA algorithm (Sec. 12.5.2). Jackson networks can be generalized to BCMP networks (Sec. 12.6), where there are N types of customers. Customers of one specific type all belong to a so-called chain. Fig. 12.1 illustrates an example of a queueing network with 4 chains. When the number of chains increases, the state space increases correspondingly, and only systems with a small number of chains or jobs can be calculated exactly.

In the case of a multi-chain network, the state of each node becomes multi-dimensional (Chap. 11). Within a node we do not have product form between the chains, but the product form between nodes is maintained, and the convolution algorithm (Sec. 12.5.1) and the MVA algorithm (Sec. 12.5.2) are applicable. A number of approximate algorithms for large networks have been published in the literature.

12.2 Symmetric (reversible) queueing systems

In order to analyze queueing systems, it is important to know when the departure process of a queueing system is a Poisson process. The multi-service reversible queueing models dealt with in Chap. 11 all have this property, and the state probabilities are all given by the state probabilities of M/M/n with special cases for n = 1 and n = . We summarize the state probabilities for one service obtained in Chap. 9:


Figure 12.1: An example of a queueing network with four open chains.

1. M/M/n. This is Burke's theorem (Burke, 1956 [13]), which states that the departure process of an M/M/n system is a Poisson process. The state probabilities are given by (9.2) or (11.38):

    p(x) = p(0) A^x/x! ,               0 ≤ x ≤ n,
    p(x) = p(0) A^x/(n! n^{x−n}) ,     x ≥ n,    (12.1)

where A = λ/μ, and p(0) is given by (9.4).

2. IS = M/G/∞. IS is the abbreviation of Infinite Server, and this corresponds to the Poisson case (Sec. 4.2). From Sec. 3.6 we know that a random translation of the events of a Poisson process results in a new Poisson process. This model is denoted as a system with the queueing discipline IS, Infinite number of Servers. The state probabilities are given by the Poisson distribution (4.6):

    p(x) = p(0) A^x/x! ,    x = 0, 1, 2, . . . ,    (12.3)

where p(0) = e^{−A}.

3. M/G/1–PS. This is a single-server queueing system with a general service time distribution and processor sharing. The state probabilities are the same as for M/M/1 (10.79) (n = 1 in (12.1)):

    p(x) = p(0) A^x ,    x = 0, 1, 2, . . . ,    (12.4)

where p(0) = 1 − A.

4. M/G/n–GPS. This multi-server queueing system has the same state probabilities as M/M/n above (12.1).


5. M/G/1–LCFS-PR (PR = Preemptive Resume). This system also has the same state probabilities as M/M/1 (12.4), with p(0) = 1 − A.

Above we have expressed all state probabilities by state zero, as later we only need the relative state probabilities. Only these four queueing disciplines are easy to deal with in the theory of queueing networks. But for example also for Erlang's loss system, the departure process will be a Poisson process if we include the blocked customers. The above-mentioned reversible queueing systems are also called symmetric queueing systems, as they are symmetric in time. Both the arrival process and the departure process are Poisson processes, and the systems are reversible (Kelly, 1979 [68]). The process is called reversible because it looks the same way when we reverse the time (when a movie is reversible, it looks the same whether we play it forward or backward). Apart from M/M/n, these symmetric queueing systems have the common feature that a customer is served immediately upon arrival.
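The M/M/n probabilities (12.1), with p(0) obtained from (9.4) by summing the geometric tail in closed form, can be sketched as follows (the function name is mine):

```python
from math import factorial

def mmn_state_probs(A, n, x_max):
    """State probabilities (12.1) of M/M/n, valid for A = lambda/mu < n."""
    # 1/p(0) = sum_{x<n} A^x/x! + (A^n/n!) * n/(n - A)   (geometric tail)
    inv_p0 = sum(A**x / factorial(x) for x in range(n)) \
        + A**n / factorial(n) * n / (n - A)
    return [(A**x / factorial(x) if x <= n else
             A**x / (factorial(n) * n**(x - n))) / inv_p0
            for x in range(x_max + 1)]

p = mmn_state_probs(A=1.5, n=3, x_max=200)
assert abs(sum(p) - 1.0) < 1e-9     # the tail beyond 200 is negligible
```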
Example 12.2.1: M/M/1 departure process
At first it may seem illogical that the departure process of M/M/1 with arrival rate λ and service rate μ is a Poisson process with rate λ. During busy periods (probability A = λ/μ) the departure process is a Poisson process with rate μ. When the system becomes idle (probability 1 − A) the inter-departure time becomes an inhomogeneous Erlang-2 distribution with rate λ in the first phase and rate μ in the second. In a phase diagram we may take the time intervals in reverse order, so it looks like Fig. 2.11. From the decomposition principle of Cox distributions it becomes obvious. A similar decomposition can be worked out for M/M/n. □
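Burke's theorem can also be illustrated by simulation: the empirical inter-departure times of a stable M/M/1 queue have mean ≈ 1/λ and coefficient of variation ≈ 1, as for a Poisson process. A sketch (a statistical illustration with a fixed seed, not a proof; names are mine):

```python
import random

def mm1_departures(lam, mu, n_cust, seed=1):
    """Departure epochs of an M/M/1 FIFO queue (Lindley-type recursion)."""
    rng = random.Random(seed)
    t_arr, last_dep = 0.0, 0.0
    dep = []
    for _ in range(n_cust):
        t_arr += rng.expovariate(lam)            # Poisson arrivals
        start = max(t_arr, last_dep)             # wait for the server
        last_dep = start + rng.expovariate(mu)   # exponential service
        dep.append(last_dep)
    return dep

lam, mu = 1.0, 1.6                               # A = 0.625 < 1: stable
dep = mm1_departures(lam, mu, 20000)
gaps = [b - a for a, b in zip(dep, dep[1:])]
mean = sum(gaps) / len(gaps)
var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
assert abs(mean - 1 / lam) < 0.05                # departure rate is lam
assert abs(var ** 0.5 / mean - 1.0) < 0.1        # CV near 1: exponential
```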

12.3 Open networks: single chain

In 1957, J.R. Jackson, who was working with production planning and manufacturing systems, published a paper with a theorem now called Jackson's theorem (1957 [52]). He showed that a queueing network of M/M/n nodes has product form. Knowing Burke's theorem (1956 [13]), Jackson's result is obvious. Historically, the first paper on queueing systems in series was by another Jackson, R.R.P. Jackson (1954 [51]).

Theorem 12.1 Jackson's theorem: Consider an open queueing network with K nodes satisfying the following conditions:

Structure: each node is an M/M/n queueing system. Node k has $n_k$ servers, and the average service time is $1/\mu_k$.

Traffic: jobs arrive from outside the system to node k according to a Poisson process with intensity $\alpha_k$. Customers may also arrive to node k from other nodes.


Strategy: a job which has just finished its service at node j is immediately transferred to node k with probability $p_{jk}$, or leaves the network with probability:

$$1 - \sum_{k=1}^{K} p_{jk}.$$

A customer may visit the same node several times if $p_{kk} > 0$.

Flow balance equations: the total average arrival intensity $\lambda_k$ to node k is obtained by solving the flow balance equations:

$$\lambda_k = \alpha_k + \sum_{j=1}^{K} \lambda_j\, p_{jk}. \qquad (12.5)$$
Let $p(x_1, x_2, \ldots, x_K)$ denote the state space probabilities under the assumption of statistical equilibrium, i.e. the probability that there are $x_k$ customers at node k. Furthermore, we assume:

$$A_k = \frac{\lambda_k}{\mu_k} < n_k. \qquad (12.6)$$

Then the state space probabilities are given by the product form:

$$p(x_1, x_2, \ldots, x_K) = \prod_{k=1}^{K} p_k(x_k), \qquad (12.7)$$

where for node k, $p_k(x_k)$ is the state probabilities of Erlang's M/M/n queueing system with arrival rate $\lambda_k$ and service rate $\mu_k$. The offered traffic $\lambda_k/\mu_k$ to node k must be less than the capacity $n_k$ of the node for statistical equilibrium (12.6) to exist.

The key point of Jackson's theorem is that each node can be considered independently of all other nodes, and that the state probabilities are as for Erlang's delay system (Sec. 12.2). This simplifies the calculation of the state space probabilities significantly. The proof of the theorem was given by Jackson in 1957 by showing that the solution satisfies the node balance equations under the assumption of statistical equilibrium.

Jackson's first model thus only deals with open queueing networks. In Jackson's second model (Jackson, 1963 [53]) the arrival intensity from outside:

$$\lambda = \sum_{j=1}^{K} \alpha_j \qquad (12.8)$$

may depend on the current number of customers in the network. Furthermore, $\mu_k$ can depend on the number of customers at node k. In this way, we can model queueing networks which are either closed, open, or mixed. In all three cases, the state probabilities have product form. The model by Gordon & Newell (1967 [36]), which is often cited in the literature, can be treated as a special case of Jackson's second model.

Figure 12.2: Open queueing network consisting of two M/M/1 systems in series.

Example 12.3.1: Two M/M/1 nodes in series
Fig. 12.2 shows an open queueing network of two M/M/1 nodes in series. The corresponding state transition diagram is given in Fig. 12.3. Clearly, the state transition diagram is not reversible: between two neighbouring states there is only flow in one direction (cf. Sec. 7.2). If we solve the balance equations to obtain the state probabilities, we find that the solution can be written in product form:

$$p(x_1, x_2) = p_1(x_1) \cdot p_2(x_2) = (1 - A_1)\, A_1^{x_1} \cdot (1 - A_2)\, A_2^{x_2},$$

where $A_1 = \lambda/\mu_1$ and $A_2 = \lambda/\mu_2$. Here $p_1(x_1)$ is the state probabilities of an M/M/1 system with offered traffic $A_1$, and $p_2(x_2)$ is the state probabilities of an M/M/1 system with offered traffic $A_2$. The state probabilities of Fig. 12.3 are identical to those of Fig. 12.4, which has local balance and product form. Thus it is possible to find a system which is reversible and has the same state probabilities as the non-reversible system. There is regional but not local balance in Fig. 12.3: if we consider a square of four states, then to the outside world there will be balance, but internally there will be circulation via the diagonal state shift. □
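The product form of the example can also be checked numerically. The following sketch (rates chosen arbitrarily) verifies that the product-form solution satisfies the global balance equation of each interior state of the diagram in Fig. 12.3: flow out of state $(x_1, x_2)$ equals flow into it.

```python
# Arbitrarily chosen rates with A1 < 1 and A2 < 1:
lam, mu1, mu2 = 1.0, 2.0, 4.0
A1, A2 = lam / mu1, lam / mu2

def p(x1, x2):
    """Product-form state probability of Example 12.3.1."""
    return (1 - A1) * A1**x1 * (1 - A2) * A2**x2

for x1 in range(1, 5):
    for x2 in range(1, 5):
        out_flow = (lam + mu1 + mu2) * p(x1, x2)
        in_flow = (lam * p(x1 - 1, x2)        # arrival to node 1
                   + mu1 * p(x1 + 1, x2 - 1)  # service completion at node 1
                   + mu2 * p(x1, x2 + 1))     # departure from node 2
        assert abs(out_flow - in_flow) < 1e-12
```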

In queueing networks customers will often loop, so that a customer may visit the same node several times. If we have a queueing network with looping customers, where the nodes are M/M/n systems, then the arrival processes to the individual nodes are no longer Poisson processes. Nevertheless, we may calculate the state probabilities as if the individual nodes were independent M/M/n systems. This is explained in the following example.

Example 12.3.2: Networks with feedback
Feedback is, for example, introduced in Example 12.3.1 by letting a customer which has just ended its service at node 2 return to node 1 with probability $p_{21}$ (Fig. 12.2). With probability $1 - p_{21}$ the customer leaves the system. The flow balance equations (12.5) give the total arrival intensity to each node, and $p_{21}$ must be chosen such that both $\lambda_1/\mu_1$ and $\lambda_2/\mu_2$ are less than one. Letting $\lambda \to 0$ and $p_{21} \to 1$, we notice that the total arrival process to node 1 is not a Poisson process:


only rarely will a new job arrive, but once it has entered the system it will circulate very fast many times. The number of times it loops back is geometrically distributed, and the inter-arrival time is the sum of the two service times. I.e. when there is one (or more) customers in the system, the arrival rate to each node will be relatively high, whereas the rate will be very low if there are no customers in the system: the arrival process is bursty. The situation is similar to the decomposition of an exponential distribution into a weighted sum of Erlang-k distributions with geometric weight factors (Sec. 2.3.3). Instead of considering a single exponential inter-arrival distribution, we can decompose it into infinitely many phases (Fig. 2.12) and consider each phase as an arrival. Hence, the arrival process has been transformed from a Poisson process to a process with bursty arrivals. The total service time will be exponentially distributed with rate $\mu_1 (1 - p_{21})$, respectively $\mu_2 (1 - p_{21})$, but the total service time is split into phases which are interleaved by waiting times and service at the other node. □

Figure 12.3: State transition diagram for the open queueing network shown in Fig. 12.2. The diagram is nonreversible.

12.3.1 Kleinrock's independence assumption

Above we assume that a job samples a new service time with rate $\mu_i$ when the job arrives at node i, independent of its service times at other nodes. If we consider a real-life data network, the packets will have the same length (for example in bytes), and therefore the same service time on all links and nodes of equal speed. The theory of queueing networks has to assume that a job samples a new service time in every node; this is a necessary assumption for the product form. This assumption was first investigated by Kleinrock (1964 [73]), and many analyses show that it turns out to be a good approximation to real systems.


Figure 12.4: State transition diagram for two independent M/M/1 queueing systems with identical arrival intensity, but individual mean service times. The diagram is reversible.

12.4 Open networks: multiple chains

Dealing with open systems is easy. First we solve the flow balance equations (12.5) individually for each chain and obtain the arrival intensity $\lambda_{j,k}$ of chain j to node k. The state probabilities for a node are then given by (11.37). We still have product form between the nodes, i.e. the nodes are independent, and we can easily calculate any state probability explicitly.

12.5 Closed networks: single chain

Dealing with closed queueing networks is much more complicated. We are interested in the state probabilities defined by $p(x_1, x_2, \ldots, x_k, \ldots, x_K)$, where $x_k$ is the number of customers at node k ($1 \le k \le K$). With a fixed number of jobs we do not know the true arrival rates to the nodes. If we choose (or know) the arrival rate to a single node, then by solving the flow balance equations we find the relative arrival rates to all other nodes, and thus the relative traffic to each node. To find the true normalized arrival rates and traffic, we have to find the normalization constant for the whole network, which means that we have to add all state probabilities.



12.5.1 Convolution algorithm

The number of states increases rapidly when the number of nodes and/or customers increases. In general, it is only possible to deal with small systems. The complexity is similar to that of multi-dimensional loss systems (Chapter 7). We will now show how the convolution algorithm can be applied to closed queueing networks; it corresponds to the convolution algorithm for loss systems (Chapter 7). We consider a queueing network with K nodes and a single chain with S jobs. We assume that the queueing system at each node is symmetric (Sec. 12.2). The algorithm has three steps:

Step 1: Flow balance equations. Let the arrival intensity to an arbitrarily chosen reference node k be equal to some value $\lambda_k$. By solving the flow balance equations (12.5) for the closed network we obtain the relative arrival rates $\lambda_k$ ($1 \le k \le K$) to all nodes. We then obtain the relative offered traffic values $\alpha_k = \lambda_k/\mu_k$. Often we choose the arrival intensity of the reference node so that the offered traffic to this node becomes one.

Step 2: State probabilities. Consider each node as if it were isolated and offered the random (PCT-I) traffic $\alpha_k$ ($1 \le k \le K$). Depending on the actual symmetric queueing system at node k, we find the relative state probabilities $q_k(x_k)$ at node k. The state space is limited by the total number of customers S: $0 \le x_k \le S$.

Step 3: Convolution. Convolve the state probabilities of the nodes recursively. For example, for the first two nodes we have:

$$q_{12} = q_1 * q_2, \qquad (12.9)$$

where

$$q_{12}(x) = \sum_{i=0}^{x} q_1(i)\, q_2(x-i), \qquad x = 0, 1, \ldots, S.$$

By convolution we reduce the number of nodes to two: the node we are interested in, and all other nodes aggregated into one node. When all nodes except node k have been convolved, we have the final convolution:

$$q_{1,2,\ldots,K} = q_{1,2,\ldots,k-1,k+1,\ldots,K} * q_k. \qquad (12.10)$$

During the last convolution we convolve two nodes, the aggregated node consisting of all nodes except node k, and node k itself, and we obtain the detailed performance measures of node k. By changing the order of convolution of the nodes we can obtain the performance measures of all other nodes. Since the total number of customers is fixed (S), only state $q_{1,2,\ldots,K}(S)$ exists in the total aggregated system, and therefore this macro-state must have probability one. We can then normalize all micro-state probabilities.
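Step 3 is a plain discrete convolution truncated at S jobs. A minimal sketch (the helper name is our own):

```python
def convolve(q_a, q_b, S):
    """Convolve two relative state-probability vectors (indexed 0..S)
    and truncate the result at S jobs, cf. (12.9)."""
    return [sum(q_a[i] * q_b[x - i] for i in range(x + 1))
            for x in range(S + 1)]

# Example: an IS node with alpha = 1 convolved with a single-server node
# with alpha = 1, truncated at S = 2 jobs:
q_is_node = [1, 1, 0.5]     # 1, alpha, alpha**2/2!
q_mm1_node = [1, 1, 1]      # 1, alpha, alpha**2
assert convolve(q_is_node, q_mm1_node, 2) == [1, 2, 2.5]
```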



Figure 12.5: The machine/repair model as a closed queueing network with two nodes. The terminals correspond to one IS-node, because the tasks always find an idle terminal, whereas the CPU corresponds to an M/M/1 node.

Example 12.5.1: Palm's machine/repair model
We consider the machine/repair model of Palm introduced in Sec. 9.6 as a closed queueing network (Fig. 12.5). There are S jobs (terminals) and one server (CPU). The mean thinking time is $1/\mu_1$ and the mean service time at the CPU is $1/\mu_2$. In queueing network terminology there are two nodes: node 1 is the terminals, i.e. an M/G/$\infty$ system (actually it is an M/G/S system, but since the number of customers is limited to S it corresponds to an M/G/$\infty$ system), and node 2 is the CPU, i.e. an M/M/1 system with service intensity $\mu_2$.

We choose the relative arrival rates to the two nodes equal: $\lambda_2 = \lambda_1 = \lambda$. The relative loads at node 1 and node 2 are then $\alpha_1 = \lambda/\mu_1$ and $\alpha_2 = \lambda/\mu_2$, respectively. We consider each node in isolation and obtain the state probabilities of each node, $q_1(i)$ and $q_2(j)$, as if the arrival processes were Poisson processes. By convolving $q_1(x_1)$ and $q_2(x_2)$ we get $q_{12}(x)$, $0 \le x \le S$, as shown in Table 12.1. The last term with S customers (an unnormalized probability) $q_{12}(S)$ is made up of the terms:

$$q_{12}(S) = \sum_{i=0}^{S} q_1(i)\, q_2(S-i) = \alpha_2^S + \alpha_1\,\alpha_2^{S-1} + \frac{\alpha_1^2}{2!}\,\alpha_2^{S-2} + \cdots + \frac{\alpha_1^i}{i!}\,\alpha_2^{S-i} + \cdots + \frac{\alpha_1^S}{S!}.$$
State x | Node 1: $q_1(x_1)$ | Node 2: $q_2(x_2)$ | Network: $q_{12} = q_1 * q_2$
--------+--------------------+--------------------+--------------------------------------------------
   0    | 1                  | 1                  | 1
   1    | $\alpha_1$         | $\alpha_2$         | $\alpha_2 + \alpha_1$
   2    | $\alpha_1^2/2!$    | $\alpha_2^2$       | $\alpha_2^2 + \alpha_1 \alpha_2 + \alpha_1^2/2!$
  ...   | ...                | ...                | ...
   x    | $\alpha_1^x/x!$    | $\alpha_2^x$       | ...
  ...   | ...                | ...                | ...
   S    | $\alpha_1^S/S!$    | $\alpha_2^S$       | $q_{12}(S)$

Table 12.1: The convolution algorithm applied to Palm's machine/repair model. Node 1 is an IS-system, and node 2 is an M/M/1-system (Example 12.5.1).
We know that this total has probability one, and from the individual contributions we identify the state probabilities of the two nodes. A simple rearrangement yields:

$$q_{12}(S) = \alpha_2^S \left\{ 1 + \gamma + \frac{\gamma^2}{2!} + \cdots + \frac{\gamma^S}{S!} \right\},$$

where

$$\gamma = \frac{\alpha_1}{\alpha_2} = \frac{\mu_2}{\mu_1}.$$

The probability that all terminals are thinking is identified as the last term $q_1(S)\,q_2(0)$ (S customers at node 1, zero customers at node 2) normalized by the sum $q_{12}(S)$:

$$p\{x_1 = S,\; x_2 = 0\} = \frac{\dfrac{\gamma^S}{S!}}{1 + \gamma + \dfrac{\gamma^2}{2!} + \dfrac{\gamma^3}{3!} + \cdots + \dfrac{\gamma^S}{S!}} = E_{1,S}(\gamma),$$

which is Erlang's B-formula. Thus the result is in agreement with the result obtained in Sec. 9.6. We notice that $\alpha_2$ appears with the same power in all terms of $q_{12}(S)$ and thus corresponds to a constant which disappears when we normalize. □
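The identification of Erlang's B-formula can be verified numerically. In the sketch below (helper names and the value of $\gamma$ are our own choices) we set $\alpha_2 = 1$, so the relative loads become $\gamma$ and 1, convolve the two nodes, and compare the normalized term $q_1(S)\,q_2(0)$ with a direct evaluation of $E_{1,S}(\gamma)$:

```python
from math import factorial

def erlang_b(A, n):
    """Erlang's B-formula, direct evaluation."""
    return (A**n / factorial(n)) / sum(A**i / factorial(i) for i in range(n + 1))

gamma, S = 1.5, 4
q1 = [gamma**i / factorial(i) for i in range(S + 1)]   # IS node (terminals)
q2 = [1.0] * (S + 1)                                   # M/M/1 node, alpha2 = 1
q12_S = sum(q1[i] * q2[S - i] for i in range(S + 1))   # q12(S)
p_all_thinking = q1[S] * q2[0] / q12_S
assert abs(p_all_thinking - erlang_b(gamma, S)) < 1e-9
```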
Example 12.5.2: Central server system
In 1971 J. P. Buzen introduced the central server model illustrated in Fig. 12.6 to model a multi-programmed computer system with one CPU and a number of input/output channels (peripheral units). The degree of multi-programming S describes the number of jobs processed simultaneously. The number of peripheral units is denoted by K − 1 as shown in Fig. 12.6, which also shows the transition probabilities.

Typically a job requires service hundreds of times, either by the central unit or by one of the peripheral units. We assume that when a job is finished it is immediately replaced by a new job, hence S is constant. The service times are all exponentially distributed with intensity $\mu_i$ ($i = 1, \ldots, K$).
Figure 12.6: Central server queueing system consisting of one central server (CPU) and (K−1) I/O channels. A fixed number of tasks S are circulating in the system.
Buzen drew up a scheme to evaluate this system. The scheme is a special case of the convolution algorithm. Let us illustrate it by a case with S = 4 customers and K = 3 nodes, and:

$$\mu_1 = \frac{1}{28}, \qquad \mu_2 = \frac{1}{40}, \qquad \mu_3 = \frac{1}{280},$$

$$p_{11} = 0.1, \qquad p_{12} = 0.7, \qquad p_{13} = 0.2.$$

The relative loads become:

$$\alpha_1 = 1, \qquad \alpha_2 = 1, \qquad \alpha_3 = 2.$$

If we apply the convolution algorithm we obtain the results shown in Table 12.2. The term $q_{123}(4)$ is made up of:

$$q_{123}(4) = 1 \cdot 16 + 2 \cdot 8 + 3 \cdot 4 + 4 \cdot 2 + 5 \cdot 1 = 57.$$
State i | Node 1: $q_1(i)$ | Node 2: $q_2(i)$ | Node 1*2: $q_{12} = q_1 * q_2$ | Node 3: $q_3(i)$ | Network: $q_{123} = (q_1 * q_2) * q_3$
--------+------------------+------------------+--------------------------------+------------------+----------------------------------------
   0    |        1         |        1         |               1                |        1         |        1
   1    |        1         |        1         |               2                |        2         |        4
   2    |        1         |        1         |               3                |        4         |       11
   3    |        1         |        1         |               4                |        8         |       26
   4    |        1         |        1         |               5                |       16         |       57

Table 12.2: The convolution algorithm applied to the central server system.
Node 3 serves customers in all states except those with relative mass $q_3(0) \cdot q_{12}(4) = 5$. The utilization of node 3 is therefore $a_3 = 52/57$. Based on the relative loads we now obtain the exact loads:

$$a_1 = \frac{26}{57}, \qquad a_2 = \frac{26}{57}, \qquad a_3 = \frac{52}{57}.$$

The average number of customers at node 3 is:

$$L_3 = \{1 \cdot (4 \cdot 2) + 2 \cdot (3 \cdot 4) + 3 \cdot (2 \cdot 8) + 4 \cdot (1 \cdot 16)\}/57 = \frac{144}{57}.$$

By changing the order of convolution we get the average queue lengths $L_1$ and $L_2$, and end up with:

$$L_1 = \frac{42}{57}, \qquad L_2 = \frac{42}{57}, \qquad L_3 = \frac{144}{57}.$$

The sum of all average queue lengths is of course equal to the number of customers S. Notice that in queueing networks we define the queue length as the total number of customers in the node, including customers being served. From the utilization and mean service time we find the average number of customers finishing service per time unit at each node:

$$\lambda_1 = \frac{26}{57} \cdot \frac{1}{28}, \qquad \lambda_2 = \frac{26}{57} \cdot \frac{1}{40}, \qquad \lambda_3 = \frac{52}{57} \cdot \frac{1}{280}.$$

Applying Little's result we finally obtain the mean sojourn times $W_k = L_k/\lambda_k$:

$$W_1 = 45.23, \qquad W_2 = 64.62, \qquad W_3 = 775.38. \quad \square$$
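The numbers of this example are easy to reproduce programmatically. The sketch below (variable names are our own) rebuilds Table 12.2 and the derived measures:

```python
S = 4
alpha = [1.0, 1.0, 2.0]                 # relative loads of the three nodes
# Single-server nodes have relative state probabilities q(i) = alpha**i:
q = [[a**i for i in range(S + 1)] for a in alpha]

def conv(qa, qb):
    return [sum(qa[i] * qb[x - i] for i in range(x + 1)) for x in range(S + 1)]

q12 = conv(q[0], q[1])
q123 = conv(q12, q[2])
assert q12 == [1, 2, 3, 4, 5] and q123 == [1, 4, 11, 26, 57]

# Node 3 is idle only in the states with relative mass q3(0)*q12(4) = 5:
a3 = (q123[S] - q[2][0] * q12[S]) / q123[S]
assert abs(a3 - 52 / 57) < 1e-12

# Mean number of customers at node 3:
L3 = sum(j * q[2][j] * q12[S - j] for j in range(S + 1)) / q123[S]
assert abs(L3 - 144 / 57) < 1e-12
```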


12.5.2 MVA-algorithm

The Mean Value Algorithm (MVA) is an algorithm for calculating performance measures of queueing networks where all nodes are single-server systems. It combines in an elegant way two main results in queueing theory: the arrival theorem (5.29) and Little's law (3.20). The algorithm was first published by Lavenberg & Reiser (1980 [80]).

We consider a queueing network with K nodes and S customers (all belonging to a single chain). We choose some value of the arrival rate to some node, for example $\lambda_1 = 1$ for node 1. From the flow balance equations we find the relative arrival rates $\lambda_k$ to all other nodes. The relative load of node k is $\alpha_k = \lambda_k\, s_k$ ($k = 1, 2, \ldots, K$), where $s_k$ is the mean service time at node k.

The algorithm is recursive in the number of customers: a network with S + 1 customers is evaluated from a network with S customers. Let $L_k(S)$ denote the average number of customers at node k when there are S customers in the network. Obviously:

$$\sum_{k=1}^{K} L_k(S) = S. \qquad (12.11)$$

The recursion has two steps:

Step 1: Arrival theorem. Increase the number of customers from S to S + 1. According to the arrival theorem, the (S+1)-th customer will see the system as a system with S customers in statistical equilibrium. Hence, the average sojourn time (waiting time + service time) at node k is:

For M/M/1, M/G/1-PS, and M/G/1-LCFS-PR:

$$W_k(S+1) = \{L_k(S) + 1\} \cdot s_k.$$

For M/G/$\infty$:

$$W_k(S+1) = s_k,$$

where $s_k$ is the average service time at node k. As we only calculate mean waiting times, we may assume the FCFS queueing discipline.

Step 2: Little's law. We apply Little's law ($L = \lambda W$), which is valid for all systems in statistical equilibrium. For node k we have:

$$L_k(S+1) = c\, \lambda_k\, W_k(S+1),$$

where $\lambda_k$ is the relative arrival rate to node k. The normalization constant c is obtained from the total number of customers:

$$\sum_{k=1}^{K} L_k(S+1) = S + 1. \qquad (12.12)$$

By these two steps we have performed the recursion from S to S + 1 customers. For S = 1 there is no waiting time in the system and $W_k(1)$ equals the average service time $s_k$. Nodes with a limited number of servers (n > 1) can only be dealt with approximately by the MVA-algorithm, but are easy to deal with by the convolution algorithm.
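The two steps translate directly into code. The sketch below (our own function) assumes all nodes are single-server; an IS node would instead use $W_k = s_k$ in Step 1. It is checked against the central server model of Example 12.5.2:

```python
def mva(lam, s, S):
    """Mean Value Algorithm for a closed single-chain network of K
    single-server nodes. lam: relative arrival rates, s: mean service
    times, S: number of customers. Returns (L, W) for S customers."""
    K = len(lam)
    L = [0.0] * K
    W = list(s)
    for n in range(1, S + 1):
        W = [s[k] * (1.0 + L[k]) for k in range(K)]      # Step 1: arrival theorem
        c = n / sum(lam[k] * W[k] for k in range(K))     # normalization (12.12)
        L = [c * lam[k] * W[k] for k in range(K)]        # Step 2: Little's law
    return L, W

# Central server model of Example 12.5.2:
L, W = mva([1.0, 0.7, 0.2], [28.0, 40.0, 280.0], 4)
assert abs(sum(L) - 4.0) < 1e-12
assert abs(W[0] - 45.23) < 0.01 and abs(W[2] - 775.38) < 0.01
```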

Example 12.5.3: Central server model
We apply the MVA-algorithm to the central server model (Example 12.5.2). The relative arrival rates are:

$$\lambda_1 = 1, \qquad \lambda_2 = 0.7, \qquad \lambda_3 = 0.2.$$

In each step $W_k(S) = s_k\,\{1 + L_k(S-1)\}$ and $L_k(S) = c\,\lambda_k\,W_k(S)$, where c normalizes $\sum_k L_k(S) = S$:

  S  |   W_1(S)   |   W_2(S)   |   W_3(S)    | L_1(S)  | L_2(S)  | L_3(S)
-----+------------+------------+-------------+---------+---------+--------
  1  |      28    |      40    |      280    | 0.25    | 0.25    | 0.50
  2  | 1.25·28    | 1.25·40    | 1.50·280    | 0.4545  | 0.4545  | 1.0909
  3  | 1.4545·28  | 1.4545·40  | 2.0909·280  | 0.6154  | 0.6154  | 1.7692
  4  | 1.6154·28  | 1.6154·40  | 2.7692·280  | 0.7368  | 0.7368  | 2.5263

Naturally, the result is identical to the one obtained with the convolution algorithm. The sojourn times at each node (using the original time unit) are:

$$W_1(4) = 1.6154 \cdot 28 = 45.23, \qquad W_2(4) = 1.6154 \cdot 40 = 64.62, \qquad W_3(4) = 2.7692 \cdot 280 = 775.38. \quad \square$$

Example 12.5.4: MVA-algorithm applied to the machine/repair model
We consider the machine/repair model with S sources, terminal thinking time A and CPU service time equal to one time unit. As mentioned in Sec. 9.6.2 this is equivalent to Erlang's loss system with S servers and offered traffic A. It is also a closed queueing network with two nodes and S customers in one chain. If we apply the MVA-algorithm to this system, we get the recursion formula for the Erlang-B formula (4.29). The relative arrival rates are identical, as a customer alternately visits node 1 and node 2: $\lambda_1 = \lambda_2 = 1$.

S = 1:

$$W_1(1) = A, \qquad W_2(1) = 1,$$
$$L_1(1) = c \cdot A = \frac{A}{1+A}, \qquad L_2(1) = c \cdot 1 = \frac{1}{1+A}.$$

S = 2:

$$W_1(2) = A, \qquad W_2(2) = 1 + \frac{1}{1+A},$$
$$L_1(2) = c\,A = \frac{A\,(1+A)}{1+A+\frac{A^2}{2!}}, \qquad L_2(2) = c\left(1 + \frac{1}{1+A}\right) = \frac{2+A}{1+A+\frac{A^2}{2!}}.$$

...

S = x:

$$W_1(x) = A, \qquad W_2(x) = 1 + L_2(x-1),$$
$$L_1(x) = c\,A, \qquad L_2(x) = c\,\{1 + L_2(x-1)\}.$$

We know that the queue length at the terminals (node 1) is equal to the carried traffic in the equivalent Erlang-B system, and that all other customers stay at the CPU (node 2). We thus have in general:

$$L_1(x) = A\,\{1 - E_x(A)\}, \qquad L_2(x) = x - A\,\{1 - E_x(A)\}.$$

From this we have the normalization constant $c = 1 - E_x(A)$, and we get for the (x+1)-th customer:

$$L_1(x+1) + L_2(x+1) = c\,A + c\,\{1 + L_2(x)\},$$

$$x + 1 = c\,A + c\,\{1 + x - A\,(1 - E_x)\},$$

and because we know $c = 1 - E_{x+1}$:

$$E_{x+1}(A) = \frac{A\,E_x(A)}{x + 1 + A\,E_x(A)}.$$

This is just the recursion formula for the Erlang-B formula. □
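The recursion derived in the example is also the standard numerically stable way of evaluating Erlang's B-formula; a small sketch (function name our own):

```python
from math import factorial

def erlang_b_rec(A, n):
    """Erlang's B-formula via the recursion E_{x+1} = A*E_x/(x+1+A*E_x), E_0 = 1."""
    E = 1.0
    for x in range(n):
        E = A * E / (x + 1 + A * E)
    return E

# Cross-check against the explicit formula E_n(A) = (A**n/n!) / sum_i A**i/i!:
A, n = 2.0, 5
direct = (A**n / factorial(n)) / sum(A**i / factorial(i) for i in range(n + 1))
assert abs(erlang_b_rec(A, n) - direct) < 1e-12
```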

12.6 BCMP multi-chain queueing networks

In 1975 the second model of Jackson was further generalised by Baskett, Chandy, Muntz and Palacios (1975 [4]). They showed that queueing networks with more than one type of customers also have product form, provided that:

a) Each node is a symmetric (reversible) queueing system (cf. Sec. 12.2: a Poisson arrival process yields a Poisson departure process).


b) The customers are classified into N chains. Each chain is characterized by its own mean service time $s^j$ and transition probabilities $p_{ik}^j$. A restriction applies if the queueing discipline at a node is a non-sharing M/M/n queueing system (including M/M/1): the average service time must be identical for all chains at that node.

BCMP networks can be evaluated with the multi-dimensional convolution algorithm and the multi-dimensional MVA algorithm. Mixed queueing networks (open & closed) are calculated by first calculating the traffic load in each node from the open chains. This traffic must be carried for the system to enter statistical equilibrium. The capacity of each node is reduced by this traffic, and the closed queueing network is calculated with the reduced capacity. So the main problem is to calculate closed networks. For this we have several algorithms, among which the most important are the convolution algorithm and the MVA (Mean Value Algorithm).

12.6.1 Convolution algorithm

The algorithm is essentially the same as in the single-chain case:

Step 1: Flow balance equations. Consider each chain as if it were alone in the network. Find the relative load at each node by solving the flow balance equations (12.5). For chain j the relative arrival intensity $\lambda_k^j$ to node k is obtained from (we use the upper index to denote the chain):

$$\lambda_k^j = \sum_{i=1}^{K} p_{ik}^j\, \lambda_i^j, \qquad j = 1, \ldots, N, \qquad (12.13)$$

where:

K = number of nodes,
N = number of chains,
$p_{ik}^j$ = the probability that a customer of chain j moves from node i to node k.

We choose an arbitrary node as reference node for each chain, e.g. node 1, i.e. $\lambda_1^j = 1$. The relative load at node k due to customers of chain j is then:

$$\alpha_k^j = \lambda_k^j\, s_k^j,$$

where $s_k^j$ is the mean service time at node k for customers of chain j. Notice that j is an index, not a power.


Step 2: State probabilities. Based on the relative loads found in Step 1, we obtain the multi-dimensional state probabilities for each node (Sec. 11.2.5). Each node is considered in isolation and we truncate the state space according to the number of customers in each chain. For example, for node k ($1 \le k \le K$):

$$p_k = p_k(x_1, x_2, \ldots, x_N), \qquad 0 \le x_j \le S_j, \quad j = 1, 2, \ldots, N,$$

where $S_j$ is the number of customers in chain j.

Step 3: Convolution. In order to find the state probabilities of the total network, the state probabilities of each node are convolved together as in the single-chain case. The only difference is that the convolution is multi-dimensional. When we perform the last convolution we obtain the performance measures of the last node. Again, by changing the order of the nodes, we can obtain the performance measures of all nodes.

The total number of states increases rapidly. For example, if chain j has $S_j$ customers, then the total number of states in each node becomes:

$$\prod_{j=1}^{N} (S_j + 1). \qquad (12.14)$$

The number of ways the customers can be distributed in a queueing network with K nodes and N chains with $S_j$ customers in chain j is:

$$C = \prod_{j=1}^{N} C(S_j, k_j), \qquad (12.15)$$

where $k_j$ ($1 \le k_j \le K$) is the number of nodes visited by chain j, and:

$$C(S_j, k_j) = \binom{S_j + k_j - 1}{k_j - 1} = \binom{S_j + k_j - 1}{S_j}. \qquad (12.16)$$

The algorithm is best illustrated with an example.

Example 12.6.1: Palm's machine/repair model with two types of customers
As seen in Example 12.5.1, this system can be modelled as a queueing network with two nodes. Node 1 corresponds to the terminals (machines) while node 2 is the CPU (repair man). Node 2 is a single-server system, whereas node 1 is modelled as an Infinite Server (IS) system. The numbers of customers in the chains are $S_1 = 2$ and $S_2 = 3$, and the mean service time at node k for chain j is $s_k^j$. The relative load of chain 1 is denoted by $\alpha_1$ at node 1 and by $\alpha_2$ at node 2. Similarly, the load of chain 2 is denoted by $\beta_1$ and $\beta_2$, respectively. Applying the convolution algorithm yields:

Step 1.

Chain 1: $S_1 = 2$ customers. Relative loads: $\alpha_1 = \lambda_1\, s_1^1$, $\alpha_2 = \lambda_1\, s_2^1$.
Chain 2: $S_2 = 3$ customers. Relative loads: $\beta_1 = \lambda_2\, s_1^2$, $\beta_2 = \lambda_2\, s_2^2$.

Step 2. For node 1 (IS) the relative state probabilities are (cf. 7.10):

q1 (0, 0) = 1 q1 (1, 0) = 1 q1 (2, 0) =


2 1 2

q1 (0, 2) = q1 (1, 2) = q1 (2, 2) = q1 (0, 3) = q1 (1, 3) = q1 (2, 3) =

2 1 2 2 1 1 2 2 2 1 1 4 3 1 6 3 1 1 6 2 3 1 1 12

q1 (0, 1) = 1 q1 (1, 1) = 1 1 q1 (2, 1) =


2 1 1 2

For node 2 (single server) (cf. 11.22) we get:

q2 (0, 0) = 1 q2 (1, 0) = 2
2 q2 (2, 0) = 2

2 q2 (0, 2) = 2 2 q2 (1, 2) = 3 2 2 2 2 q2 (2, 2) = 6 2 2 3 q2 (0, 3) = 2 3 q2 (1, 3) = 4 2 2 2 3 q2 (2, 3) = 10 2 2

q2 (0, 1) = 2 q2 (1, 1) = 2 2 2
2 q2 (2, 1) = 3 2 2

Step 3. Next we convolve the two nodes. We know that the total number of customers are (2, 3), i.e.

340
we are only interested in state (2, 3):

CHAPTER 12. QUEUEING NETWORKS

q12 (2, 3) = q1 (0, 0) q2 (2, 3) + q1 (1, 0) q2 (1, 3) + q1 (2, 0) q2 (0, 3) + q1 (0, 1) q2 (2, 2) + q1 (1, 1) q2 (1, 2) + q1 (2, 1) q2 (0, 2) + q1 (0, 2) q2 (2, 1) + q1 (1, 2) q2 (1, 1) + q1 (2, 2) q2 (0, 1) + q1 (0, 3) q2 (2, 0) + q1 (1, 3) q2 (1, 0) + q1 (2, 3) q2 (0, 0) Using the actual values yields:
2 3 q12 (2, 3) = + 1 10 2 2 3 + 1 4 2 2 2 2 + 1 6 2 2 2 1 1 2 2 2 2 1 1 2 2 2 2 3 1 2 2 6 2 3 1 1 1 12

2 1 3 2 2

2 + 1 1 3 2 2 +

+ + +

2 1 2 3 2 2 2 2 2 1 1 2 4 3 1 1 2 6

+ + +

Note that 1 and 2 together (chain 1) always appears in the second power whereas 1 and 2 (chain 2) appears in the third power corresponding to the number of customers in each chain. Because of this, only the relative loads are relevant, and the absolute probabilities are obtain by normalisation by dividing all the terms by q12 (2, 3). The detailed state probabilities are now easy to 2 3 obtain. Only in the state with the term (1 1 )/12 is the CPU (repair man) idle. If the two types of customers are identical the model simplies to Palms machine/repair model with 5 terminals. In this case we have: 1 2 3 E1,5 (x) = 12 1 1 . q12 (2, 3) Choosing 1 = 1 = and 2 = 2 = 1, yields:
1 12 2 3 1 1 q12 (2, 3)

5 /12 1 3 1 10 + 4 + 2 2 + 6 + 32 + 1 3 + 2 2 + 3 + 1 4 + 6 3 + 1 4 + 2 4 6

1 5 12

5 5! = , 2 3 4 5 1++ + + + 2 3! 4! 5! i.e. the ErlangB formula as expected.
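The convolution above can be verified numerically. The following Python sketch (our own illustration; the function names are not from the text) builds the relative state probabilities of the two nodes, convolves them in the state of interest (2, 3), and checks the Erlang-B reduction for identical customers:

```python
from math import comb, factorial

def q1(i, j, a1, b1):
    # IS node (terminals): Poisson-like relative probabilities, cf. (7.10)
    return a1**i / factorial(i) * b1**j / factorial(j)

def q2(i, j, a2, b2):
    # single-server node (CPU): binomial weights, cf. (11.22)
    return comb(i + j, i) * a2**i * b2**j

def q12(S1, S2, a1, b1, a2, b2):
    # convolve the two nodes in the single state (S1, S2)
    return sum(q1(i, j, a1, b1) * q2(S1 - i, S2 - j, a2, b2)
               for i in range(S1 + 1) for j in range(S2 + 1))

def erlang_b(A, n):
    return (A**n / factorial(n)) / sum(A**k / factorial(k) for k in range(n + 1))

# identical customers: a1 = b1 = gamma, a2 = b2 = 1 -> Palm's model, 5 terminals
gamma = 0.8
idle_cpu = (gamma**2 / 2) * (gamma**3 / 6) / q12(2, 3, gamma, gamma, 1.0, 1.0)
print(abs(idle_cpu - erlang_b(gamma, 5)) < 1e-12)  # True
```

The idle-CPU probability is the single term q1(2,3)·q2(0,0) = γ⁵/12 divided by the normalization constant, which is exactly the Erlang-B reduction stated in the example.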


12.7  Other algorithms for queueing networks

The MVA algorithm is also applicable to queueing networks with several chains, when the nodes are single-server systems. During the last decade several algorithms have been published; an overview can be found in (Conway & Georganas, 1989 [16]). In general, exact algorithms are not applicable to larger networks. Therefore, many approximate algorithms have been developed to deal with queueing networks of realistic size.

12.8  Complexity

Queueing networks have the same complexity as circuit-switched networks with direct routing (Sec. 8.5 and Tab. 8.2). The state space of the network shown in Tab. 12.3 has the following number of states for every node (12.14):

    ∏_{i=1}^{N} (Si + 1) .        (12.17)

The worst case is when every chain consists of a single customer. Then the number of states becomes 2^S, where S is the number of customers.

              Node 1    Node 2    ⋯    Node K    Population size
    Chain 1    α11       α12      ⋯     α1K           S1
    Chain 2    α21       α22      ⋯     α2K           S2
      ⋮          ⋮         ⋮              ⋮             ⋮
    Chain N    αN1       αN2      ⋯     αNK           SN

Table 12.3: The parameters of a queueing network with N chains, K nodes and ∑_j Sj customers. The parameter αjk denotes the load from customers of chain j in node k (cf. Tab. 8.2).

12.9  Optimal capacity allocation

We now consider a data transmission system with K nodes, which are independent single-server queueing systems M/M/1 (Erlang's delay system with one server). The arrival process to node k is a Poisson process with intensity λk messages (customers) per time unit, and the message size is exponentially distributed with mean value 1/μk [bits]. The capacity of node k is φk [bits per time unit]. The mean service time becomes:

    sk = (1/μk) / φk = 1/(μk·φk) .

So the mean service rate is μk·φk, and the mean sojourn time is given by (9.34):

    m_{1,k} = 1/(μk·φk − λk) .
We introduce the following linear restriction on the total capacity:

    F = ∑_{k=1}^{K} φk .        (12.18)

For every allocation of capacity which satisfies (12.18), we have the following mean sojourn time for all messages (call average):

    m1 = ∑_{k=1}^{K} (λk/λ) · 1/(μk·φk − λk) ,        (12.19)

where:

    λ = ∑_{k=1}^{K} λk .        (12.20)

By applying (10.55) we get the total mean service time:

    1/μ = ∑_{k=1}^{K} (λk/λ) · (1/μk) .        (12.21)

The total offered traffic is then:

    A = λ/(μ·F) .        (12.22)

Kleinrock's law for optimal capacity allocation (Kleinrock, 1964 [73]) reads:

Theorem 12.2 (Kleinrock's square-root law): The optimal allocation of capacity, which minimizes m1 (and thus the total number of messages in all nodes), is:

    φk = λk/μk + F·(1 − A) · √(λk/μk) / ∑_{i=1}^{K} √(λi/μi) ,        (12.23)

under the condition that:

    F > ∑_{k=1}^{K} λk/μk .        (12.24)

Proof: This can be shown by introducing a Lagrange multiplier ϑ and considering:

    G = m1 − ϑ · { ∑_{k=1}^{K} φk − F } .        (12.25)

Minimum of G is obtained by choosing φk as given in (12.23).

With this optimal allocation we find the mean sojourn time:

    m1 = { ∑_{k=1}^{K} √(λk/μk) }² / ( λ·F·(1 − A) ) .        (12.26)

This optimal allocation corresponds to first giving every node the necessary minimum capacity λi/μi. The remaining capacity,

    F − ∑_{i=1}^{K} λi/μi = F·(1 − A) ,        (12.27)

is then allocated among the nodes proportionally to the square root of the average flow λk/μk. If all messages have the same mean value (μk = μ), then we may consider different costs in the nodes under the restriction that a fixed total amount is available (Kleinrock, 1964 [73]).
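Kleinrock's allocation is straightforward to implement. The following Python sketch (an illustration under the notation above; names and example numbers are ours) computes (12.23) and compares the resulting mean sojourn time (12.19) with a naive equal split of the capacity:

```python
from math import sqrt

def sojourn(lams, mus, phis):
    # (12.19): call-average mean sojourn time over K independent M/M/1 nodes
    lam = sum(lams)
    return sum(l / lam / (m * p - l) for l, m, p in zip(lams, mus, phis))

def kleinrock(lams, mus, F):
    # (12.23): minimum capacity lam/mu per node plus a square-root share
    # of the spare capacity F*(1 - A)
    base = [l / m for l, m in zip(lams, mus)]
    spare = F - sum(base)                   # must be positive, cf. (12.24)
    roots = [sqrt(b) for b in base]
    return [b + spare * r / sum(roots) for b, r in zip(base, roots)]

lams, mus, F = [4.0, 1.0], [1.0, 1.0], 10.0
opt = kleinrock(lams, mus, F)               # [4 + 10/3, 1 + 5/3]
print(sojourn(lams, mus, opt) <= sojourn(lams, mus, [5.0, 5.0]))  # True
```

For these numbers the optimum gives m1 = 0.36, which agrees with the closed form (12.26): (√4 + √1)²/(5 · 10 · (1 − 0.5)) = 9/25.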


Chapter 13

Traffic measurements


Traffic measurements are carried out in order to obtain quantitative information about the load on a system, so as to be able to dimension the system. By traffic measurements we understand any kind of collection of data on the traffic loading a system. The system considered may be a physical system, for instance a computer, a telephone system, or the central laboratory of a hospital. It may also be a fictitious system. The collection of data in a computer simulation model corresponds to a traffic measurement. Billing of telephone calls also corresponds to a traffic measurement, where the measuring unit used is an amount of money.

The extension and type of measurements and the parameters (traffic characteristics) measured must in each case be chosen in agreement with the demands, and in such a way that a minimum of technical and administrative effort results in a maximum of information and benefit.

According to the nature of traffic, a measurement during a limited time interval corresponds to a registration of a certain realization of the traffic process. A measurement is thus a sample of one or more random variables. By repeating the measurement we usually obtain a different value, and in general we are only able to state that the unknown parameter (the population parameter, for example the mean value of the carried traffic) with a certain probability lies within a certain interval, the confidence interval. The full information is equal to the distribution function of the parameter. For practical purposes it is in general sufficient to know the mean value and the variance, i.e. the distribution itself is of minor importance.

In this chapter we shall focus upon the statistical foundation for estimating the reliability of a measurement, and only to a limited extent consider the technical background. As mentioned above, the theory is also applicable to stochastic computer simulation models.


13.1  Measuring principles and methods

The technical possibilities for measuring are decisive for what is measured and how the measurements are carried out. The first program-controlled measuring equipment was developed at the Technical University of Denmark, and described in (Andersen & Hansen & Iversen, 1971 [2]). Any traffic measurement upon a traffic process, which is discrete in state and continuous in time, can in principle be implemented by combining two fundamental operations:

1. Number of events: this may for example be the number of errors, number of call attempts, number of errors in a program, number of jobs to a computing centre, etc. (cf. number representation, Sec. 3.1.1).

2. Time intervals: examples are conversation times, execution times of jobs in a computer, waiting times, etc. (cf. interval representation, Sec. 3.1.2).

By combining these two operations we may obtain any characteristic of a traffic process. The most important characteristic is the (carried) traffic volume, i.e. the summation of all (number) holding times (interval) within a given measuring period.

From a functional point of view all traffic measuring methods can be divided into the following two classes:

1. Continuous measuring methods.
2. Discrete measuring methods.

13.1.1  Continuous measurements

In this case the measuring point is active, and it activates the measuring equipment at the instant of an event. Even if the measuring method is continuous, the result may be discrete.

Example 13.1.1: Measuring equipment: continuous time
Examples of equipment operating according to the continuous principle are:

(a) Electro-mechanical counters which are increased by one at the instant of an event.

(b) Recording x-y plotters connected to a point which is active during a connection.

(c) Ampère-hour meters, which integrate the power consumption during a measuring period. When applied for traffic volume measurements in old electro-mechanical exchanges, every trunk is connected through a resistor of 9.6 kΩ, which during occupation is connected between 48 volts and ground and thus consumes 5 mA.

(d) Water meters which measure the water consumption of a household. □


13.1.2  Discrete measurements

In this case the measuring point is passive, and the measuring equipment must itself test (poll) whether there have been changes at the measuring points (normally binary, on–off). This method is called the scanning method, and the scanning is usually done at regular instants (constant, i.e. deterministic, time intervals). All events which have taken place between two consecutive scanning instants are from a time point of view referred to the latter scanning instant, and are considered as taking place at this instant.

Example 13.1.2: Measuring equipment: discrete time
Examples of equipment operating according to the discrete time principle are:

(a) Call charging according to the Karlsson principle, where charging pulses are issued at regular time instants (the distance depends upon the cost per time unit) to the meter of the subscriber who has initiated the call. Each unit (step) corresponds to a certain amount of money. If we measure the duration of a call by its cost, then we observe a discrete distribution (0, 1, 2, . . . units). The method is named after S.A. Karlsson from Finland (Karlsson, 1937 [65]). In comparison with most other methods it requires a minimum of administration.

(b) The carried traffic on a trunk group of an electro-mechanical exchange is in practice measured according to the scanning principle. During one hour we observe the number of busy trunks 100 times (every 36 seconds) and add these numbers on a mechanical counter, which thus indicates the average carried traffic with two decimals. By also counting the number of calls we can estimate the average holding time.

(c) The scanning principle is particularly appropriate for implementation in digital systems. For example, the processor-controlled equipment developed at DTU, the Technical University of Denmark, in 1969 was able to test 1024 measuring points (e.g. relays in an electro-mechanical exchange, trunks or channels) within 5 milliseconds. The states of each measuring point (idle/busy or off/on) at the two latest scannings are stored in the computer memory, and by comparing the readings we are able to detect changes of state. A change of state 0 → 1 corresponds to the start of an occupation, and 1 → 0 corresponds to the termination of an occupation (last-look principle). The scannings are controlled by a clock. Therefore we may monitor every channel in time and measure time intervals, and thus observe time distributions.
The amount of information is almost independent of the scanning interval, as only state changes are stored (the time of a scanning is measured in an integral number of scanning intervals). □

Whereas the classical equipment (erlang-meters) mentioned above observes the traffic process in the state space (vertical, number representation), the program-controlled equipment observes the traffic process in time space (horizontal, interval representation), in discrete time.
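The last-look principle in item (c) can be sketched in a few lines of Python (a hypothetical illustration of ours; the scan data are invented): consecutive scans of each measuring point are compared pairwise to detect starts (0 → 1) and terminations (1 → 0) of occupations, while the number of busy scans estimates the carried traffic.

```python
# Scans of two measuring points (rows) at successive scan instants (columns):
# 1 = busy, 0 = idle.  Hypothetical data, not from the text.
scans = [
    [0, 1, 1, 1, 1, 0, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 1, 1, 1],
]

def last_look(row):
    # Compare each scan with the previous one (last-look principle):
    # a 0->1 change starts an occupation, a 1->0 change terminates one.
    starts = sum(a == 0 and b == 1 for a, b in zip(row, row[1:]))
    ends = sum(a == 1 and b == 0 for a, b in zip(row, row[1:]))
    return starts, ends

for row in scans:
    print(last_look(row), sum(row))   # (2, 2) 5 then (2, 1) 5
```

Dividing the busy-scan count by the number of scans gives the scanning estimate of the carried traffic per point, as in item (b) above.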

Measuring methods have had a decisive influence upon the way of thinking and the way of formulating and analyzing statistical problems. The classical equipment operating in state space has implied that the statistical analyses have been based upon state probabilities, i.e. basically birth and death processes. From a mathematical point of view these models have been rather complex (vertical measurements).


The following derivations are in comparison very elementary and even more general, and they are inspired by the operation in time space of the program-controlled equipment (Iversen, 1976 [41]) (horizontal measurements).

13.2  Theory of sampling

Let us assume we have a sample of n IID (Independent and Identically Distributed) observations {X1, X2, . . . , Xn} of a random variable with unknown finite mean value m1 and finite variance σ² (population parameters). The mean value and variance of the sample are defined as follows:

    X̄ = (1/n) ∑_{i=1}^{n} Xi ,        (13.1)

    s² = (1/(n−1)) · { ∑_{i=1}^{n} Xi² − n·X̄² } .        (13.2)

Both X̄ and s² are functions of a random variable and therefore also random variables, defined by a distribution we call the sample distribution. X̄ is a central estimator of the unknown population mean value m1, i.e.:

    E{X̄} = m1 .        (13.3)

Furthermore, s²/n is a central estimator of the unknown variance of the sample mean X̄, i.e.:

    σ²{X̄} = s²/n .        (13.4)

We describe the accuracy of an estimate of a sample parameter by means of a confidence interval, which with a given probability specifies how the estimate is placed relative to the unknown theoretical value. In our case the confidence interval of the mean value becomes:

    X̄ ± t_{n−1, 1−α/2} · √(s²/n) ,        (13.5)

where t_{n−1, 1−α/2} is the upper (1 − α/2) percentile of the Student's t-distribution with n − 1 degrees of freedom. The probability that the confidence interval includes the unknown theoretical mean value is equal to (1 − α) and is called the level of confidence. Some values of the Student's t-distribution are given in Table 13.1. When n becomes large, the Student's t-distribution converges to the Normal distribution, and we may use the percentiles of this distribution. The assumption of independence is fulfilled for measurements taken on different days, but for example not for successive measurements by the scanning method within a limited time interval, because the number of busy channels at a given instant will be correlated with the number of busy channels at the previous and the next scanning.

[Figure 13.1: Observation of a traffic process by a continuous measuring method and by the scanning method with regular scanning intervals. By the scanning method it is sufficient to observe the changes of state.]

     n      α = 10%     α = 5%      α = 1%
     1       6.314      12.706      63.657
     2       2.920       4.303       9.925
     5       2.015       2.571       4.032
    10       1.812       2.228       3.169
    20       1.725       2.086       2.845
    40       1.684       2.021       2.704
     ∞       1.645       1.960       2.576

Table 13.1: Percentiles of the Student's t-distribution with n degrees of freedom. A specific value of α corresponds to a probability mass α/2 in each tail of the Student's t-distribution. When n is large, we may use the percentiles of the Normal distribution.

In the following sections we calculate the mean value and the variance of traffic measurements during for example one hour. This aggregated value for a given day may then be used as a single observation in the formulæ above, where the number of observations typically will be the number of days we measure.

Example 13.2.1: Confidence interval for call congestion
On a trunk group of 30 trunks (channels) we observe the outcome of 500 call attempts. This measurement is repeated 11 times, and we find the following call congestion values (in percent):

{9.2, 3.6, 3.6, 2.0, 7.4, 2.2, 5.2, 5.4, 3.4, 2.0, 1.4}

The total sum of the observations is 45.4, and the total sum of the squares of the observations is 247.88. We find (13.1) X̄ = 4.1273 % and (13.2) s² = 6.0502 (%)². At the 95% level the confidence interval becomes, using the t-values in Table 13.1: (2.47 – 5.78). It is noticed that the observations are obtained by simulating a PCT-I traffic of 25 erlang which is offered to 30 channels. According to the Erlang B-formula the theoretical blocking probability is 5.2603 %. This value is inside the confidence interval. If we want to reduce the width of the confidence interval by a factor of 10, then we have to make 100 times as many observations (cf. formula 13.5), i.e. 50,000 per measurement (sub-run). We carry out this simulation and observe a call congestion equal to 5.245 % and a confidence interval (5.093 – 5.398). □
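The numbers in this example are easy to reproduce. A small Python sketch (ours, not from the text) applies (13.1), (13.2) and (13.5), with the t-percentile taken from Table 13.1:

```python
from math import sqrt

obs = [9.2, 3.6, 3.6, 2.0, 7.4, 2.2, 5.2, 5.4, 3.4, 2.0, 1.4]   # call congestion, %
n = len(obs)
xbar = sum(obs) / n                                       # (13.1)
s2 = (sum(x * x for x in obs) - n * xbar**2) / (n - 1)    # (13.2)
t = 2.228                  # Table 13.1: alpha = 5%, n - 1 = 10 degrees of freedom
half = t * sqrt(s2 / n)    # half-width of the confidence interval, (13.5)
print(round(xbar, 4), round(s2, 4))                  # 4.1273 6.0502
print(round(xbar - half, 2), round(xbar + half, 2))  # 2.47 5.78
```
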

13.3  Continuous measurements in an unlimited period

Measurements of time intervals by continuous measuring methods with no truncation of the measuring period are easy to deal with by the theory of sampling described in Sec. 13.2 above.


[Figure 13.2: When analyzing traffic measurements we distinguish between two cases: (a) Measurements in an unlimited time period. All calls initiated during the measuring period contribute with their total duration. (b) Measurements in a limited measuring period. All calls contribute with the portion of their holding times which is located inside the measuring period. In the figure the sections of the holding times contributing to the measurements are shown with full lines.]

For a traffic volume or a traffic intensity we can apply the formulæ (2.82) and (2.84) for a stochastic sum. They are quite general, the only restriction being stochastic independence between X and N. In practice this means that the systems must be without congestion. In general we will have a few percent congestion and may still, as a worst case, assume independence. By far the most important case is a Poisson arrival process with intensity λ. We then get a stochastic sum (Sec. 2.3.3). For the Poisson arrival process we have, when we consider a time interval T:

    m_{1,n} = σn² = λ·T ,

and therefore we find:

    m_{1,s} = λ·T·m_{1,t} ,

    σs² = λ·T·(m²_{1,t} + σt²) = λ·T·m_{2,t} = λ·T·m²_{1,t}·εt ,        (13.6)

where m_{2,t} is the second (non-central) moment of the holding time distribution, and εt is Palm's form factor of the same distribution:

    εt = m_{2,t}/m²_{1,t} = 1 + σt²/m²_{1,t} .        (13.7)

The distribution of S_T will in this case be a compound Poisson distribution (Feller, 1950 [32]). The formulæ correspond to a traffic volume (e.g. erlang-hours). For most applications, such as dimensioning, we are interested in the average number of occupied channels, i.e. the traffic intensity (rate) = traffic per time unit (m_{1,t} = 1, λ = A), when we choose the mean holding time as the time unit:

    m_{1,i} = A ,        (13.8)

    σi² = (A/T)·εt .        (13.9)

These formulæ are thus valid for arbitrary holding time distributions. The formulæ (13.8) and (13.9) were originally derived by C. Palm (1941 [91]). In (Rabe, 1949 [99]) the formulæ for the special cases εt = 1 (constant holding time) and εt = 2 (exponentially distributed holding times) were published. The above formulæ are valid for all calls arriving inside the interval T when we measure the total duration of all holding times, regardless of how long the calls stay (Fig. 13.2 a).
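Formulæ (13.8) and (13.9) can be checked by a small Monte Carlo experiment (our own sketch, not from the text): Poisson arrivals with intensity λ = 5 during T = 10 mean holding times, exponentially distributed holding times (εt = 2), so the measured traffic intensity should have mean A = 5 erlang and variance A·εt/T = 1.

```python
import random

random.seed(12345)

lam, T, runs = 5.0, 10.0, 10000   # A = lam (mean holding time = 1), eps_t = 2
ests = []
for _ in range(runs):
    # Poisson arrival process in (0, T): exponential inter-arrival times;
    # every arrival contributes its full holding time (unlimited period, Fig. 13.2 a)
    volume, t = 0.0, random.expovariate(lam)
    while t < T:
        volume += random.expovariate(1.0)
        t += random.expovariate(lam)
    ests.append(volume / T)           # measured traffic intensity

mean = sum(ests) / runs
var = sum((e - mean) ** 2 for e in ests) / (runs - 1)
print(round(mean, 2), round(var, 2))  # close to A = 5 and A*eps_t/T = 1
```
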

Example 13.3.1: Accuracy of a measurement
We notice that we always obtain the correct mean value of the traffic intensity (13.8). The variance, however, is proportional to the form factor εt. For some common holding time distributions we get the following variance of the measured traffic intensity:

    Constant holding times:            σi² = A/T ,
    Exponentially distributed:         σi² = 2·A/T ,
    Observed (Fig. 2.5):               σi² = 3.83·A/T .

Observing telephone traffic, we often find that εt is significantly larger than the value 2 (exponential distribution), which is presumed valid in many classical teletraffic models (Fig. 2.5). Therefore, the accuracy of a measurement is lower than given in many tables. This, however, is compensated by the assumption that the systems are non-blocking: in a system with blocking the variance becomes smaller due to negative correlation between holding times and number of calls. □

Example 13.3.2: Relative accuracy of a measurement
The relative accuracy of a measurement is given by the ratio:

    S = σi / m_{1,i} = ( εt / (A·T) )^{1/2} = variation coefficient.

From this we notice that if εt = 4, then we have to measure twice as long a period to obtain the same reliability of a measurement as for the case of exponentially distributed holding times. □
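The ratio in Example 13.3.2 can be used directly to plan a measuring period. A small sketch (ours; the numbers are illustrative assumptions) shows the "twice as long" observation:

```python
from math import sqrt

def rel_accuracy(form_factor, A, T):
    # Example 13.3.2: S = sqrt(form_factor / (A*T)) = variation coefficient
    return sqrt(form_factor / (A * T))

A = 10.0                               # erlang; T in units of the mean holding time
s_exp = rel_accuracy(2.0, A, 200.0)    # exponential holding times
s_4 = rel_accuracy(4.0, A, 400.0)      # form factor 4, twice the measuring period
print(round(s_exp, 4), round(s_4, 4))  # 0.0316 0.0316 -> same accuracy
```
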

For a given time period we notice that the accuracy of the traffic intensity when measuring a small trunk group is much larger than when measuring a large trunk group, because the accuracy only depends on the traffic intensity A. When dimensioning a small trunk group, an error in the estimation of the traffic of 10 % has much less influence than the same percentage error on a large trunk group (Sec. 4.8.1). Therefore we measure the same time period on all trunk groups. In Fig. 13.5 the relative accuracy for a continuous measurement is given by the straight line h = 0.

13.4  Scanning method in an unlimited time period

In this section we only consider regular (constant) scanning intervals. The scanning principle is for example applied to traffic measurements, call charging, numerical simulations, and processor control. By the scanning method we observe a discrete time distribution for the holding time, which in real time usually is continuous. In practice we usually choose a constant distance h between scanning instants, and we find the following relation between the observed time interval and the real time interval (Fig. 13.3):

    Observed time    Real time
        0 h          (0 h, 1 h)
        1 h          (0 h, 2 h)
        2 h          (1 h, 3 h)
        3 h          (2 h, 4 h)
        ...             ...

[Figure 13.3: By the scanning method a continuous time interval is transformed into a discrete time interval. The transformation is not unique (cf. Sec. 13.4).]

We notice that there is overlap between the continuous time intervals, so that the discrete distribution cannot be obtained by a simple integration of the continuous time distribution over a fixed interval of length h. If the real holding times have a distribution function F(t), then it can be shown that we will observe the following discrete distribution (Iversen, 1976 [41]):

    p(0) = (1/h) · ∫₀^h F(t) dt ,        (13.10)

    p(k) = (1/h) · ∫₀^h { F(t + k·h) − F(t + (k−1)·h) } dt ,   k = 1, 2, . . .        (13.11)

Interpretation: The arrival time of the call is assumed to be independent of the scanning process. Therefore, the time interval from the call arrival instant to the first scanning instant is uniformly distributed with density 1/h (Sec. 3.6.3). The probability of observing zero scanning instants during the call holding time is denoted by p(0) and is equal to the probability that the call terminates before the next scanning instant. For a fixed value t of this interval the conditional probability is F(t); to obtain the total probability we integrate over all possible values t (0 ≤ t < h), weighting with the density 1/h, and get (13.10). In a similar way we derive p(k) (13.11).

By partial integration it can be shown that for any distribution function F(t) we will always observe the correct mean value:

    h · ∑_{k=0}^{∞} k·p(k) = ∫₀^∞ t dF(t) .        (13.12)

When using Karlsson charging we will therefore always, in the long run, charge the correct amount.


For exponentially distributed holding time intervals, F(t) = 1 − e^{−λt}, we will observe a discrete distribution, Westerberg's distribution (Iversen, 1976 [41]):

    p(0) = 1 − (1/(λh)) · (1 − e^{−λh}) ,        (13.13)

    p(k) = (1/(λh)) · (1 − e^{−λh})² · e^{−(k−1)·λh} ,   k = 1, 2, . . .        (13.14)

This distribution can be shown to have the following mean value and form factor:

    m1 = 1/(λh) ,        (13.15)

    ε = λh · (e^{λh} + 1)/(e^{λh} − 1) ≥ 2 .        (13.16)

The form factor is equal to one plus the square of the relative accuracy of the measurement. For a continuous measurement the form factor is 2; the additional contribution is due to the influence of the measuring principle. The form factor is thus a measure of the accuracy of the measurement. Fig. 13.4 shows how the form factor of the observed holding time for exponentially distributed holding times depends on the length of the scanning interval (13.16). By continuous measurements we get an ordinary sample. By the scanning method we get a sample of a sample, so that there is uncertainty both because of the measuring method and because of the limited sample size. Fig. 3.2 shows an example of the Westerberg distribution. It is in particular the zero class which deviates from what we would expect from a continuous exponential distribution.

If we insert the form factor (13.16) in the expression (13.9) for σi², then, choosing the mean holding time as the time unit (m_{1,t} = 1/λ = 1), we get the following estimates of the traffic intensity when using the scanning method:

    m_{1,i} = A ,

    σi² = (A/T) · { h · (e^h + 1)/(e^h − 1) } .        (13.17)

By the continuous measuring method the variance is 2A/T, which we also get here by letting h → 0. Fig. 13.5 shows the relative accuracy of the measured traffic volume, both for a continuous measurement (13.8) & (13.9) and for the scanning method (13.17). Formula (13.17) was derived by (Palm, 1941 [91]), but became only known when it was re-discovered by W.S. Hayward Jr. (1952 [38]).
Example 13.4.1: Billing principles
Various principles are applied for charging (billing) of calls. In addition, the charging rate is usually varied during the 24 hours of the day to influence the habits of the subscribers. Among the principles we may mention:

(a) Fixed amount per call. This principle is often applied in manual systems for local calls (flat rate).

(b) Karlsson charging. This corresponds to the measuring principle dealt with in this section, because the holding time is placed at random relative to the regular charging pulses. This principle has been applied in Denmark in the crossbar exchanges.

(c) Modified Karlsson charging. We may for instance add an extra pulse at the start of the call. In digital systems in Denmark there is a fixed fee per call in addition to a fee proportional to the duration of the call.

(d) The start of the holding time is synchronized with the scanning process. This is for example applied for operator-handled calls and in coin-box telephones. □

13.5  Numerical example

For a specific measurement we calculate m_{1,i} and σi². The deviation of the observed traffic intensity from the theoretically correct value is approximately Normal distributed. Therefore, the unknown theoretical mean value will with 95% probability lie within the calculated confidence interval (cf. Sec. 13.2):

    m_{1,i} ± 1.96 · σi .        (13.18)

The variance σi² is thus decisive for the accuracy of a measurement. To study which factors are of major importance, we make numerical calculations of some examples. All formulæ may easily be evaluated on a pocket calculator.

Both examples presume PCT-I traffic (i.e. Poisson arrival process and exponentially distributed holding times), a traffic intensity of 10 erlang, and a mean holding time of 180 seconds, which is chosen as the time unit.

Example a: This corresponds to a classical traffic measurement:
    Measuring period = 3600 s = 20 time units = T.
    Scanning interval = 36 s = 0.2 time units = h. (100 observations)

Example b: In this case we only scan once per mean holding time:
    Measuring period = 720 s = 4 time units = T.
    Scanning interval = 180 s = 1 time unit = h. (4 observations)

From Table 13.5 we can draw some general conclusions:
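The two examples can be evaluated directly from (13.17) (a small sketch of ours; the function name is invented):

```python
from math import exp

def scan_variance(A, T, h):
    # (13.17): variance of the measured traffic intensity, scanning method,
    # with the mean holding time as the time unit
    return (A / T) * h * (exp(h) + 1) / (exp(h) - 1)

A = 10.0                                   # erlang; mean holding time 180 s = 1 unit
var_a = scan_variance(A, T=20.0, h=0.2)    # example a: 3600 s period, 36 s scans
var_b = scan_variance(A, T=4.0, h=1.0)     # example b: 720 s period, 180 s scans
print(round(var_a, 3), round(var_b, 3))    # 1.003 5.41
```

The shorter measuring period in example b dominates: its variance is more than five times that of example a, even though the scanning itself only contributes a small part.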


[Figure 13.4: Form factor for exponentially distributed holding times which are observed by Erlang-k distributed scanning intervals in an unlimited measuring period. The case k = ∞ corresponds to regular (constant) scan intervals, which transform the exponential distribution into Westerberg's distribution. The case k = 1 corresponds to exponentially distributed scan intervals (cf. the roulette simulation method). The case h = 0 corresponds to a continuous measurement. We notice that with regular scan intervals we lose almost no information if the scan interval is smaller than the mean holding time (chosen as time unit).]

[Figure 13.5 plot, double-logarithmic: relative accuracy of A (y-axis, 0.02–5) versus measured traffic volume A·T (x-axis, 1–500); solid curves for scan intervals h = 0, 3, 6, and a dotted curve for A = 1 erlang with a limited measuring period.]
Figure 13.5: Using a double-logarithmic scale we obtain a linear relationship between the relative accuracy of the traffic intensity A and the measured traffic volume A·T when measuring in an unlimited time period. A scan interval h = 0 corresponds to a continuous measurement, and h > 0 corresponds to the scanning method. The influence of a limited measuring period is shown by the dotted line for the case A = 1 erlang and a continuous measurement taking account of the limited measuring interval. T is measured in mean holding times. By the scanning method we lose very little information as compared with a continuous measurement, as long as the scan interval is less than the mean holding time (cf. Fig. 13.4). A continuous measurement can be considered as an optimal reference for any discrete method. Exploitation of knowledge about a limited measuring period results in more information for a short measurement (T < 5), whereas we obtain little additional information for T > 10. (There is correlation in the traffic process, and the first part of a measuring period yields more information than later parts.) By using the roulette method we of course lose more information than by the scanning method (Iversen 1976, [41], 1977 [42]). All the above-mentioned factors have far less influence than the fact that the real holding times often deviate from the exponential distribution. In practice we often observe a form

                              Example a              Example b
                            σᵢ²       σᵢ           σᵢ²       σᵢ

    Continuous Method
      Unlimited (13.8)     1.0000    1.0000       5.0000    2.2361
      Limited              0.9500    0.9747       3.7729    1.9424
    Scanning Method
      Unlimited (13.17)    1.0033    1.0016       5.4099    2.3259
      Limited              0.9535    0.9765       4.2801    2.0688
    Roulette Method
      Unlimited            1.1000    1.0488       7.5000    2.7386
      Limited              1.0500    1.0247       6.2729    2.5046

Table 13.2: Numerical comparison of various measuring principles in different time intervals.

factor about 4–6. The conclusion to be made from the above examples is that for practical applications it is more relevant to apply the elementary formula (13.8) with a correct form factor than to take account of the measuring method and the measuring period.

The above theory is exact when we consider charging of calls and measuring of time intervals. For stochastic computer simulations the traffic process is usually stationary, and the theory can be applied for estimation of the reliability of the results. However, the results are approximate, as the theoretical assumptions about congestion-free systems seldom are of interest in practice.

In real-life measurements on working systems we have traffic variations during the day, technical errors, measuring errors, etc. Some of these factors compensate each other, and the results we have derived give a good estimate of the reliability; they are a good basis for comparing different measurements and measuring principles.
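The comparison of measuring principles above can also be illustrated by a small simulation sketch (not one of the book's examples; the arrival rate, measuring period, and scan interval are arbitrary choices): the same call process is measured both continuously and by regular scans, and the two estimates of the traffic intensity agree closely when the scan interval is below the mean holding time.

```python
import random

random.seed(2)
lam = 1.0        # call arrival rate (Poisson process)
T = 500.0        # measuring period, in units of the mean holding time
h = 0.5          # scan interval

# generate one realization of the call process in [0, T]
calls = []
t = random.expovariate(lam)
while t < T:
    calls.append((t, t + random.expovariate(1.0)))   # (start, end), mean holding time 1
    t += random.expovariate(lam)

# continuous measurement: carried traffic volume divided by the period
volume = sum(min(end, T) - start for start, end in calls)
a_continuous = volume / T

# scanning method: average number of busy calls over the scan instants
scan_count = 0
n_scans = 0
u = h
while u < T:
    scan_count += sum(1 for start, end in calls if start <= u < end)
    n_scans += 1
    u += h
a_scanning = scan_count / n_scans

print(a_continuous, a_scanning)   # both estimate the offered traffic of 1 erlang
```

Because both estimators are applied to the same realization, their difference reflects only the discretization introduced by scanning; as Fig. 13.5 suggests, it is small compared with the statistical uncertainty of the measurement itself.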


Bibliography
[1] Abate, J. & Whitt, W. (1997): Limits and approximations for the M/G/1 LIFO waiting-time distribution. Operations Research Letters, Vol. 20 (1997) : 5, 199–206.

[2] Andersen, B. & Hansen, N.H. & Iversen, V.B. (1971): Use of minicomputer for telephone traffic measurements. Teleteknik (Engl. ed.) Vol. 15 (1971) : 2, 33–46.

[3] Ash, G.R. (1998): Dynamic routing in telecommunications networks. McGraw-Hill 1998. 746 pp.

[4] Baskett, F. & Chandy, K.M. & Muntz, R.R. & Palacios, F.G. (1975): Open, closed and mixed networks of queues with different classes of customers. Journal of the ACM, April 1975, pp. 248–260. (BCMP queueing networks).

[5] Bear, D. (1988): Principles of telecommunication traffic engineering. Revised 3rd Edition. Peter Peregrinus Ltd, Stevenage 1988. 250 pp.

[6] Bech, N.I. (1954): A method of computing the loss in alternative trunking and grading systems. The Copenhagen Telephone Company, May 1955. 14 pp. Translated from Danish: Metode til beregning af spærring i alternativ trunking- og graderingssystemer. Teleteknik, Vol. 5 (1954) : 4, pp. 435–448.

[7] Bolotin, V.A. (1994): Telephone circuit holding time distributions. ITC 14, 14th International Teletraffic Congress. Antibes Juan-les-Pins, France, June 6–10, 1994. Proceedings pp. 125–134. Elsevier 1994.

[8] Bretschneider, G. (1956): Die Berechnung von Leitungsgruppen für überfließenden Verkehr. Nachrichtentechnische Zeitschrift, NTZ, Vol. 9 (1956) : 11, 533–540.

[9] Bretschneider, G. (1973): Extension of the equivalent random method to smooth traffics. ITC-7, Seventh International Teletraffic Congress, Stockholm, June 1973. Proceedings, paper 411. 9 pp.

[10] Brockmeyer, E. (1957): A Survey of Traffic-Measuring Methods in the Copenhagen Exchanges. Teleteknik (Engl. ed.) 1957 : 1, pp. 92–105.

[11] Brockmeyer, E. (1954): The simple overflow problem in the theory of telephone traffic. Teleteknik 1954, pp. 361–374. In Danish. English translation by Copenhagen Telephone Company, April 1955. 15 pp.

[12] Brockmeyer, E. & Halstrøm, H.L. & Jensen, Arne (1948): The life and works of A.K. Erlang. Transactions of the Danish Academy of Technical Sciences, 1948, No. 2, 277 pp. Copenhagen 1948.

[13] Burke, P.J. (1956): The output of a queueing system. Operations Research, Vol. 4 (1956), 699–704.


[14] Christensen, P.V. (1914): The number of selectors in automatic telephone systems. The Post Office Electrical Engineers Journal, Vol. 7 (1914), 271–281.

[15] Cobham, A. (1954): Priority assignment in waiting line problems. Operations Research, Vol. 2 (1954), 70–76.

[16] Conway, A.E. & Georganas, N.D. (1989): Queueing networks – exact computational algorithms: A unified theory based on decomposition and aggregation. The MIT Press 1989. 234 pp.

[17] Cooper, R.B. (1972): Introduction to queueing theory. New York 1972. 277 pp.

[18] Cox, D.R. (1955): A use of complex probabilities in the theory of stochastic processes. Proc. Camb. Phil. Soc., Vol. 51 (1955), pp. 313–319.

[19] Cox, D.R. & Miller, H.D. (1965): The theory of stochastic processes. Methuen & Co. London 1965. 398 pp.

[20] Cox, D.R. & Isham, V. (1980): Point processes. Chapman and Hall. 1980. 188 pp.

[21] Crommelin, C.D. (1932): Delay probability formulae when the holding times are constant. Post Office Electrical Engineers Journal, Vol. 25 (1932), pp. 41–50.

[22] Crommelin, C.D. (1934): Delay probability formulae. Post Office Electrical Engineers Journal, Vol. 26 (1934), pp. 266–274.

[23] Delbrouck, L.E.N. (1983): On the steady-state distribution in a service facility carrying mixtures of traffic with different peakedness factor and capacity requirements. IEEE Transactions on Communications, Vol. COM-31 (1983) : 11, 1209–1211.

[24] Dickmeiss, A. & Larsen, M. (1993): Spærringsberegninger i telenet (Blocking calculations in telecommunication networks, in Danish). Master's thesis. Institut for Telekommunikation, Danmarks Tekniske Højskole, 1993. 141 pp.

[25] Eilon, S. (1969): A simpler proof of L = λW. Operations Research, Vol. 17 (1969), pp. 915–917.

[26] Elldin, A., and G. Lind (1964): Elementary telephone traffic theory. Chapter 4. L.M. Ericsson AB, Stockholm 1964. 46 pp.

[27] Engset, T.O. (1915): Om beregning av vælgere i et automatisk telefonsystem, en undersøkelse angaaende punkter i grundlaget for sandsynlighetsteoriens anvendelse paa bestemmelse av de automatiske centralinretningers omfang. Kristiania (Oslo) 1915. 128 pp. English version: On the calculation of switches in an automatic telephone system. Telektronikk, Vol. 94 (1998) : 2, 99–142.

[28] Engset, T.O. (1918): Die Wahrscheinlichkeitsrechnung zur Bestimmung der Wählerzahl in automatischen Fernsprechämtern. Elektrotechnische Zeitschrift, 1918, Heft 31. Translated to English in Telektronikk (Norwegian), June 1991, 4 pp.


[29] Erlang, A.K. (1909): The Theory of Probabilities and Telephone Conversations. Nyt Matematisk Tidsskrift, B, Vol. 20, pp. 33–40 (in Danish). English translation: The Life and Works of A.K. Erlang, E. Brockmeyer, H.L. Halstrøm and Arne Jensen, pp. 131–137. Copenhagen 1948.

[30] Esteves, J.S. & Craveirinha, J. & Cardoso, D. (1995): Computing Erlang-B Function Derivatives in the Number of Servers. Communications in Statistics – Stochastic Models, Vol. 11 (1995) : 2, 311–331.

[31] Farmer, R.F. & Kaufman, I. (1978): On the Numerical Evaluation of Some Basic Traffic Formulae. Networks, Vol. 8 (1978) 153–186.

[32] Feller, W. (1950): An introduction to probability theory and its applications. Vol. 1, New York 1950. 461 pp.

[33] Fortet, R. & Grandjean, Ch. (1964): Congestion in a loss system when some calls want several devices simultaneously. Electrical Communications, Vol. 39 (1964) : 4, 513–526. Paper presented at ITC-4, Fourth International Teletraffic Congress, London, England, 15–21 July 1964.

[34] Fredericks, A.A. (1980): Congestion in blocking systems – a simple approximation technique. The Bell System Technical Journal, Vol. 59 (1980) : 6, 805–827.

[35] Fry, T.C. (1928): Probability and its Engineering Uses. New York 1928, 470 pp.

[36] Gordon, W.J. & Newell, G.F. (1967): Closed queueing systems with exponential servers. Operations Research, Vol. 15 (1967), pp. 254–265.

[37] Grillo, D. & Skoog, R.A. & Chia, S. & Leung, K.K. (1998): Teletraffic engineering for mobile personal communications in ITU-T work: the need to match theory to practice. IEEE Personal Communications, Vol. 5 (1998) : 6, 38–58.

[38] Hayward, W.S. Jr. (1952): The reliability of telephone traffic load measurements by switch counts. The Bell System Technical Journal, Vol. 31 (1952) : 2, 357–377.

[39] ITU-T (1993): Traffic intensity unit. ITU-T Recommendation B.18. 1993. 1 p.

[40] Iversen, V.B. (1973): Analysis of real teletraffic processes based on computerized measurements. Ericsson Technics, No. 1, 1973, pp. 1–64. Holbæk measurements.

[41] Iversen, V.B. (1976): On the accuracy in measurements of time intervals and traffic intensities with application to teletraffic and simulation. Ph.D. thesis. IMSOR, Technical University of Denmark 1976. 202 pp.

[42] Iversen, V.B. (1976): On general point processes in teletraffic theory with applications to measurements and simulation. ITC-8, Eighth International Teletraffic Congress, paper 312/1–8. Melbourne 1976. Published in Teleteknik (Engl. ed.) 1977 : 2, pp. 59–70.

[43] Iversen, V.B. (1980): The A-formula. Teleteknik (English ed.), Vol. 23 (1980) : 2, 64–79.


[44] Iversen, V.B. (1982): Exact calculation of waiting time distributions in queueing systems with constant holding times. NTS-4, Fourth Nordic Teletraffic Seminar, Helsinki 1982. 31 pp.

[45] Iversen, V.B. (1987): The exact evaluation of multi-service loss system with access control. Teleteknik, English ed., Vol. 31 (1987) : 2, 56–61. NTS-7, Seventh Nordic Teletraffic Seminar, Lund, Sweden, August 25–27, 1987, 22 pp.

[46] Iversen, V.B. & Nielsen, B.F. (1985): Some properties of Coxian distributions with applications. Proceedings of the International Conference on Modelling Techniques and Tools for Performance Analysis, pp. 61–66. 5–7 June, 1985, Valbonne, France. North-Holland Publ. Co. 1985. 365 pp. (Editor N. Abu El Ata).

[47] Iversen, V.B. & Stepanov, S.N. (1997): The usage of convolution algorithm with truncation for estimation of individual blocking probabilities in circuit-switched telecommunication networks. Proceedings of the 15th International Teletraffic Congress, ITC 15, Washington, DC, USA, 22–27 June 1997. 1327–1336.

[48] Iversen, V.B. & Sanders, B. (2001): Engset formulæ with continuous parameters – theory and applications. AEÜ, International Journal of Electronics and Communications, Vol. 55 (2001) : 1, 3–9.

[49] Iversen, V.B. (2005): Algorithm for evaluating multi-rate loss systems. COM Department, Technical University of Denmark. December 2005. 27 pp. Submitted for publication.

[50] Iversen, V.B. (2007): Reversible fair scheduling: the teletraffic theory revisited. Proceedings from 20th International Teletraffic Congress, ITC-20, Ottawa, Canada, June 17–21, 2007. Springer Lecture Notes in Computer Science, Vol. LNCS 4516 (2007), pp. 1135–1148.

[51] Jackson, R.R.P. (1954): Queueing systems with phase type service. Operational Research Quarterly, Vol. 5 (1954), 109–120.

[52] Jackson, J.R. (1957): Networks of waiting lines. Operations Research, Vol. 5 (1957), pp. 518–521.

[53] Jackson, J.R. (1963): Jobshop-like queueing systems. Management Science, Vol. 10 (1963), No. 1, pp. 131–142.

[54] Jagerman, D.L. (1984): Methods in Traffic Calculations. AT&T Bell Laboratories Technical Journal, Vol. 63 (1984) : 7, 1283–1310.

[55] Jagers, A.A. & van Doorn, E.A. (1986): On the Continued Erlang Loss Function. Operations Research Letters, Vol. 5 (1986) : 1, 43–46.

[56] Jensen, Arne (1948): An elucidation of A.K. Erlang's statistical works through the theory of stochastic processes. Published in The Erlang book: E. Brockmeyer, H.L. Halstrøm and A. Jensen: The life and works of A.K. Erlang. København 1948, pp. 23–100.


[57] Jensen, Arne (1948): Truncated multidimensional distributions. Pages 58–70 in The Life and Works of A.K. Erlang. Ref. Brockmeyer et al., 1948 [56].

[58] Jensen, Arne (1950): Moe's Principle – An econometric investigation intended as an aid in dimensioning and managing telephone plant. Theory and Tables. Copenhagen 1950. 165 pp.

[59] Jerkins, J.L. & Neidhardt, A.L. & Wang, J.L. & Erramilli, A. (1999): Operations measurement for engineering support of high-speed networks with self-similar traffic. ITC 16, 16th International Teletraffic Congress, Edinburgh, June 7–11, 1999. Proceedings pp. 895–906. Elsevier 1999.

[60] Johannsen, Fr. (1908): Busy. Copenhagen 1908. 4 pp.

[61] Johansen, K. & Johansen, J. & Rasmussen, C. (1991): The broadband multiplexer, TransMux 1001. Teleteknik, English ed., Vol. 34 (1991) : 1, 57–65.

[62] Joys, L.A.: Variations of the Erlang, Engset and Jacobæus formulæ. ITC-5, Fifth International Teletraffic Congress, New York, USA, 1967, pp. 107–111. Also published in: Teleteknik, (English edition), Vol. 11 (1967) : 1, 42–48.

[63] Joys, L.A. (1968): Engsets formler for sannsynlighetstetthet og dens rekursionsformler. (Engset's formulæ for probability and its recursive formulæ, in Norwegian). Telektronikk 1968 No 12, pp. 54–63.

[64] Joys, L.A. (1971): Comments on the Engset and Erlang formulae for telephone traffic losses. Thesis. Report TF No. 25/71, Research Establishment, The Norwegian Telecommunications Administration. 1971. 127 pp.

[65] Karlsson, S.A. (1937): Tekniska anordningar för samtalsdebitering enligt tid (Technical arrangement for charging calls according to time, in Swedish). Helsingfors Telefonförening, Tekniska Meddelanden 1937, No. 2, pp. 32–48.

[66] Kaufman, J.S. (1981): Blocking in a shared resource environment. IEEE Transactions on Communications, Vol. COM-29 (1981) : 10, 1474–1481.

[67] Keilson, J. (1966): The ergodic queue length distribution for queueing systems with finite capacity. Journal of Royal Statistical Society, Series B, Vol. 28 (1966), 190–201.

[68] Kelly, F.P. (1979): Reversibility and stochastic networks. John Wiley & Sons, 1979. 230 pp.

[69] Kendall, D.G. (1951): Some problems in the theory of queues. Journal of Royal Statistical Society, Series B, Vol. 13 (1951) : 2, 151–173.

[70] Kendall, D.G. (1953): Stochastic processes occurring in the theory of queues and their analysis by the method of the imbedded Markov chain. Ann. Math. Stat., Vol. 24 (1953), 338–354.


[71] Khintchine, A.Y. (1955): Mathematical methods in the theory of queueing. London 1960. 124 pp. (Original in Russian, 1955).

[72] Kingman, J.F.C. (1969): Markov population processes. J. Appl. Prob., Vol. 6 (1969), 1–18.

[73] Kleinrock, L. (1964): Communication nets: Stochastic message flow and delay. McGraw-Hill 1964. Reprinted by Dover Publications 1972. 209 pp.

[74] Kleinrock, L. (1975): Queueing systems. Vol. I: Theory. New York 1975. 417 pp.

[75] Kleinrock, L. (1976): Queueing systems. Vol. II: Computer applications. New York 1976. 549 pp.

[76] Kosten, L. (1937): Über Sperrungswahrscheinlichkeiten bei Staffelschaltungen. Elek. Nachr. Techn., Vol. 14 (1937) 5–12.

[77] Kruithof, J. (1937): Telefoonverkeersrekening. De Ingenieur, Vol. 52 (1937) : E15–E25.

[78] Kuczura, A. (1973): The interrupted Poisson process as an overflow process. The Bell System Technical Journal, Vol. 52 (1973) : 3, pp. 437–448.

[79] Kuczura, A. (1977): A method of moments for the analysis of a switched communication network's performance. IEEE Transactions on Communications, Vol. COM-25 (1977) : 2, 185–193.

[80] Lavenberg, S.S. & Reiser, M. (1980): Mean-value analysis of closed multichain queueing networks. Journal of the Association for Computing Machinery, Vol. 27 (1980) : 2, 313–322.

[81] Lévy-Soussan, G. (1968): Numerical Evaluation of the Erlang Function through a Continued-Fraction Algorithm. Electrical Communication, Vol. 43 (1968) : 2, 163–168.

[82] Lind, G. (1976): Studies on the probability of a called subscriber being busy. ITC-8, Eighth International Teletraffic Congress, Melbourne, November 1976. Paper 631. 8 pp.

[83] Listov-Saabye, H. & Iversen, V.B. (1989): ATMOS: a PC-based tool for evaluating multi-service telephone systems. IMSOR, Technical University of Denmark 1989, 75 pp. (In Danish).

[84] Little, J.D.C. (1961): A proof for the queueing formula L = λW. Operations Research, Vol. 9 (1961) : 383–387.

[85] Maral, G. (1995): VSAT networks. John Wiley & Sons, 1995. 282 pp.

[86] Marchal, W.G. (1976): An approximate formula for waiting time in single server queues. AIIE Transactions, December 1976, 473–474.

[87] Mejlbro, L. (1994): Approximations for the Erlang Loss Function. Technical University of Denmark 1994. 32 pp. NTS-14, Copenhagen 18–20 August 1998. Proceedings pp. 90–102. Department of Telecommunication, Technical University of Denmark.


[88] Messerli, E.J. (1972): Proof of a Convexity Property of the Erlang B Formula. The Bell System Technical Journal, Vol. 51 (1972) 951–953.

[89] Molina, E.C. (1922): The Theory of Probability Applied to Telephone Trunking Problems. The Bell System Technical Journal, Vol. 1 (1922) : 2, 69–81.

[90] Molina, E.C. (1927): Application of the Theory of Probability to Telephone Trunking Problems. The Bell System Technical Journal, Vol. 6 (1927) 461–494.

[91] Palm, C. (1941): Mätnoggrannhet vid bestämning af trafikmängd enligt genomsökningsförfarandet (Accuracy of measurements in determining traffic volumes by the scanning method). Tekn. Medd. K. Telegr. Styr., 1941, No. 7–9, pp. 97–115.

[92] Palm, C. (1943): Intensitätsschwankungen im Fernsprechverkehr. Ericsson Technics, No. 44, 1943, 189 pp. English translation by Chr. Jacobæus: Intensity Variations in Telephone Traffic. North-Holland Publ. Co. 1987.

[93] Palm, C. (1947): The assignment of workers in servicing automatic machines. Journal of Industrial Engineering, Vol. 9 (1958) : 28–42. First published in Swedish in 1947.

[94] Palm, C. (1947): Table of the Erlang loss formula. Telefonaktiebolaget L M Ericsson, Stockholm 1947. 23 pp.

[95] Palm, C. (1957): Some propositions regarding flat and steep distribution functions, pp. 3–17 in TELE (English edition), No. 1, 1957.

[96] Panken, F.J.M. & van Doorn, E.A.: Blocking probabilities in a loss system with arrivals in geometrically distributed batches and heterogeneous service requirements. IEEE/ACM Trans. on Networking, Vol. 1 (1993) : 6, 664–667.

[97] Postigo-Boix, M. & García-Haro, J. & Aguilar-Igartua, M. (2001): (Inverse Multiplexing of ATM) IMA technical foundations, application and performance analysis. Computer Networks, Vol. 35 (2001) 165–183.

[98] Press, W.H. & Teukolsky, S.A. & Vetterling, W.T. & Flannery, B.P. (1995): Numerical recipes in C: the art of scientific computing. 2nd edition. Cambridge University Press, 1995. 994 pp.

[99] Rabe, F.W. (1949): Variations of telephone traffic. Electrical Communications, Vol. 26 (1949) 243–248.

[100] Rapp, Y. (1964): Planning of junction network in a multi-exchange area. Ericsson Technics 1964, pp. 77–130.

[101] Rapp, Y. (1965): Planning of junction network in a multi-exchange area. Ericsson Technics 1965, No. 2, pp. 187–240.

[102] Riordan, J. (1956): Derivation of moments of overflow traffic. Appendix 1 (pp. 507–514) in (Wilkinson, 1956 [119]).


[103] Roberts, J.W. (1981): A service system with heterogeneous user requirements – applications to multi-service telecommunication systems. Performance of Data Communication Systems and their Applications. G. Pujolle (editor), North-Holland Publ. Co. 1981, pp. 423–431.

[104] Roberts, J.W. (2001): Traffic theory and the Internet. IEEE Communications Magazine, Vol. 39 (2001) : 1, 94–99.

[105] Ross, K.W. & Tsang, D. (1990): Teletraffic engineering for product-form circuit-switched networks. Adv. Appl. Prob., Vol. 22 (1990) 657–675.

[106] Ross, K.W. & Tsang, D. (1990): Algorithms to determine exact blocking probabilities for multirate tree networks. IEEE Transactions on Communications, Vol. 38 (1990) : 8, 1266–1271.

[107] Rönnblom, N. (1958): Traffic loss of a circuit group consisting of both-way circuits which is accessible for the internal and external traffic of a subscriber group. TELE (English edition), 1959 : 2, 79–92.

[108] Sanders, B. & Haemers, W.H. & Wilcke, R. (1983): Simple approximate techniques for congestion functions for smooth and peaked traffic. ITC-10, Tenth International Teletraffic Congress, Montreal, June 1983. Paper 4.4b-1. 7 pp.

[109] Stepanov, S.S. (1989): Optimization of numerical estimation of characteristics of multiflow models with repeated calls. Problems of Information Transmission, Vol. 25 (1989) : 2, 67–78.

[110] Störmer, H. (1963): Asymptotische Näherungen für die Erlangsche Verlustformel. AEÜ, Archiv der Elektrischen Übertragung, Vol. 17 (1963) : 10, 476–478.

[111] Sutton, D.J. (1980): The application of reversible Markov population processes to teletraffic. A.T.R. Vol. 13 (1980) : 2, 3–8.

[112] Szybicki, E. (1967): Numerical Methods in the Use of Computers for Telephone Traffic Theory Applications. Ericsson Technics 1967, pp. 439–475.

[113] Techguide (2001): Inverse Multiplexing – scalable bandwidth solutions for the WAN. Techguide (The Technology Guide Series), 2001, 46 pp. <www.techguide.com>

[114] Vaulot, E. & Chaveau, J. (1949): Extension de la formule d'Erlang au cas où le trafic est fonction du nombre d'abonnés occupés. Annales de Télécommunications, Vol. 4 (1949) 319–324.

[115] Veir, B. (2002): Proposed Grade of Service chapter for handbook. ITU-T Study Group 2, WP 3/2. September 2001. 5 pp.

[116] Villén, M. (2002): Overview of ITU Recommendations on traffic engineering. ITU-T Study Group 2, COM 2-KS 48/2-E. May 2002. 21 pp.


[117] Wallström, B. (1964): A distribution model for telephone traffic with varying call intensity, including overflow traffic. Ericsson Technics, 1964, No. 2, pp. 183–202.

[118] Wallström, B. (1966): Congestion studies in telephone systems with overflow facilities. Ericsson Technics, No. 3, 1966, pp. 187–351.

[119] Wilkinson, R.I. (1956): Theories for toll traffic engineering in the U.S.A. The Bell System Technical Journal, Vol. 35 (1956) 421–514.

Author index
Abate, J., 270, 361 Aguilar-Igartua, M., 179, 367 Andersen, B., 346, 361 Ash, G.R., 361 Baskett, F., 336, 361 Bear, D., 220, 361 Bech, N.I., 171, 361 Bolotin, V.A., 361 Bretschneider, G., 171, 174, 361 Brockmeyer, E., 123, 167, 171, 271, 361 Burke, P.J., 323, 324, 361 Buzen, J.P., 331 Cardoso, D., 122, 363 Chandy, K.M., 336, 361 Chaveau, J., 368 Chia, S., 363 Christensen, P.V., 362 Cobham, A., 287, 362 Conway, A.E., 341, 362 Cooper, R.B., 362 Cox, D.R., 66, 362 Craveirinha, J., 122, 363 Crommelin, C.D., 271, 362 Delbrouck, L.E.N., 216, 362 Dickmeiss, A., 362 Eilon, S., 82, 362 Elldin, A., 362 Engset, T.O., 124, 142, 362 Erlang, A.K., 21, 72, 108, 363 Erramilli, A., 365 Esteves, J.S., 122, 363 Farmer, R.F., 123, 363 Feller, W., 63, 245, 352, 363 Flannery, B.P., 367 Fortet, R., 212, 363 Fredericks, A.A., 177, 363 Fry, T.C., 84, 124, 271, 272, 363 García-Haro, J., 179, 367 Georganas, N.D., 341, 362 Gordon, W.J., 326, 363 Grandjean, Ch., 212, 363 Grillo, D., 363 Haemers, W.H., 181, 368 Halstrøm, H.L., 361 Hansen, N.H., 346, 361 Hayward, W.S. Jr., 177, 355, 363 Isham, V., 362 ITU-T, 363 Iversen, V.B., 23–26, 68, 73, 136, 154, 201, 204, 216, 274, 346, 348, 354, 355, 358, 361, 363, 364, 366 Jackson, J.R., 324, 325, 364 Jackson, R.R.P., 364 Jagerman, D.L., 123, 364 Jagers, A.A., 122, 364 Jensen, Arne, 84, 110, 126, 190, 194, 225, 226, 235, 239, 240, 361, 364, 365 Jerkins, J.L., 365 Johannsen, F., 32, 365 Johansen, J., 179, 365 Johansen, K., 179, 365 Joys, L.A., 140, 365 Karlsson, S.A., 347, 365 Kaufman, I., 123, 363 Kaufman, J.S., 212, 365 Keilson, J., 270, 365 Kelly, F.P., 324, 365 Kendall, D.G., 262, 281, 282, 365

Khintchine, A.Y., 75, 272, 366 Kingman, J.F.C., 191, 366 Kleinrock, L., 285, 327, 342, 343, 366 Kosten, L., 169, 366 Kruithof, J., 366 Kuczura, A., 96, 182, 184, 366 Larsen, M., 362 Lavenberg, S.S., 334, 366 Leung, K.K., 363 Lind, G., 362, 366 Listov-Saabye, H., 204, 366 Little, J.D.C., 366 Lévy-Soussan, G., 124, 366 Maral, G., 14, 366 Marchal, W.G., 280, 366 Mejlbro, L., 124, 366 Messerli, E.J., 122, 367 Miller, H.D., 362 Moe, K., 126 Molina, E.C., 124, 367 Muntz, R.R., 336, 361 Neidhardt, A.L., 365 Newell, G.F., 326, 363 Nielsen, B.F., 68, 364 Palacios, F.G., 336, 361 Palm, C., 43, 59, 72, 93, 117, 245, 352, 355, 367 Panken, F.J.M., 367 Postigo-Boix, M., 179, 367 Press, W.H., 367 Rönnblom, N., 195, 368 Rabe, F.W., 352, 367 Raikov, D.A., 95 Rapp, Y., 124, 174, 367 Rasmussen, C., 179, 365 Reiser, M., 334, 366 Riordan, J., 170, 367 Roberts, J.W., 212, 368 Ross, K.W., 216, 368 Samuelson, P.A., 126 Sanders, B., 136, 181, 235, 364, 368 Skoog, R.A., 363 Störmer, H., 124, 368 Stepanov, S.N., 115, 204, 364, 368 Sutton, D.J., 191, 368 Szybicki, E., 123, 124, 368 Techguide, 179, 368 Teukolsky, S.A., 367 Tsang, D., 216, 368 van Doorn, E.A., 122, 364, 367 Vaulot, E., 368 Veir, B., 35, 368 Vetterling, W.T., 367 Villén, M., 368 Wallström, B., 151, 171, 369 Wang, J.L., 365 Whitt, W., 270, 361 Wilcke, R., 181, 368 Wilkinson, R.I., 171, 369


Index
A-subscriber, 7 accessibility full, 101 delay system, 229 Engset, 133 Erlang-B, 101 restricted, 162 ad-hoc network, 94 Aloha protocol, 90, 107 alternative routing, 162, 223 arrival process generalised, 182 arrival theorem, 143, 334 assignment demand, 15 fixed, 15 ATMOS-tool, 204 availability, 101 B-ISDN, 8 B-subscriber, 7 balance detailed, 192 global, 188 local, 192 balance equations, 105 balking, 265 Basic Bandwidth Unit, 195, 296 batch Poisson process, 157 batch-blocking, 158 BBU, 195, 201, 296 BCC, 102 BCH, 124 BCMP queueing networks, 336, 361 Berkeley's method, 181 billing, 355 Binomial distribution, 92, 135 traffic characteristics, 139 truncated, 142 binomial moment, 44 Binomial process, 91, 92 Binomial theorem, 53 Binomial-case, 135 blocked calls cleared, 102 Blocked Calls Held, BCH, 124 blocking, 175 blocking concept, 26 BPP-traffic, 135, 193, 194 Brockmeyer's system, 169, 171 Burke's theorem, 323 bursty traffic, 171 Busy, 32 busy hour, 23, 24 time consistent, 24 Buzen's algorithm, 331 CAC moving window, 122 call duration, 30 call intensity, 21 capacity allocation, 341 carried traffic, 20, 109 carrier frequency system, 13 CCS, 22 cdf, 42 central moment, 44 central server system, 331, 332 chain queueing network, 322, 337 channel allocation, 9 charging, 347 circuit-switching, 14 circulation time, 246 class limitation, 193 client-server, 245 code receiver, 7

code transmitter, 7 coefficient of variation, 44, 353 complementary distribution function, 42 compound distribution, 58 Poisson distribution, 352 concentration, 26 conditional probability, 46 confidence interval, 356 congestion call, 27, 108, 203 time, 27, 108, 202 traffic, 27, 109, 204 virtual, 27 connection-less, 14, 15 connection-oriented, 14 conservation law, 285 control channel, 9 control path, 6 convolution, 54, 56 convolution algorithm loss systems, 200 multiple chains, 337 single chain, 329 cord, 7 Cox distribution, 66 Cox-2 arrival process, 184 CSMA, 16 cumulants, 44 cut equations, 104 cyclic search, 8 D/M/1, 283 data signalling speed, 22 de-convolution, 204 death rate, 47 decomposition, 68 decomposition theorem, 95 DECT, 11 Delbrouck's algorithm, 216 density function, 42 dimensioning, 126 fixed blocking, 126 improvement principle, 127 direct route, 162 distribution function, 42 drop tail, 270 Ek/D/r, 277 EART, 171 EBHC, 22 EERT-method, 174 effective bandwidth, 195 Engset distribution, 141 Engset's formula recursion, 147 Engset-case, 135 equilibrium points, 269 equivalent system, 173 erlang, 20 Erlang B-formula inverse, 123 Erlang's B-formula, 107, 108 convexity, 122 hyper-exponential service, 189 multi-dimensional, 187 recursion, 116 Erlang's C-formula, 232 Erlang's delay system, 229 state transition diagram, 230 Erlang's extended B-formula, 119 Erlang's ideal grading, 163 Erlang's interconnection formula, 164 Erlang-B formula multi-dimensional, 190 Erlang-book, 363 Erlang-case, 134 Erlang-k distribution, 56, 92 ERM = ERT-method, 171 ERT-method, 171 exponential distribution, 42, 87, 92 in parallel, 59 decomposition, 68 in series, 55 minimum of k, 53 factorial moment, 44 fair queueing, 293 Feller-Jensen's identity, 84 flat distribution, 59


flat rate, 356 flow-balance equation, 325 forced disconnection, 28 form factor, 45 Fortet & Grandjean algorithm, 212 forward recurrence time, 50 fractile, 45 Fredericks & Hayward's method, 177 gamma distribution, 71 gamma function, 46 incomplete, 119 geometric distribution, 92 GI/G/1, 279 GI/M/1, 280 FCFS, 283 GoS, 126 Grade-of-Service, 126 GSM, 11 hand-over, 10 hazard function, 47 HCS, 176 heavy-tailed distribution, 72, 154 hierarchical cellular system, 176 HOL, 264 hub, 15 human-factors, 32 hunting cyclic, 102 ordered, 102 random, 102 sequential, 102 hyper-exponential distribution, 60 hypo-exponential, 55 hypo-exponential distribution, 55 IDC, 78 IDI, 78 IID, 78 IMA, 179 improvement function, 110, 238 improvement principle, 127 improvement value, 130 independence assumption, 327 index of dispersion

INDEX counts, 78 intervals, 78 insensitivity, 121 Integrated Services Digital Network, 8 intensity, 92 inter-active system, 246 interrupted Poisson process, 96, 182 interval representation, 77, 84, 346 inverse multiplexing, 179 IPP, 96, 98, 182 Iridium, 11 IS = Innite Server, 323 ISDN, 8 iterative studies, 3 ITU-T, 228 Jackson net, 324 jockeying, 265 Karlsson charging, 347, 354, 356 Kaufman & Roberts algorithm, 212 Kingmans inequality, 280 Kleinrocks square root law, 342 Kolmogorovs criteria, 192 Kostens system, 169 Kruithofs double factor method, 218 lack of memory, 47 Lagrange multiplier, 226, 239, 343 LAN, 16 last-look principle, 347 leaky bucket, 279 life-time, 41 Lindley equations, 267 line-switching, 14 Littles theorem, 82 load function, 266 local exchange, 13 log-normal distribution, 72 loss system, 27 M/D/1/k, 278 M/D/n, 271, 276 M/G/, 323 M/G/1, 267 M/G/1-LCFS-PR, 324

M/G/1-PS, 323
M/G/1/k, 270
M/G/n-GPS, 323
M/M/1, 243, 301
M/M/n, 229, 308, 323
M/M/n, FCFS, 240
M/M/n/S/S, 245
machine repair model, 229
macrocell, 176
man-machine, 2
Marchal's approximation, 280
Markov property, 41
Markovian property, 47
mean value, 44
mean waiting time, 237
measuring methods, 346
   continuous, 346, 350
   discrete, 346
   horizontal, 348
   vertical, 347
measuring period
   unlimited, 350, 353
median, 45
mesh network, 13, 15
message-switching, 15
microcell, 176
microprocessor, 6
mobile communication, 9
modeling, 2
Moe's principle, 126, 224, 239, 365
   delay systems, 239
   loss systems, 127
multi-dimensional
   Erlang-B, 187
   loss system, 193
multi-rate traffic, 195, 208
multinomial coefficient, 68
multinomial distribution, 67
multinomial theorem, 68
multiplexing
   frequency, 13
   pulse-code, 13
   time, 13
MVA algorithm
   single chain, 322, 334
negative binomial case, 135
negative binomial distribution, 92
network management, 228
Newton-Raphson iteration, 123
Newton-Raphson's method, 174
node equations, 103
non-central moment, 43
non-preemptive, 264
notation
   distributions, 71
   Kendall's, 262
number representation, 76, 84, 346
O'Dell grading, 162
offered traffic, 21
   definition, 102, 134
on/off source, 136
ordinarity, 81
overflow theory, 161
packet switching, 15
paging, 11
Palm's form factor, 45
Palm's identity, 43
Palm's machine-repair model, 246
   optimising, 254
Palm's theorem, 93
Palm-Wallström case, 135
paradox, 242
parcel blocking, 175
Pareto distribution, 72, 154
partial blocking, 158
Pascal distribution, 92
Pascal-case, 135
PASTA property, 93, 109, 188
PCM system, 13
PCT-I, 102, 134
PCT-II, 135, 136
pdf, 42
peakedness, 106, 110, 171
percentile, 45
persistence, 32
point process, 76
   independence, 80
   simple, 76, 81
   stationary, 80
Poisson distribution, 88, 92, 103
   calculation, 116
   truncated, 107, 108
Poisson process, 75, 92
Poisson-case, 134
polynomial distribution, 67, 303
polynomial trial, 67
potential traffic, 22
preemptive, 264
preferential traffic, 33
primary route, 162
Processor-Sharing, 293
product form, 188, 325
protocol, 8
PS, 294
pseudo-random traffic, 136
Pure Chance Traffic
   Type I, 102, 134
   Type II, 135
QoS, 126
Quality-of-Service, 126
quantile, 45
queueing networks, 321
Raikov's theorem, 95
random traffic, 134
random variable, 41
   in parallel, 58
   in series, 54
   jth largest, 53
Rapp's approximation, 174
reduced load method, 217
regeneration points, 269
regenerative process, 269
register, 6, 7
rejected traffic, 21
relative accuracy, 353
reneging, 265
renewal process, 78
residual life-time, 46
response time, 245
reversible process, 191, 193, 324
ring network, 13
roaming, 10
roulette simulation, 359
Round Robin, 293
RR, 293


sampling theory, 348
Sanders' method, 181
scanning method, 347, 353
secondary route, 162
service protection, 162
service ratio, 255
service time, 30
simplicity, 81
SJF, 288
SLA, 37
slot, 90
SM, 21
smooth traffic, 141, 171
sojourn time, 245
space-divided system, 6
SPC system, 7
sporadic source, 136
square root law, 342
standard deviation, 44
star network, 13
state transition diagram
   general procedure, 114
statistical equilibrium, 104
statistical multiplexing, 26
STD, 101
steep distributions, 55
stochastic process, 5
store-and-forward, 15
strategy, 3
structure, 3
subscriber-behaviour, 32
superposition theorem, 93
survival distribution function, 42
symmetric queueing systems, 302, 309, 324
table
   Erlang's B-formula, 117
telecommunication network, 12
telephone system
   conventional, 5
   software controlled, 7
teletraffic theory
   terminology, 3
   traffic concepts, 18
time distributions, 41
time division, 6
time-out, 28, 265
traffic channels, 9
traffic concentration, 26
traffic intensity, 20, 351
traffic matrix, 217
traffic measurements, 345
traffic splitting, 178
traffic unit, 20
traffic variations, 23
traffic volume, 21, 351
transit exchange, 13
transit network, 13
triangle optimization, 227
user perceived QoS, 27
utilization, 22, 127
variate, 72
VBR, 195
virtual circuit protection, 193
virtual queue length, 235
virtual waiting time, 266
voice path, 6
VSAT, 14
waiting time distribution, 49
   FCFS, 240
Weibull distribution, 48, 71
Westerberg's distribution, 355
Wilkinson's equivalence method, 171
wired logic, 3
wireless communication, 9
work conserving, 266


Technical University of Denmark, DTU Photonics, Networks group

Teletraffic Engineering & Network Planning, Course 34 340

Exercise 9.27 Classical models

(Exam 2010)

We consider Erlang's loss system with n = 4 channels. The arrival rate is λ = 4 calls per time-unit, and the service rate is μ = 2 calls per time-unit. In the following we assume statistical equilibrium.

1. Find the offered traffic.

2. Find the state probabilities of the system.

3. Find the time congestion by using the recursion formula for Erlang-B. The individual steps of the recursion should be visible in the answer.

4. Find the distribution of the number of blocked calls during a busy period where all channels are busy. Find the mean value.

We now consider Erlang's delay system with the same parameters as above.

5. Find the probability of delay, the mean waiting time for all calls, and the mean waiting time for delayed calls.

We then consider Palm's machine-repair model with S = 4 terminals and n = 1 computer. Thinking times are exponentially distributed with rate γ = 1 [time-units⁻¹], and service times are exponentially distributed with rate μ = 2 [time-units⁻¹].

6. Find the utilization of the computer and the mean waiting time at the computer.



Solution to Exercise 9.27 (Exam 2010): Classical models

Question 1:

By definition the offered traffic is the average number of call attempts per mean service time (1.2):

   A = λ/μ = 4/2 = 2 [erlang].

Question 2: The state transition diagram is shown below. We include the transition due to a blocked call (which does not change the state) because we use it in Question 4.
[State transition diagram: states 0–4 with arrival rate λ = 4 out of every state (the arrival in state 4 is blocked and leaves the state unchanged) and departure rate iμ = 2i from state i.]

The relative state probabilities q(i) are obtained by cut equations, and the true state probabilities are obtained by normalization:

   q(0) = 1        p(0) = 3/21
   q(1) = 2        p(1) = 6/21
   q(2) = 2        p(2) = 6/21
   q(3) = 4/3      p(3) = 4/21
   q(4) = 2/3      p(4) = 2/21

   Total = 7       Total = 1
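The normalization above is easy to reproduce with exact fractions; a minimal sketch (variable names are mine):

```python
from fractions import Fraction
from math import factorial

A, n = 2, 4
# Cut equations give the relative probabilities q(i) = A**i / i!, with q(0) = 1.
q = [Fraction(A) ** i / factorial(i) for i in range(n + 1)]
total = sum(q)                       # 7
p = [x / total for x in q]           # 3/21, 6/21, 6/21, 4/21, 2/21 (reduced)
print(total, p)
```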

Question 3:

We use the recursion formula (4.29) for evaluating Erlang's B-formula (Erlang's first formula):

   E1,x(A) = A·E1,x−1(A) / (x + A·E1,x−1(A)),   E1,0(A) = 1,   x = 1, 2, ...

Inserting A = 2 erlang we find:

   E1,0(2) = 1
   E1,1(2) = 2·1 / (1 + 2·1) = 2/3
   E1,2(2) = 2·(2/3) / (2 + 2·(2/3)) = 2/5
   E1,3(2) = 2·(2/5) / (3 + 2·(2/5)) = 4/19
   E1,4(2) = 2·(4/19) / (4 + 2·(4/19)) = 2/21

which is in agreement with state probability p(4) in Question 2.

Question 4: Let us denote the arrival rate, respectively the service rate, in state 4 by λ(4) and μ(4). Given that we are in state 4, the probability that the next event is a call attempt (which will be blocked) is:

   p = λ(4) / (λ(4) + μ(4)) = 4/(4 + 8) = 1/3.

The probability that the next event is a departure, which terminates the busy period, is:

   1 − p = μ(4) / (λ(4) + μ(4)) = 8/(4 + 8) = 2/3.

As there is no memory in the process, the probability that i call attempts are blocked during a busy period becomes a geometric distribution:

   p(i) = (1 − 1/3)·(1/3)^i,   i = 0, 1, 2, ...

As the distribution starts with the value zero (see Table 3.1), the mean value becomes:

   m1 = 1/(1 − 1/3) − 1 = 1/2.
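The Erlang-B recursion from Question 3 and the mean from Question 4 can be checked numerically; a small sketch (the function name `erlang_b` is mine):

```python
def erlang_b(n, a):
    """Erlang-B blocking probability via the stable recursion
    E_x = a*E_{x-1} / (x + a*E_{x-1}), starting from E_0 = 1."""
    e = 1.0
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

print(erlang_b(4, 2.0))   # 2/21 = 0.0952...

# Question 4: the number of blocked calls per busy period is geometric,
# starting at zero, with parameter p = 1/3, so the mean is p/(1-p) = 1/2.
p = 1 / 3
print(p / (1 - p))
```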

Question 5:

This is Erlang's delay system with the same parameters n = 4 and A = 2 as above. We have a simple relationship between Erlang's B-formula and Erlang's C-formula (9.9):

   E2,n(A) = n·E1,n(A) / (n − A·(1 − E1,n(A))),

   E2,4(2) = 4·(2/21) / (4 − 2·(1 − 2/21)) = 4/23 = 0.1739.

From (9.15), respectively (9.17), we get (s = 1/μ is the mean holding time):

   W = W4(2) = E2,4(2)·s/(n − A) = (4/23)·(1/2)/(4 − 2) = 1/23 [time-units],

   w = w4(2) = s/(n − A) = (1/2)/(4 − 2) = 1/4 [time-units].

Question 6: Palm's machine/repair model has the same state transition diagram as Erlang's loss system. We observe that the service ratio μ/γ = 2 and the number of terminals S = 4 are the same parameters as above. We have changed the time scale so that the mean service time of the computer is 1/2. The computer is working except when all terminals (channels) are busy (9.37):

   y = 1 − E1,S(μ/γ) = 1 − 2/21 = 19/21.

The waiting time is equal to the response time (9.46) minus the service time:

   mw = mr − ms = S·ms / (1 − E1,S(μ/γ)) − mt − ms = 4·(1/2)/(19/21) − 1 − 1/2 = 42/19 − 3/2 = 27/38 [time-units].
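Questions 5 and 6 can be cross-checked with exact rational arithmetic; a minimal sketch (function and variable names are mine):

```python
from fractions import Fraction

def erlang_b(n, a):
    # Erlang-B recursion, kept exact by using Fraction arithmetic.
    e = Fraction(1)
    for x in range(1, n + 1):
        e = a * e / (x + a * e)
    return e

n, A = 4, Fraction(2)
E1 = erlang_b(n, A)                      # 2/21
E2 = n * E1 / (n - A * (1 - E1))         # Erlang-C from Erlang-B: 4/23
s = Fraction(1, 2)                       # mean holding time 1/mu
W = E2 * s / (n - A)                     # mean wait, all calls: 1/23
w = s / (n - A)                          # mean wait, delayed calls: 1/4

# Palm's machine-repair model: S = 4 terminals, service ratio mu/gamma = 2.
S, m_s, m_t = 4, Fraction(1, 2), Fraction(1)
y = 1 - erlang_b(S, Fraction(2))         # computer utilization: 19/21
m_w = S * m_s / y - m_t - m_s            # mean waiting time: 27/38
print(E2, W, w, y, m_w)
```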



Exercise 6.15

(Exam 2010)

E2/M/n loss system

We consider a loss system (BCC = Blocked Calls Cleared) with n channels. The holding times are exponentially distributed with mean value 1/μ. The inter-arrival times are Erlang-2 distributed with arrival rate 2λ in each phase.

1. Find the offered traffic.

The state of the system is defined by (i, j), where i is the number of customers in the system and j is the phase of the arrival process: phase one = a, phase two = b. The structure of the state transition diagram is as follows:
[Structure of the state transition diagram: two rows of states, (0,a), (1,a), (2,a), ..., (n−1,a), (n,a) and (0,b), (1,b), (2,b), ..., (n−1,b), (n,b), with transitions between neighbouring states within each row and between the two phase rows.]

2. Complete the state transition diagram by inserting the transition rates.

3. Find the time congestion E and the traffic congestion C expressed by the state probabilities p(i, j).

4. Find the state probabilities π(i, j) observed by a call just before entering the system, and find the call congestion B.

5. Find the numerical values of the state probabilities when n = 2 channels, λ = 1 [time-units⁻¹], and μ = 1 [time-units⁻¹]. Start by using p(2, b) = 4/58, and use the node balance equations for states [2b], [2a], [1b], [1a], etc.

6. Find the numerical values of the time congestion E, the call congestion B, and the traffic congestion C.



Solution to Exercise 6.15 (Exam 2010, Exercise 2010-2): E2/M/n loss system

Question 1:

By definition the offered traffic is the average number of call attempts per mean service time (1.2). The mean inter-arrival time is:

   1/(2λ) + 1/(2λ) = 1/λ,

so that the average arrival rate becomes λ calls per time unit. The offered traffic then becomes:

   A = λ/μ.

Question 2:
[Completed state transition diagram: from state (i, a) the process moves to (i, b) at rate 2λ; from state (i, b) an arrival occurs at rate 2λ, leading to state (i+1, a) — except in state (n, b), where the call is blocked and the process returns to (n, a); departures take (i, j) to (i−1, j) at rate iμ.]

Question 3: The time congestion E is per definition the proportion of time all channels are busy, i.e. the proportion of time call attempts would be blocked:

   E = p(n, a) + p(n, b).

The traffic congestion C is by definition the proportion of the offered traffic A which is blocked. The carried traffic is:

   Y = Σ_{i=0}^{n} i·{p(i, a) + p(i, b)}.

Thus we get:

   C = (A − Y)/A.

Question 4:

Call attempts are only generated when the arrival process is in phase b, so a call attempt generated in state [i, b] sees this state just before entering. The proportion of call attempts observing state [i, b] becomes:

   π(i) = 2λ·p(i, b) / (2λ·p(0, b) + 2λ·p(1, b) + ... + 2λ·p(n, b)) = p(i, b) / Σ_{j=0}^{n} p(j, b).

The call congestion is by definition the proportion of call attempts which are blocked. Only call attempts generated in state [n, b] are blocked:

   B = π(n) = p(n, b) / Σ_{j=0}^{n} p(j, b).

Question 5: For n = 2 channels and λ = μ = 1 [time-units⁻¹] we get the state transition diagram below.

[State transition diagram with the six states (0,a), (1,a), (2,a), (0,b), (1,b), (2,b) and the rates from Question 2 inserted, with 2λ = 2.]
Letting p(2, b) = 1 we have, under the assumption of statistical equilibrium, the following flow balance equations for the nodes: the flow out of a state must equal the flow into this state. Below we always put the flow out on the left-hand side. Choosing p(2, b) = 1 we get:

   State [2, b]:  (2 + 2)·p(2, b) = 2·p(2, a)                ⟹  p(2, a) = 2
   State [2, a]:  (2 + 2)·p(2, a) = 2·p(2, b) + 2·p(1, b)    ⟹  p(1, b) = 3
   State [1, b]:  (1 + 2)·p(1, b) = 2·p(2, b) + 2·p(1, a)    ⟹  p(1, a) = 7/2
   State [1, a]:  (1 + 2)·p(1, a) = 2·p(2, a) + 2·p(0, b)    ⟹  p(0, b) = 13/4
   State [0, b]:  2·p(0, b) = 1·p(1, b) + 2·p(0, a)          ⟹  p(0, a) = 7/4


We thus have the following relative state probabilities q. The true state probabilities p are obtained by normalization:

   q(2, b) = 1         p(2, b) =  4/58
   q(2, a) = 2         p(2, a) =  8/58
   q(1, b) = 3         p(1, b) = 12/58
   q(1, a) = 7/2       p(1, a) = 14/58
   q(0, b) = 13/4      p(0, b) = 13/58
   q(0, a) = 7/4       p(0, a) =  7/58

   Total  = 29/2       Total   =  1


This agrees with the given value of p(2, b). We notice that the phase-a states and the phase-b states each add up to one half, as expected. Only by starting with state (n, b) are we able to find the relative state probabilities explicitly (cf. a system with IPP arrival process, Example 6.7.1). We cannot truncate the state probabilities of a system to a system with fewer channels and obtain the new state probabilities by re-normalizing: the relative state probabilities change values, and we have to recalculate all state probabilities from scratch. The system is not reversible.

Question 6: From these numerical values we find A = 1 and the congestion values:

   E = p(2, a) + p(2, b) = 12/58,

   B = p(2, b) / (p(0, b) + p(1, b) + p(2, b)) = 4/29 = 8/58,

   Y = 1·(p(1, a) + p(1, b)) + 2·(p(2, a) + p(2, b)) = 50/58,

   C = (A − Y)/A = 8/58.

We thus notice that B = C, which is always the case when we have a renewal arrival process and exponentially distributed service times.
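The back-substitution in Question 5 and the congestion values in Question 6 can be verified with exact arithmetic; a sketch (names are mine):

```python
from fractions import Fraction as F

# Relative state probabilities for E2/M/2 with lambda = mu = 1, obtained from
# the node balance equations, starting from q[(2,'b')] = 1.
q = {(2, 'b'): F(1)}
q[(2, 'a')] = (2 + 2) * q[(2, 'b')] / 2                      # state [2,b]
q[(1, 'b')] = ((2 + 2) * q[(2, 'a')] - 2 * q[(2, 'b')]) / 2  # state [2,a]
q[(1, 'a')] = ((1 + 2) * q[(1, 'b')] - 2 * q[(2, 'b')]) / 2  # state [1,b]
q[(0, 'b')] = ((1 + 2) * q[(1, 'a')] - 2 * q[(2, 'a')]) / 2  # state [1,a]
q[(0, 'a')] = (2 * q[(0, 'b')] - 1 * q[(1, 'b')]) / 2        # state [0,b]

total = sum(q.values())                  # 29/2
p = {s: v / total for s, v in q.items()}

E = p[(2, 'a')] + p[(2, 'b')]                                # 12/58
B = p[(2, 'b')] / (p[(0, 'b')] + p[(1, 'b')] + p[(2, 'b')])  # 8/58
Y = 1 * (p[(1, 'a')] + p[(1, 'b')]) + 2 * E                  # 50/58
C = (1 - Y) / 1                                              # 8/58 (A = 1)
print(E, B, Y, C)
```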
2010-05-19



Exercise 10.27

(Exam 2010)

Priority queueing system

We consider a single-server system M/G/1 with Poisson arrival processes. Two types of customers arrive to the system:

Type-1: arrival rate λ1 = 0.1 [time-units⁻¹], constant service time s1 = 1 [time-unit].
Type-2: arrival rate λ2 = 0.2 [time-units⁻¹], Erlang-2 distributed service time with mean value s2 = 2 [time-units].

1. Find the total offered traffic and the total mean service time.

2. Find the mean waiting time for all customers without priority, using Pollaczek-Khintchine's formula.

We now assume that type-1 customers have non-preemptive priority over type-2 customers.

3. Find the mean waiting time for each class.

4. Check that Kleinrock's conservation law is fulfilled when comparing no priority with the non-preemptive discipline.

We now introduce a type-3 class of customers (best-effort traffic) which can be preempted by both type-1 and type-2, so that it does not influence the service of the two first classes:

Type-3: arrival rate λ3 = 0.2 [time-units⁻¹], exponentially distributed service time with mean value s3 = 2 [time-units].

5. Find the mean waiting time for type-3 customers.

We consider only the first two classes and introduce processor sharing (PS) without priority for serving these two classes. The total offered traffic and the total mean service time were obtained in Question 1.

6. Find the mean waiting time for all customers and the mean waiting time for each of the two classes.



Solution to Exercise 10.27 (Exam 2010): Priority queueing system

Question 1: By definition the offered traffic is the average number of calls per mean service time (1.2):

   A1 = λ1·s1 = 0.1·1 = 0.1 [erlang],
   A2 = λ2·s2 = 0.2·2 = 0.4 [erlang],
   At = A1 + A2 = 0.5 [erlang].

The total arrival rate is λt = λ1 + λ2 = 0.3 [time-units⁻¹]. From the total offered traffic we then get the mean service time for all calls:

   st = At/λt = 0.5/0.3 = 5/3 [time-units].

We could of course also find the mean service time for all customers by weighting the mean values (combination in parallel, Sec. 2.3.2):

   st = λ1/(λ1 + λ2)·s1 + λ2/(λ1 + λ2)·s2 = (0.1/0.3)·1 + (0.2/0.3)·2 = 5/3,   q.e.d.

Question 2: We use Pollaczek-Khintchine's formula (10.3), where V is given by (10.4) for the total traffic process, or by (10.59) and (10.60) when combining more classes of customers. A constant service time has second moment equal to the mean value squared, whereas Erlang-2 has the second moment given by (2.16). As the mean value is 2 for the Erlang-2 distribution, each phase has rate λ = 1 [time-units⁻¹]. We get for the two types:

   m2,1 = s1² = 1 [time-units²],
   m2,2 = 2·(2 + 1)/1² = 6 [time-units²].

Thus we get:

   V1,2 = V1 + V2 = (λ1/2)·m2,1 + (λ2/2)·m2,2 = (0.1/2)·1 + (0.2/2)·6 = 0.05 + 0.6 = 0.65 [time-units].

Finally, using Pollaczek-Khintchine's formula (10.3) we have:

   W = V1,2/(1 − At) = 0.65/(1 − 0.5),

   W = 1.30 [time-units].

Question 3: For the non-preemptive queueing strategy we find the mean waiting time for type-1 by (10.66) and for type-2 by (10.68):

   W1 = V1,2/(1 − A1) = 0.65/(1 − 0.1) = 13/18 [time-units],

   W2 = W1/(1 − At) = V1,2/((1 − A1)(1 − (A1 + A2))) = (13/18)/(1 − 0.5) = 13/9 [time-units].

Question 4: For the non-priority system we have from Question 2:

   At·W = 0.5·1.30 = 13/20 [time-units].
For non-preemptive priorities we get from Question 3:


   Σ_{i=1}^{2} Ai·Wi = 0.1·(13/18) + 0.4·(13/9) = 13/20 [time-units].

INDEX Question 5:

391

The internal discipline between type-1 and type-2 has no inuence upon the service of type-3. The second moment of the exponential distribution is given by (2.15). We have: 3 0.2 2 m2,3 = 1 2 = 0.8 2 2 (2)

V3 =

V1,3 = V1 + V2 + V3 = 0.05 + 0.6 + 0.8 V1,3 = 1.45 [time-units] . As type-3 is preempted by both type-1 and type-2 we nd (10.77) A0,p1 V1,p sp , + {1 A0,p1 } {1 A0,p } 1 A0,p1 V1,3 A1 + A 2 + s3 {1 (A1 + A2 )}{1 (A1 + A2 + A3 )} 1 (A1 + A2 ) 0.5 1.45 + 2 = 29.5 + 2 , 0.5 0.1 0.5

Wp =

W3 = =

W3 = 31.5 [time-units] . Question 6: M/G/1PS has the same state probabilities as M/M/1 (10.79) or Sec. 12.2: p(i) = (1 At ) Ai , t i = 0, 1, 2 . . .

where the total oered trac At and the total mean service time st was obtained in Question 2. The mean waiting time for all jobs becomes (9.32): At 0.5 5 st = , 1 At 0.5 3 5 [time-units] . 3

W = W =

As the mean waiting time is proportional to the job duration, type-2 jobs on the average has the double waiting time of type-1 jobs: W2 = 2 W1 . So we split the total waiting time

392 according to the number of jobs: W = = 1 2 5 = W1 + W2 3 1 + 2 1 + 2 1 2 5 W1 + (2 W1 ) = W1 , 3 3 3

INDEX

W1 = 1 [time-units] , W2 = 2 [time-units] .
2010-05-18

INDEX Technical University of Denmark DTUPhotonics, Networks group

393 Teletrac Engineering & Network Planning Course 34 340

Exercise 1.1 TRAFFIC PROCESS Below we show a trac process and the carried trac, when the number of channels is sucient (n 8) (page 2). Also for n = 4 we show on page 3 the carried trac and the blocked trac. The performance measures time, call, and trac congestion are given in the tables on page 4. (The column with Erlang-B values are obtained from a table or a compyuter program and dealt with in Chap. 4). Note that we only include what happens within the observation period of 40 time units. Within this period we have 32 calls arriving (the rst 3 arrive before the period). For the holding times we include parts of the rst 3 calls (which are not counted) and exclude parts of the last 3 calls (which are counted). On the average the two contributions balance each other. We now assume that the number of channels is n = 6. 1. Draw the carried trac upon the upper grid and the lost trac upon the lower grid on page 2. 2. Fill out the missing information in Table 1 and 2 on page 4.

Updated 2010-02-05

394 12 11 10 5 4 3 2 1 0 6 5 10 9 8 7 14 13 15 16 15 19 18 17 20 21 20 25 23 30 22 24 25 31 30 29 28 35 34 33 32 27 26 35

INDEX

40

INDEX 12 11 10 5 4 3 2 1 0 6 5 10 9 8 7 14 13 15 16 15 19 18 17 20 21 20 25 23 30 22 24 25 31 30 29 28 35 34 33 32 40 27 26 35

395

396

INDEX

Oered trac i 0 1 2 3 4 5 6 7 8 px tpx tpx tpo 0 3 5 8 10 6 4 3 1 po tpo 0 3 10 24 40 30 24 21 8

n8

n=6 Carried trac tpc pc tpc Rejected trac tpr pr tpr Carried trac tpc 1 4 10 9 16 0 0 0 0

n=4 Rejected trac tpr 20 6 7 3 4 0 0 0 0 pr tpr 0 6 14 9 16 0 0 0 0

pc tpc 0 4 20 27 64 0 0 0 0

40 160 160 =4.0 40

40 115 115 =2.875 40

40 45 45 =1.125 40

Table 13.3: For n = 8 there is no blocking and the carried trac (index c) equals the oered trac
(index o). For n = 6, respectively n = 4, some calls are rejected (index r).

MEASURED n 8 6 9, 10, 11, 12, 25, 26, 27, 35 Rejected call numbers none # of calls rejected 0 B 0
1 40

ErlangB E = 0.03 C 0 E 0.03 0.12

8 32

= 0.25

16 40

= 0.40

1.125 4.0

= 0.28

0.31

Table 13.4: Comparison of call congestion B, time congestion E, and trac congestion C.

INDEX

397

398

INDEX

Technical University of Denmark COM DTU, Networks Solution to exercise 2.2 12 11 10 5 4 3 2 1 0 6 5 10 9 8 7 14 13 15 16 15

Teletrac Engineering & Network Planning Course 34 340

27 26 25 22 19 18 17 20 21 20 25 23 30 24 31 30 29 28 35 34 33 32 40 35

INDEX

399

400 Technical University of Denmark DTUPhotonics, Networks group

INDEX Teletrac Engineering & Network Planning Course 34 340

Exercise 1.2

(Exercise)

OFFERED TRAFFIC

1. We consider an Internet-caf. Customers arrive at random. On the average 20 customers e arrive per hour. The average time using a terminal is 15 minutes. Quest. 1.1: Find the oered trac measured in speech minutes during one hour. Quest. 1.2: Find the oered trac measured in erlangs. 2. We consider a cell in a cellular system. There are two arrival processes. Hand-over calls arrive with rate 3 calls per minute, and the mean holding time is 90 seconds. New Calls arrive with 240 calls per hour and the mean holding time is 2 minutes. Quest. 2.1: Find the oered trac for each trac stream and the total oered trac. 3. To a computer system three types of tasks arrive: a) inter-active tasks, b) test tasks, and c) production tasks. All tasks arrive according to a Poisson proces, and the service times are constant. For type a) 15 tasks arrive per minute, and the service time is 1 second. For type b) 3 tasks arrive per minute, and the service time is 5 seconds. For type c) 12 tasks arrive per hour, and the service time is 2 minutes. Quest. 3.1: Find the oered trac for each type and the total oered trac. 4. The arrival process to a systems occurs according to a Poissonproces with rate = 2 calls per time unit. Every call occupies two channels during the whole occupation time, which is exponentially distributed with mean value s = 3 time units. Quest. 4.1: Find the oered trac in calls (connections). Quest. 4.1: Find the oered trac in channels. 5. We consider trac to a digital exchange oering ISDN calls (1 channel per call) and ISDN2 calls (2 channels per call): ISDN calls: Per hour 900 calls arrive and the mean holding time is 2 minutes. ISDN2 calls: Per minute 2 calls arrive and the mean holding time is 150 seconds.

INDEX

401

Quest. 5.1: Find the oered trac (measured in channels) for each type and the total oered trac. 6. A digital 2.048 Mbps (Mbps = Mega bits per second) link is on the average oered 128 packets per second. A packet contains on the average 1500 bytes (1 byte = 8 bits). Quest. 6.1: Find the utilisation
20090205

of the link.

402 Technical University of Denmark DTUPhotonics, Networks group

INDEX Teletrac Engineering & Network Planning Course 34 340

Solution to Exercise 1.2: OFFERED TRAFFIC Question 1: During one hour the number of speech minutes [SM] is: 20 15 minutes = 300 [SM] This is a trac volume and corresponds to 300/60 = 5 [Eh] (erlanghours). Using minutes as time unit the oered trac in erlang (1.2) becomes: A= 20 15 = 5 [erlang] 60

in agreement with that the trac volume per hour is 5 [Eh]. Question 2: Using the time unit [minutes] we get Hand-over trac: Aho = 3 New calls trac: Thus the total oered trac becomes: A = 4.5 + 8 = 12.5 [erlang] We get of course the same result using for example hours or seconds as time unit Question 3: Using minutes as time unit we get: Type a: Type b: Type c: Total: Aa = 15 Ab = Ac = At = 3 1 = 0.25 [erlang] 60 5 = 0.25 [erlang] 60 Anew 90 = 4.5 [erlang] 60 240 2 = 8 [erlang] = 60

12 2 = 0.4 [erlang] 60 0.9 [erlang]

INDEX So the utilization of the system is Question 4: The oered trac in calls becomes, using the same time unit: Acalls = 2 3 = 6 [erlang (calls)] As every call uses 2 channels the oered trac in channels becomes: Achannels = 6 2 = 12 [erlang (channels)] Question 5: = 0.9.

403

We want to nd the oerd trac in the unit [channels]. We nd using the time unit [minutes]: AISDN = 900 2 60 = 30 [erlang]

AISDN2 = 2 Atotal Question 6:

150 2 = 10 [erlang] 60 = 40 [erlang]

= 30 + 10

Per second the oered trac in bits per second becomes: A = 128 packet byte bit 1500 8 = 1, 536, 000 bits per second. second packet byte =
2008.02.14

Thus the utilization becomes:

1, 536, 000 = 0.75 2, 048, 000
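The arithmetic in all six questions is the single relation A = λ·s evaluated in consistent time units. The following sketch (our own illustration, not part of the original solution; all variable names are ours) reproduces three of the results:

```python
# Sketch: offered traffic A = (arrival rate) x (mean holding time), Eq. (1.2).
# All rates are converted to the time unit "minutes" before multiplying.

def offered_traffic(calls_per_min, mean_holding_min):
    """Offered traffic in erlang."""
    return calls_per_min * mean_holding_min

# Questions 1.1-1.2: Internet cafe, 20 customers/hour, 15-minute sessions.
A_cafe = offered_traffic(20 / 60, 15)                                  # 5 erlang

# Question 2.1: hand-over calls plus new calls in the cellular cell.
A_cell = offered_traffic(3, 90 / 60) + offered_traffic(240 / 60, 2)    # 12.5 erlang

# Question 6.1: utilisation = offered bit rate / link capacity.
rho_link = 128 * 1500 * 8 / 2_048_000                                  # 0.75
```

Changing the time unit (hours, seconds) leaves A unchanged, as noted in Question 2.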


Exercise 2.1 FLAT DISTRIBUTION

We consider the following hyper-exponential distribution:

    F(t) = (1/10) (1 − e^{−t/7}) + (9/10) (1 − e^{−3t}),   t ≥ 0.

1. Derive the mean value and the variance of the distribution.

2. Find the remaining life time distribution as a function of the actual age x.

3. Find the mean value m_{1,r}(x) of the remaining life time as a function of the actual age x. Draw a graph of m_{1,r}(x) as a function of x. Find the upper limit of m_{1,r}(x), and give an explanation of this value.

4. Show that the median of the distribution is 0.2672, and calculate the traffic load from the shortest half of all jobs. (The median of a distribution function is the value for which the distribution function takes the value 0.5. Half the observations will be larger and the other half shorter than this value.) The following integrals are given:

    ∫ x e^{ax} dx = e^{ax} (a x − 1) / a²,
    ∫₀ᵗ x e^{−ax} dx = −(e^{−at}/a²)(a t + 1) + 1/a².

5. Find the distribution function for the remaining life time from a random point of time, and find the mean value of this distribution.


Solution to Exercise 2.1:

The hyper-exponential distribution given is a weighted sum of two exponential distributions with mean values m₁ = 7, respectively m₁ = 1/3.

[Phase diagram: with probability 1/10 choose an exponential phase with rate 1/7, with probability 9/10 an exponential phase with rate 3.]

Question 1: Expressions for mean value, second moment, variance, and form factor are given in (2.67), respectively (2.68):

    m₁ = (1/10) · 7 + (9/10) · (1/3) = 1,
    m₂ = (1/10) · 2 · 49 + (9/10) · (2/9) = 10,
    σ² = m₂ − m₁² = 9,
    ε  = m₂ / m₁² = 10,

where the numerator is the second (non-central) moment, and the denominator is the square of the mean value.

Question 2: The distribution function of the remaining life time t, conditioned on an actual age x, is given by (2.18), and the corresponding density function is given by (2.19):

    F(x+t | x) = [F(x+t) − F(x)] / [1 − F(x)],   t, x ≥ 0,
    f(x+t | x) = f(x+t) / [1 − F(x)],            t, x ≥ 0.

The distribution function is given in the text of the exercise, and we find the density function:

    f(t) = (1/10) (1/7) e^{−t/7} + (9/10) · 3 e^{−3t},

    f(x+t | x) = [(1/10)(1/7) e^{−(x+t)/7} + (9/10) · 3 e^{−3(x+t)}] / [(1/10) e^{−x/7} + (9/10) e^{−3x}]

               = [k₁ (1/7) e^{−t/7} + k₂ · 3 e^{−3t}] / (k₁ + k₂),

where

    k₁ = (1/10) e^{−x/7},
    k₂ = (9/10) e^{−3x}.

Here k₁ is the probability that we choose the upper branch times the probability that its duration is longer than x. In a similar way, k₂ is the probability that we choose the lower branch times the probability that its duration is longer than x. The distribution of the remaining life time is thus also a hyper-exponential distribution, composed of the same two exponential distributions as above, but with weight factors which depend on x. F(x+t | x) can be written in a similar way:

    F(x+t | x) = [k₁ (1 − e^{−t/7}) + k₂ (1 − e^{−3t})] / (k₁ + k₂).

Question 3: The mean value of the remaining life time for a given x is obtained by exploiting that we know the mean value of a hyper-exponential distribution (2.67):

    m_{1,r}(x) = [k₁/(k₁+k₂)] · 7 + [k₂/(k₁+k₂)] · (1/3).

For x = 0 we get the same result as in Question 1. For increasing x, k₁/(k₁+k₂) converges to 1: there is an increasing probability that the observation is from the exponential distribution which has the mean value 7:

    lim_{x→∞} m_{1,r}(x) = 7.

[Graph: m_{1,r}(x) increases from 1 at x = 0 towards the asymptote 7.]

Question 4: The median of the distribution is numerically found to be m_e = 0.2672, as the distribution function for this value equals 1/2. This means that half of all observations are less than the median. The traffic load from the shortest half of all holding times is obtained from (2.30) for x = 0.2672 and the mean value m₁ = 1:

    ρ_x = ∫₀ˣ t f(t) dt / m₁
        = ∫₀ˣ t [(1/10)(1/7) e^{−t/7} + (9/10) · 3 e^{−3t}] dt
        = 1 − (7/10) e^{−x/7} (x/7 + 1) − (3/10) e^{−3x} (3x + 1),

using the integrals given in the exercise. For x = 0.2672 we get:

    ρ_x = 0.0580.

Thus the shortest 50% of all jobs only contribute to the load with 5.8%.

Question 5: The density function of the remaining life time from a random point of time is given by (2.32):

    v(t) = [1 − F(t)] / m₁
         = (1/10) e^{−t/7} + (9/10) e^{−3t}.

The distribution function then becomes:

    V(t) = ∫₀ᵗ v(u) du
         = (7/10) (1 − e^{−t/7}) + (3/10) (1 − e^{−3t}).

That is, we again get a hyper-exponential distribution. In comparison with the original distribution we get a weighting of the same two exponential distributions, but the weighting factors are proportional to the contributions to the mean value from the two phases in the original distribution (see Question 1). The mean value becomes (2.34):

    m_{1,v} = m₂ / (2 m₁) = 10/2 = 5,

which also is obtained from the above hyper-exponential distribution:

    m_{1,v} = (7/10) · 7 + (3/10) · (1/3) = 5.

2010-02-15

Exercise 2.4 (exam 1988)

COX LIFE-TIME DISTRIBUTION

We consider the following Cox-2 distribution, which has the same rate λ in both phases:

[Phase diagram: phase 1 with rate λ; after phase 1 the process exits with probability 1−p, or continues to phase 2, also with rate λ, with probability p.]

1. Show that the distribution function is given by:

    F(t) = 1 − e^{−λt} − p λt e^{−λt},   t ≥ 0,

and find the density function.

2. Find the non-central moments m_i of the distribution.

3. Find the distribution of the remaining life time at a random point of time.

4. Find the death rate as a function of the actual age.

2009-02-8


Solution to Exercise 2.4: (exam 1988)

Question 1: The phase diagram is easily transformed into a diagram which is a combination in parallel of an exponential distribution (weight 1−p) and an Erlang-2 distribution (weight p). The distribution function F(t) becomes (2.61):

    F(t) = (1−p) F₁(t) + p F₂(t),

where F₁(t) is the distribution function of an exponential distribution and F₂(t) is the distribution function of an Erlang-2 distribution. F₁(t) is given by (2.3):

    F₁(t) = 1 − e^{−λt}.

The distribution F₂(t) can be obtained in the following ways:

1. F₂(t) is obtained from (2.46), where the density function of an Erlang-k distribution is given by:

    f_k(t) dt = λ (λt)^{k−1} / (k−1)! · e^{−λt} dt,   λ > 0, t ≥ 0.

For k = 2 we get the following result:

    f₂(t) = λ² t e^{−λt},

    F₂(t) = ∫₀ᵗ f₂(u) du = ∫₀ᵗ λ² u e^{−λu} du = [−(1 + λu) e^{−λu}]₀ᵗ,

    F₂(t) = 1 − e^{−λt} − λt e^{−λt}.

2. F₂(t) is obtained from (2.47):

    F₂(t) = Σ_{j=k}^{∞} (λt)^j / j! · e^{−λt} = 1 − e^{−λt} Σ_{j=0}^{k−1} (λt)^j / j!,

as Σ_{j=0}^{∞} (λt)^j / j! = e^{λt}. For k = 2 we have:

    F₂(t) = 1 − e^{−λt} − λt e^{−λt}.

Thus F(t) is given by:

    F(t) = (1−p)(1 − e^{−λt}) + p (1 − e^{−λt} − λt e^{−λt}) = 1 − e^{−λt} − p λt e^{−λt}.

The density function is obtained in a similar way, or by differentiating the distribution function:

    f(t) = (1−p) λ e^{−λt} + p λ (λt) e^{−λt}
         = λ e^{−λt} − p λ e^{−λt} + p λ (λt) e^{−λt},   t ≥ 0.

Question 2: By exploiting the theory for parallel/serial combination of random variables we get the (non-central) moments m_i (Sec. 2.3):

    m_i = (1−p) m_i(exponential) + p m_i(Erlang-2)
        = (1−p) · i!/λ^i + p · (i+1)!/λ^i
        = (i!/λ^i) (1 + p·i),

    m₁ = (1 + p)/λ,
    m₂ = (2 + 4p)/λ².

The moments may of course be obtained from (2.5), but this is not the intention of the question. It is sufficient to calculate the first two moments.

Question 3: We want to find the distribution of the remaining life time at a random point of time (either density function or distribution function). The density function is obtained from (2.32):

    v(t) = [1 − F(t)] / m₁
         = (e^{−λt} + p λt e^{−λt}) / [(1 + p)/λ]
         = λ (e^{−λt} + p λt e^{−λt}) / (1 + p).

(This is a sufficient answer.) The mean value of this becomes (2.33):

    m_{1,v} = m₂ / (2 m₁) = [(2 + 4p)/λ²] · λ/(2(1+p)) = (1 + 2p) / (λ(1+p)).

The distribution function is obtained as follows:

    V(t) = ∫₀ᵗ v(u) du
         = 1 − e^{−λt} − [p/(1+p)] λt e^{−λt}.

The various expressions are seen to be in agreement with the special cases p = 0 (exponential distribution) and p = 1 (Erlang-2 distribution).

Question 4: The death rate as a function of the actual age becomes (2.21):

    μ(t) = f(t) / [1 − F(t)]
         = [λ e^{−λt} − p λ e^{−λt} + p λ (λt) e^{−λt}] / [e^{−λt} + p λt e^{−λt}]
         = λ (1 − p + p λt) / (1 + p λt).

As a control, we again have:

    p = 0:  μ(t) = λ                  (exponential distribution),
    p = 1:  μ(t) = λ²t / (1 + λt)    (Erlang-2 distribution).

2009-02-14
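The Cox-2 results above hold for any λ and p. The sketch below (our own; λ = 2 and p = 0.4 are arbitrary sample values, and the midpoint-sum step size is our choice) spot-checks the distribution function, the mean, and the death rate:

```python
from math import exp

lam, p = 2.0, 0.4    # arbitrary sample parameters

def F(t):
    """F(t) = 1 - e^(-lam t) - p lam t e^(-lam t), Question 1."""
    return 1 - exp(-lam * t) - p * lam * t * exp(-lam * t)

def f(t):
    """Density f(t) = lam e^(-lam t) (1 - p + p lam t)."""
    return lam * exp(-lam * t) * (1 - p + p * lam * t)

def death_rate(t):
    """mu(t) = lam (1 - p + p lam t) / (1 + p lam t), Question 4."""
    return lam * (1 - p + p * lam * t) / (1 + p * lam * t)

m1 = (1 + p) / lam    # Question 2: first moment

# midpoint-rule check of the mean value against the closed form
dt, steps = 1e-3, 30000
m1_num = dt * sum((i + 0.5) * dt * f((i + 0.5) * dt) for i in range(steps))
```

For large t the death rate approaches λ, as the surviving observations are dominated by the Erlang-2 tail.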


Exercise 3.1 POISSON PROCESS: SUPERPOSITION THEOREM

[Diagram: local station 1 (rate λ₁) and local station 2 (rate λ₂) both send calls to the transit station.]

A transit station receives calls from two local stations. The arrival processes from the two local stations are independent, and both are Poisson processes with constant arrival rates λ₁, respectively λ₂. Show that the total arrival process to the transit station is a Poisson process with intensity λ = λ₁ + λ₂ by considering the:

1. number representation,
2. interval representation,
3. intensities in short time intervals.


Solution to Exercise 3.1:

Superposition theorem: Superposition of more Poisson processes results in a Poisson process.

Question 1: Number representation. For a Poisson process, the number of events within a fixed time interval is Poisson distributed (3.36). In a fixed time interval t we have N_{t,1} calls from local exchange 1 and N_{t,2} calls from local exchange 2. Both N_{t,1} and N_{t,2} are Poisson distributed random variables. We shall show that the total number of calls N_t = N_{t,1} + N_{t,2} is a random variable which also is Poisson distributed. This we will prove in two ways.

(1) Convolution:

    p{N_t = n} = Σ_{i=0}^{n} p{N_{t,1} = i} · p{N_{t,2} = n−i}

               = Σ_{i=0}^{n} [(λ₁t)^i / i!] e^{−λ₁t} · [(λ₂t)^{n−i} / (n−i)!] e^{−λ₂t}

               = e^{−(λ₁+λ₂)t} (1/n!) Σ_{i=0}^{n} [n! / (i! (n−i)!)] (λ₁t)^i (λ₂t)^{n−i}

               = e^{−(λ₁+λ₂)t} (1/n!) Σ_{i=0}^{n} C(n,i) (λ₁t)^i (λ₂t)^{n−i}.

Using the binomial expansion:

    (a + b)^n = Σ_{i=0}^{n} C(n,i) a^i b^{n−i},

we get the Poisson distribution:

    p{N_{t,s} = n} = [(λ₁+λ₂)t]^n / n! · e^{−(λ₁+λ₂)t}.

The parameter of this Poisson distribution is the sum of the two local exchange parameters, which is what we should show.

(2) Probability generating functions (pgf) (not covered in the textbook): When we deal with discrete distributions it is easier to use probability generating functions. The probability generating function of the Poisson distribution is:

    f(s) = Σ_{i=0}^{∞} p{N = i} s^i = Σ_{i=0}^{∞} e^{−λt} (λts)^i / i! = e^{λt(s−1)}.

There is a one-to-one relationship between a distribution and its probability generating function, and the probability generating function of the sum of two independent random variables is obtained as the product of the probability generating functions of the two random variables. We see that the sum of two Poisson distributions with parameters λ₁t, respectively λ₂t, becomes a Poisson distribution with parameter (λ₁+λ₂)t:

    e^{λ₁t(s−1)} · e^{λ₂t(s−1)} = e^{(λ₁+λ₂)t(s−1)}.

Question 2: Interval representation. In a Poisson process, the interval from any point of time to the next event is exponentially distributed. The two sub-processes are independent, and the next call in the total process appears when the first call appears at any of the two local exchanges. Therefore, we shall show that the smallest of two exponentially distributed random variables also is an exponentially distributed random variable. This has already been done in Sec. 2.2.7, formula (2.41). The distribution function of the smallest becomes (2.39):

    F(t) = 1 − Π_{i=1}^{2} [1 − F_i(t)]
         = 1 − {1 − (1 − e^{−λ₁t})}{1 − (1 − e^{−λ₂t})},

    F(t) = 1 − e^{−(λ₁+λ₂)t}.   q.e.d.

Question 3: By intensity arguments. A Poisson process is also characterized by the formulæ (3.7)–(3.9), where the factor of proportionality is constant:

    P(i ≥ 2, Δt) = P{N_{t+Δt} − N_t ≥ 2} = o(Δt),
    P(i = 1, Δt) = P{N_{t+Δt} − N_t = 1} = λΔt + o(Δt),
    P(i = 0, Δt) = P{N_{t+Δt} − N_t = 0} = 1 − λΔt + o(Δt).

If we consider a small interval Δt, then the probability of no events in the total process within this interval is equal to the product of the probabilities of no events in each of the two independent sub-processes within this interval:

    p(i = 0, Δt) = p₁(i = 0, Δt) · p₂(i = 0, Δt)
                 = (1 − λ₁Δt + o(Δt)) (1 − λ₂Δt + o(Δt))
                 = 1 − (λ₁+λ₂)Δt + o(Δt).

The probability of getting just one event in the total process becomes in a similar way:

    P(i = 1, Δt) = P₁(i = 0, Δt) P₂(i = 1, Δt) + P₁(i = 1, Δt) P₂(i = 0, Δt)
                 = (1 − λ₁Δt + o(Δt)) (λ₂Δt + o(Δt)) + (λ₁Δt + o(Δt)) (1 − λ₂Δt + o(Δt))
                 = (λ₁+λ₂)Δt + o(Δt).

More than one event can take place in more ways, but the total probability is in all cases equal to o(Δt), and the sum of products also becomes o(Δt):

    P(i ≥ 2, Δt) = o(Δt).

We thus see that the total process corresponds to a Poisson process with intensity:

    λ_s = λ₁ + λ₂.

Remark 1: Mathematically we have that o(Δt) + o(Δt) = o(Δt), and we can disregard terms of higher power o(Δt)^i, i ≥ 2.

Remark 2: Physically, it is obvious that the superposition theorem (Exercise 3.1) and the splitting theorem (Exercise 3.2) must be valid. As an example, the number of particles registered by a Geiger–Müller counter is Poisson distributed. The space angle covered by the counter depends on the distance to the radioactive material, but does not influence the type of distribution. Processes influenced by a large number of independent factors converge to Poisson processes.

2009-02-18
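The superposition theorem can also be illustrated by simulation. A sketch (our own; the rates λ₁ = 1.5, λ₂ = 2.5, horizon and run count are arbitrary choices): the merged count over [0, t] should have mean and variance both close to (λ₁+λ₂)t, as a Poisson distribution requires.

```python
import random

random.seed(1)
lam1, lam2, t, runs = 1.5, 2.5, 10.0, 20000

def poisson_count(lam, horizon):
    """Number of events in [0, horizon] of a Poisson process with rate lam,
    generated from its exponentially distributed inter-arrival times."""
    n, clock = 0, random.expovariate(lam)
    while clock <= horizon:
        n += 1
        clock += random.expovariate(lam)
    return n

counts = [poisson_count(lam1, t) + poisson_count(lam2, t) for _ in range(runs)]
mean = sum(counts) / runs                            # near (lam1 + lam2) * t = 40
var = sum((c - mean) ** 2 for c in counts) / runs    # Poisson: variance close to mean
```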


Exercise 3.2 POISSON PROCESS: SPLITTING THEOREM

[Diagram: the transit station routes each call to local station 1 with probability p and to local station 2 with probability 1−p.]

Calls arrive from a transit station according to a Poisson process with constant rate λ to one out of two local stations. A random call chooses, independently of all other calls, local station 1 with probability p and local station 2 with probability q = 1−p. Show that the two sub-processes both are Poisson processes:

1. By number representation. (It is known that the number of calls in a fixed time interval in the original process is Poisson distributed.)
2. By interval representation. (It is known that the inter-arrival time distribution in the original process is an exponential distribution.)
3. By considering intensities within short time intervals Δt.
4. Which type of arrival process do we get to local station 1 if every second call is routed to this station?


Solution to Exercise 3.2:

Splitting theorem (special case of Raikov's theorem): A random splitting of a Poisson process results in sub-processes which are also Poisson processes (Sec. 3.6.2).

Question 1: Number representation. During a time interval t the probability of observing just i events in the original process is given by the Poisson distribution:

    P{N_{t,s} = i} = (λt)^i / i! · e^{−λt},   λ > 0, t ≥ 0, i = 0, 1, ….

Each individual event has, independently of all other events, the probability p of choosing direction 1. For a fixed value of i, we therefore have a binomially distributed number n of events in direction 1:

    B{n | i} = C(i,n) p^n (1−p)^{i−n},   n ≤ i.

The probability of getting exactly n events in direction 1 is then obtained by summation over all feasible values of i ≥ n:

    P{N_{t,1} = n} = Σ_{i=n}^{∞} B{n | i} · P{N_{t,s} = i}

                   = Σ_{i=n}^{∞} C(i,n) p^n (1−p)^{i−n} · (λt)^i / i! · e^{−λt}

                   = [(pλt)^n / n!] e^{−λt} Σ_{i=n}^{∞} [λt(1−p)]^{i−n} / (i−n)!

                   = (pλt)^n / n! · e^{−pλt},

which is the Poisson distribution with parameter pλt. The calculations are of course similar for direction 2.

Question 2: Interval representation. Let us imagine we are at local station 1 and wait for a call from the transit station. We have the following possible courses of events:

- The first call is in direction 1. We then wait a time interval which is exponentially distributed. The probability of this outcome is p = p(1).
- The first call is in direction 2, but the second call is in direction 1. We then wait an Erlang-2 distributed time interval (sum of two exponentially distributed time intervals). The probability of this outcome is (1−p) p = p(2).
- …
- The k'th call is in direction 1, after all the k−1 first events were in direction 2. We wait an Erlang-k distributed time interval. The probability of this outcome is (1−p)^{k−1} p = p(k). (Here p(k) is a geometric distribution.)

The Erlang-k distribution has the density function (3.34):

    g_k(t) = λ (λt)^{k−1} / (k−1)! · e^{−λt}.

Therefore, the density function of the time until the first event becomes:

    f(t) = Σ_{i=1}^{∞} g_i(t) p(i)
         = Σ_{i=1}^{∞} λ (λt)^{i−1}/(i−1)! · e^{−λt} (1−p)^{i−1} p
         = pλ e^{−λt} Σ_{i=1}^{∞} [λt(1−p)]^{i−1} / (i−1)!
         = (pλ) e^{−(pλ)t},

which just is the density function of an exponential distribution with intensity pλ. The arrival process to local station 1 is thus a Poisson process with intensity pλ. The interval representation thus shows that a weighted sum of Erlang-k distributions becomes an exponential distribution if the Erlang-k distribution has a weighting factor equal to the k'th term of a geometric series, and the summation is over all k ≥ 1 (Sec. 2.3.3).

Extra: The same result can be obtained by using Laplace transforms. The Erlang-k distribution has the Laplace transform:

    Φ(s) = (λ/(λ+s))^k.

The waiting time until the first event in direction 1 then becomes (parallel combination of stochastic variables):

    Φ₁(s) = Σ_{i=1}^{∞} (λ/(λ+s))^i (1−p)^{i−1} p.

This is an infinite summation where the terms make up a geometric series. The sum becomes:

    Φ₁(s) = p · [λ/(λ+s)] / [1 − (1−p) λ/(λ+s)] = pλ / (s + pλ),

which is just the Laplace transform of an exponential distribution with parameter pλ. The proof is of course carried through for direction 2 in a similar way.

Question 3: Intensity considerations. The probability of getting an event in the main process within a short time interval of duration Δt is:

    λΔt + o(Δt).

If we have an event in the main process, then the probability of observing this event in sub-process 1 is equal to p. The unconditional probability of getting an event in sub-process 1 within a short time interval of duration Δt then becomes:

    pλΔt + o(Δt).

The probability of getting more than one event in sub-process 1 within Δt is p · o(Δt) = o(Δt). Thus the probability of no events in sub-process 1 becomes:

    1 − pλΔt + o(Δt).

This shows us that we in direction 1 have a Poisson process with intensity pλ.

Question 4: If just every second event chooses direction 1, then the distance (inter-arrival time) between events in both directions becomes Erlang-2 distributed. Thus we do not have a Poisson process.

Remark: From a physical point of view it is obvious that the superposition and splitting theorems (Exercises 3.1 and 3.2) must be valid. The number of particles counted by e.g. a Geiger–Müller meter is Poisson distributed. The space angle the meter covers depends on the distance of the meter to the radioactive source, but has no influence upon the type of distribution of the number of events. The cosmic background radiation also follows a Poisson process, and the splitting theorem shows that we just should deduct this from the total observed value. Processes which are caused by many independent factors will converge to a Poisson process.

2009-02-16
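The splitting theorem lends itself to the same kind of simulation check (a sketch of our own; λ = 4, p = 0.3, horizon and run count are arbitrary choices): the sub-stream count over [0, t] should have mean and variance close to pλt.

```python
import random

random.seed(2)
lam, p, t, runs = 4.0, 0.3, 10.0, 10000

def split_count(lam, p, horizon):
    """Events in [0, horizon] routed to direction 1 by independent coin flips
    applied to a Poisson stream with rate lam."""
    n, clock = 0, random.expovariate(lam)
    while clock <= horizon:
        if random.random() < p:
            n += 1
        clock += random.expovariate(lam)
    return n

counts = [split_count(lam, p, t) for _ in range(runs)]
mean = sum(counts) / runs                            # near p * lam * t = 12
var = sum((c - mean) ** 2 for c in counts) / runs    # Poisson: variance close to mean
```

Routing every second call instead of flipping a coin (Question 4) would make the inter-arrival times Erlang-2, and the mean/variance agreement would fail.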

424 Technical University of Denmark DTUPhotonics, Networks group

INDEX Teletrac Engineering & Network Planning Course 34 340

Exercise 4.1 ERLANG'S B-FORMULA

At a shopping center there is a gambling hall. People passing by decide at random and independently of each other to enter and play, but if all gambling machines are occupied, then they pass on (alternative formulation: an Internet café). During opening hours, on the average 40 people per hour enter to play. People choose the first idle machine from the entrance and play on the average 6 minutes (exponentially distributed). A gambling machine makes on the average an income equal to 100 øre per minute it is used. The total expenses for rent of rooms and maintenance are 20 kr. per hour per machine, independent of whether it is used or not. Coins of an equivalent value equal to 100 øre are used. (In the following, Erlang's B-formula may be calculated using the recursion formula, tables, or computers.)

1. Calculate the offered traffic.

2. What is the net income when the number of machines is 4?

3. Is it profitable to have more or fewer machines than 4? What is the optimal number of machines?

In the following we assume the number of machines is 4.

4. How many coins may the owner expect to collect from each machine after 12 hours of opening?

5. How long does it on the average take before the last customer leaves after closing time, when there are 1, 2, 3 or 4 people playing at closing time?

6. What is the proportion of time when just one machine (a random one) is idle?

7. What is the proportion of time the machine farthest away from the entrance is idle?


Solution to Exercise 4.1:

The Erlang B-formula is derived in the textbook in Sec. 4.3.

Question 1: The offered traffic is equal to the average number of calls per mean holding time (1.2): A = λ/μ. Four potential customers arrive per six minutes:

    A = 4 erlang.

Question 2: Using a table of Erlang's B-formula (see table in the collection of exercises) we find the carried traffic:

    Y = A [1 − E_{1,n}(A)] = 4 [1 − E_{1,4}(4)] = 4 [1 − 0.3107],
    Y = 2.757 erlang.

The gross income is 60 kr. per erlang-hour. Therefore, the net income becomes:

    R = (2.757 · 60 − 4 · 20) kr/hour,
    R = 85.42 kr/hour.

Question 3: We evaluate the net income for different numbers of machines (see table in the collection of exercises, or calculate the Erlang-B formula using the recursion formula (4.29)):

    Number of      Carried        Income per hour [kr.]
    machines n     traffic Y      Gross      Net
    1              0.8000          48.00     28.00
    2              1.5385          92.31     52.31
    3              2.1972         131.83     71.83
    4              2.7573         165.42     85.42
    5              3.2037         192.22     92.22
    6              3.5313         211.88     91.88

From the table we see that the optimal number of machines is 5. When adding one machine, the income should increase by at least 20 kr./hour. This corresponds to the carried traffic increasing by at least 1/3 erlang when adding one machine (Moe's principle). The optimal number of machines can thus be obtained directly from a table of the improvement function.

Question 4: With sequential hunting, the traffic carried by the individual machines becomes (4.16):

    a_i = F_{1,i−1}(A) = A [E_{1,i−1}(A) − E_{1,i}(A)].

For a period of 12 opening hours we get:

    a₁ = 0.8000 erlang ≈ 576 coins,
    a₂ = 0.7385 erlang ≈ 532 coins,
    a₃ = 0.6587 erlang ≈ 474 coins,
    a₄ = 0.5601 erlang ≈ 403 coins.

Question 5: After closing time the arrival rate is λ = 0. For the departure process we exploit that the exponential distribution has no memory.

a) If only one customer is present at closing hour, then the time until this customer departs (independent of how long the customer has already played) is exponentially distributed with mean value 1/μ = 6 minutes:

    w₁ = 6 minutes.

b) If two customers are present at closing time, then the first one departs according to an exponential distribution with mean value (2μ)⁻¹ = 3 minutes. Subsequently, the last one departs as mentioned under a). The total mean value thus becomes:

    w₂ = 9 minutes.

c) If three customers are present, the first one departs according to an exponential distribution with mean value (3μ)⁻¹ = 2 minutes. Afterwards the sequence is as described under b), and the total mean value becomes:

    w₃ = 11 minutes.

d) In a similar way we derive the average waiting time for the case with four customers present at closing time:

    w₄ = 12.5 minutes.

Question 6: The probability of having just one idle machine is equal to the probability of having just three busy machines. From the truncated Poisson distribution (4.9) we get:

    P(3) = (A³/3!) / Σ_{ν=0}^{4} (A^ν/ν!) = (4/A) E_{1,4}(A) = 0.3107.

The traffic a_i carried by machine i at a given point of time is correlated with the traffic carried by the other machines at the same time. The values given in Question 4 are mean values and cannot be used for solving this question.

Question 7: The last machine is (of course) idle when it is not working. From Question 4 we therefore get:

    P{last machine idle} = 1 − a₄ = 0.4399.

Updated 2005-02-14
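The recursion (4.29) and the dimensioning argument of Questions 2-3 can be sketched in a few lines (our own illustration; income figures as in the exercise: 60 kr gross per erlang-hour, 20 kr cost per machine per hour):

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A) by the numerically stable recursion (4.29)."""
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A = 4.0
carried = [A * (1 - erlang_b(A, n)) for n in range(1, 7)]      # Y for n = 1..6
net = [60 * y - 20 * n for n, y in enumerate(carried, start=1)]
best = max(range(1, 7), key=lambda n: net[n - 1])              # optimal: 5 machines
```

The improvement when going from n to n+1 machines is A[E_{1,n}(A) − E_{1,n+1}(A)]; the last profitable machine is the one for which this exceeds 1/3 erlang, in agreement with the table.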


Exercise 4.4 (exam 1980)

M/E2/2 LOSS SYSTEM

We consider a loss system with two channels (servers). Call attempts arrive according to a Poisson process with intensity λ calls per time unit. The service time is Erlang-2 distributed with intensity 2μ in each of the two phases.

1. Find the offered traffic.

2. Construct a state transition diagram of the system, where a state denotes both the number of calls in the system and the phase of the calls. Apply the states {0, a, b, aa, ab, bb}, where a and b denote the two phases.

3. Find, under the assumption of statistical equilibrium, the state probabilities of the system by exploiting the fact that the truncated Poisson distribution for a given mean holding time is valid for any service time distribution (insensitivity property).

4. The blocking state (both channels occupied) is initiated from either state a or b. For both of these cases, express the (Cox) distribution of the duration of the blocking state by a graphical (phase diagram) representation of the Cox distribution.

5. (Advanced, excluded) Write down the Laplace transform of the distribution of the duration of the periods when both channels are busy.

6. (Advanced) Find the mean value and variance of the number of call attempts which are blocked during a period when both channels are busy.


Solution to Exercise 4.4:

The service time is Erlang-2 distributed with intensity (rate) 2μ in each phase. Therefore, the mean service time becomes:

    s = 1/(2μ) + 1/(2μ) = 1/μ.

Question 1: The offered traffic is equal to the average number of call attempts per mean service time:

    A = λ/μ.

Question 2: The given states define the state transition diagram in a unique way:

[State transition diagram: 0 →(λ) a; a →(2μ) b; b →(2μ) 0; a →(λ) aa; b →(λ) ab; aa →(4μ) ab; ab →(2μ) bb; ab →(2μ) b; bb →(4μ) b.]

Question 3: As the truncated Poisson distribution is valid, we have the following state probabilities:

    p(0) = 1 / (1 + A + A²/2),
    p(1) = p(a) + p(b) = A p(0),
    p(2) = p(aa) + p(ab) + p(bb) = (A²/2) p(0).

For node 0 we have the following node balance equation:

    2μ p(b) = λ p(0)   giving   p(b) = (A/2) p(0),

and thus from the equation for p(1):

    p(a) = (A/2) p(0).

The node balance equation for state aa is as follows:

    4μ p(aa) = λ p(a)   giving   p(aa) = (A²/8) p(0).

In a similar way we get for state ab:

    4μ p(aa) + λ p(b) = (2μ + 2μ) p(ab)   giving   p(ab) = (A²/4) p(0),

and for state bb:

    4μ p(bb) = 2μ p(ab)   giving   p(bb) = (A²/8) p(0).

We notice that the two phases are symmetric. This is not obvious from the beginning. The state ab is in fact composed of the two micro-states ab and ba. Therefore, p(ab) is twice as big as p(aa) and p(bb). Summarising, we have:

    p(0) = 1 / (1 + A + A²/2),
    p(a) = p(b) = (A/2) / (1 + A + A²/2),
    p(aa) = p(bb) = (1/2) p(ab) = (A²/8) / (1 + A + A²/2).

In general there will be (n+1) different (macro-)states with all n channels busy. The macro-state (i, n−i), with i channels in phase a and (n−i) channels in phase b, is made up of C(n,i) different micro-states. The probability of finding the system in state (i, n−i) thus becomes:

    P(i, n−i) = [C(n,i) / Σ_{j=0}^{n} C(n,j)] p(n) = [C(n,i) / 2^n] p(n),

where p(n) is the truncated Poisson distribution (4.9). In a similar way we can derive the state probabilities for all C(n+2, 2) states.

Question 4: Starting in state aa, the blocking period is Cox distributed as follows:

    aa →(4μ) ab →(4μ): with probability 1/2 exit, with probability 1/2 continue to bb →(4μ) exit.

Starting in state ab, which is a subset of the first case:

    ab →(4μ): with probability 1/2 exit, with probability 1/2 continue to bb →(4μ) exit.

Question 5: The Laplace transforms of the two distributions obtained in Question 4 can immediately be written down from the phase diagrams:

    L_aa(s) = (1/2) (4μ/(s+4μ))² + (1/2) (4μ/(s+4μ))³,
    L_ab(s) = (1/2) (4μ/(s+4μ)) + (1/2) (4μ/(s+4μ))².

The number of observations per time unit starting in state aa (see the state transition diagram in Question 2) is λ p(a), and the number of observations per time unit starting in state ab is λ p(b). As we have p(a) = p(b), the average number of observations of the two distributions is the same, and the Laplace transform of the distribution of the duration of the periods when both channels are busy becomes:

    L(s) = (1/2) {L_aa(s) + L_ab(s)}
         = (1/4) (4μ/(s+4μ)) + (1/2) (4μ/(s+4μ))² + (1/4) (4μ/(s+4μ))³.

This corresponds to a Cox distribution which is a weighted sum of the two Cox distributions in Question 4, but we can no longer identify the individual states as they are mixed together. From the Laplace transform it is easy to write down the distribution functions for the cases we consider here.

Question 6: The number of blocked call attempts during a busy period has the following mean value and variance (2.82) & (2.84):

    m_{1,n} = λ m_{1,x},
    σ_n² = λ² σ_x² + λ m_{1,x},

where m_{1,x} and σ_x² are the mean value and variance of the distribution in Question 5. They can be derived by differentiating L(s). As we consider Erlang-k distributions in parallel, they can also be obtained directly:

    m_{1,x} = (1/4) · 1/(4μ) + (1/2) · 2/(4μ) + (1/4) · 3/(4μ) = 1/(2μ).

The second (non-central) moment of an Erlang-k distribution with rate 4μ is (k² + k)/(4μ)². From this we get:

    m₂ = (1/(4μ))² [(1/4) · 2 + (1/2) · 6 + (1/4) · 12] = (13/2) (1/(4μ))²,

    σ_x² = m₂ − m_{1,x}² = (13/2) (1/(4μ))² − (2/(4μ))² = (5/2) (1/(4μ))².

This is of course in agreement with (2.89) and (2.90), as q₁ = 1, q₂ = 3/4, q₃ = 1/4, a₀ = 1, a₁ = 3/4, a₂ = 1/3, a₃ = 0 and λ_j = 4μ for all j. Therefore we have:

    m_{1,n} = λ/(2μ) = A/2,

    σ_n² = λ² (5/2)(1/(4μ))² + λ/(2μ) = 5A²/32 + A/2.

If the busy periods were constant time intervals, then σ_n² would only contribute with A/2, due to the Poisson process. Because the busy periods are stochastic time intervals, we also get a contribution 5A²/32.

If we look for the variance of the number of call attempts blocked during e.g. a busy hour, then we get an additional contribution to the variance because the number of busy periods also is a random variable itself.

Updated: 2004-02-29
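Question 6 boils down to moments of a mixture of Erlang-1/2/3 phases with rate 4μ and weights 1/4, 1/2, 1/4 read off L(s). A sketch (our own; λ = 3 and μ = 1.5 are arbitrary sample values, giving A = 2):

```python
lam, mu = 3.0, 1.5        # arbitrary sample parameters, A = lam/mu = 2
r = 4 * mu                # phase rate in the Cox diagram
w = {1: 0.25, 2: 0.50, 3: 0.25}   # Erlang-k weights from L(s)

# moments of the busy-period length X (Erlang-k: mean k/r, 2nd moment (k^2+k)/r^2)
m1x = sum(wk * k / r for k, wk in w.items())                 # = 1/(2 mu)
m2x = sum(wk * (k * k + k) / r**2 for k, wk in w.items())    # = (13/2)/(4 mu)^2
varx = m2x - m1x**2                                          # = (5/2)/(4 mu)^2

A = lam / mu
mean_blocked = lam * m1x                  # = A/2
var_blocked = lam**2 * varx + lam * m1x   # = 5 A^2 / 32 + A/2
```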


Exercise 4.10 (Exam 1999)

ERLANG'S LOSS SYSTEM

We consider a loss system which has 4 channels and is offered PCT-I traffic. The arrival rate (intensity) is λ = 1 call per time unit, and the mean service time is μ⁻¹ = 2 time units. The system is assumed to be in statistical equilibrium.

1. Find the offered traffic, and set up the state transition diagram of the system.

2. Find the state probabilities, and find the time congestion, the call congestion, and the traffic congestion.

3. Calculate the time congestion using the recursive formula for Erlang's B-formula. The individual steps of the recursion should appear in the answer.

4. Assume random hunting, and find the probability that two specific channels are busy (the remaining channels may be busy or idle).

5. How many channels do we need if the system is dimensioned with an improvement value equal to 0.20? (Apply the results from Question 3.)

6. Find the distribution of the number of calls which are lost during a period when all 4 channels are busy.


Solution to Exercise 4.10 (Exam 1999)

The system considered corresponds to Erlang's loss system, which is dealt with in Chap. 4.

Question 1: The offered traffic A is:

A = λ/μ = 1 · 2 = 2 [erlang] .

The state transition diagram becomes (cf. Fig. 4.2):
(State transition diagram: states 0–4; arrival rate λ = 1 in every state; departure rates μ, 2μ, 3μ, 4μ = 1/2, 2/2, 3/2, 4/2.)

Question 2: If we denote the relative state probabilities by q(i) and the absolute state probabilities by p(i), then we get:

q(0) = 1      p(0) = 3/21 = 0.1429
q(1) = 2      p(1) = 6/21 = 0.2857
q(2) = 2      p(2) = 6/21 = 0.2857
q(3) = 4/3    p(3) = 4/21 = 0.1905
q(4) = 2/3    p(4) = 2/21 = 0.0952

Total = 7     Total = 1.0000

We would of course obtain the same state probabilities by inserting the actual parameters into the truncated Poisson distribution (4.9). The time congestion E becomes:

E = p(4) = 2/21 .


As the arrival process is a Poisson process, the PASTA property is valid, and time congestion, call congestion and traffic congestion are all identical (Sec. 4.3.2):

E = B = C = 2/21 .

We may of course calculate B and C explicitly, but at the exam this would be a waste of time.

Question 3: By applying the recursion formula for calculating Erlang's B-formula (4.29):

E_x(A) = A · E_{x−1}(A) / (x + A · E_{x−1}(A)) ,   E_0(A) = 1 ,   x = 1, 2, . . . ,

we find, letting A = 2 erlang:

x = 1:  E_1(2) = 2 · 1/(1 + 2 · 1) = 2/3 ,
x = 2:  E_2(2) = 2 · (2/3)/(2 + 2 · (2/3)) = 2/5 ,
x = 3:  E_3(2) = 2 · (2/5)/(3 + 2 · (2/5)) = 4/19 ,
x = 4:  E_4(2) = 2 · (4/19)/(4 + 2 · (4/19)) = 2/21 ,
x = 5:  E_5(2) = 2 · (2/21)/(5 + 2 · (2/21)) = 4/109 ,

where for later use in Question 5 we also calculate the blocking probability for 5 channels. The blocking probability for 4 channels of course corresponds to the result obtained in Question 2. As a control we may also verify that the values are in agreement with the table of Erlang's B-formula in the collection of exercises.

Question 4: We may by elementary probability theory carry through the derivations behind the Palm-Jacobæus formula. If two arbitrary channels are occupied, then the probability that it is just our two channels is 1/6 (the number of different ways we may choose 2 out of 4 channels). If three channels are busy, then the probability that our two channels are among these is 1/2. If all four channels are busy, then our two channels will always be busy. Therefore, we get:

H(2) = (1/6) · p(2) + (1/2) · p(3) + p(4)
     = (1/6) · (6/21) + (1/2) · (4/21) + 2/21
     = 5/21 .
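The recursion above is easy to check numerically. A small sketch (the function name `erlang_b` is our own) using exact rational arithmetic:

```python
from fractions import Fraction

def erlang_b(A, n):
    # E_x(A) = A*E_{x-1}(A) / (x + A*E_{x-1}(A)), starting from E_0(A) = 1
    E = Fraction(1)
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A = Fraction(2)
for n in range(1, 6):
    print(n, erlang_b(A, n))   # 2/3, 2/5, 4/19, 2/21, 4/109 as above
```

Using `Fraction` reproduces the exact values 2/21 and 4/109 instead of floating-point approximations.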

It was a general error at the exam to assume independence between channels and use the probability that a random channel is busy.

Question 5: The traffic lost A_ℓ, the traffic carried Y, and the traffic F_n(A) carried additionally when the number of channels is increased from n to n+1 (which is equal to the traffic a_{n+1} carried by channel n+1 in a system with sequential hunting) become:

n    A_ℓ      Y        F_n(A) = a_{n+1}
0    2.0000   0.0000   0.6667
1    1.3333   0.6667   0.5333
2    0.8000   1.2000   0.3789
3    0.4211   1.5789   0.2306
4    0.1905   1.8095   0.1171
5    0.0734   1.9266

As:

F_{1,n−1}(A) > F_B = 0.20 ≥ F_{1,n}(A) ,   i.e.   0.2306 > 0.20 ≥ 0.1171 ,

we notice that we need n = 4 channels, because we choose an integral number of channels on the safe side (corresponding to an improvement value less than or equal to the dimensioning criterion). The same result may of course be obtained by using the table of the improvement function of Erlang's B-formula, given in the collection of exercises. Fig. 4.5 gives the same result for the curve A = 2, but it is more difficult to read accurately.

Question 6: The duration of the state all channels busy is exponentially distributed with intensity 4μ = 2 [calls/time unit]. New call attempts arrive according to a Poisson process with rate (intensity) λ = 1 [calls/time unit]. If we are in state 4 channels busy (busy period), then the next event is either:

- a call attempt, which is blocked, or

- termination of the busy period.


From Sec. 2.2.7 (the minimum of k exponentially distributed stochastic variables) we know the probabilities of the next event:

p(call attempt)    = λ/(λ + 4μ)  = 1/3 ,
p(call terminates) = 4μ/(λ + 4μ) = 2/3 .

Because of the exponentially distributed time intervals the process is a Markov process without memory, and the above probabilities are thus independent of the number of calls already blocked. The probability that i call attempts are blocked during a busy period therefore becomes geometrically distributed:

p(i) = (1/3)^i · (2/3) ,   i = 0, 1, 2, . . .

Comments: This version of the geometric distribution begins with the value zero, and thus the mean value and variance become (cf. the text of Table 3.1):

m1 = 1/(2/3) − 1 = 1/2 ,
σ² = (1 − 2/3)/(2/3)² = 3/4 .

On the average we stay in state [4] half a time unit. On the average one call arrives per time unit. Therefore, the mean value 1/2 is correct.

Alternative solution 1: Direct calculation

The distribution of the number of calls during a busy period with all four channels occupied can also be derived directly. The busy period has the density function:

f(t) dt = nμ e^{−nμt} dt = 2 e^{−2t} dt .

If the busy period has a duration inside the interval (t, t+dt), then the number of calls during this (constant) time interval is Poisson distributed:

p(i | t) = (λt)^i/i! · e^{−λt} = t^i/i! · e^{−t} ,   i = 0, 1, . . .

The unconditional distribution of the number of call attempts then becomes:

p(i) = ∫₀^∞ p(i | t) f(t) dt
     = ∫₀^∞ t^i/i! · e^{−t} · 2 e^{−2t} dt
     = (2/i!) ∫₀^∞ t^i e^{−3t} dt
     = 2/(i! · 3^{i+1}) ∫₀^∞ (3t)^i e^{−3t} d(3t)
     = 2/(i! · 3^{i+1}) · i!
     = 2/3^{i+1} = (2/3) · (1/3)^i ,   i = 0, 1, . . .   q.e.d.

where we exploit the definition of the gamma function (Γ(i+1) = i!).
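The geometric result can also be checked by simulating the race between arrivals (rate λ = 1) and the busy-period termination (rate 4μ = 2). The script below is our own sketch:

```python
import random

random.seed(1)
lam, term = 1.0, 2.0   # arrival rate and termination rate of the busy period

def blocked_calls():
    # each event is an arrival w.p. lam/(lam+term) = 1/3, else the period ends
    i = 0
    while random.random() < lam / (lam + term):
        i += 1
    return i

N = 100_000
mean = sum(blocked_calls() for _ in range(N)) / N
print(round(mean, 2))   # theory: m1 = 1/2
```

The simulated mean number of blocked calls per busy period agrees with m1 = 1/2.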
Revised 2008-03-05


Alternative solution to Question 4: The probability that k specific channels (our channels) are busy is given by the Palm-Jacobæus formula (Sec. ??). For Erlang's loss system we use (??):

H(k) = E_n(A)/E_{n−k}(A) ,   k = 1, 2, . . . , n .

For n = 4 and k = 2 we find, using the values from Question 3:

H(2) = (2/21)/(2/5) = 5/21 .   q.e.d.

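The ratio form of the Palm-Jacobæus formula is easy to evaluate with the Erlang-B recursion; a small sketch (the function names are our own):

```python
from fractions import Fraction

def erlang_b(A, n):
    # standard Erlang-B recursion with E_0(A) = 1
    E = Fraction(1)
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def pj(A, n, k):
    # Palm-Jacobaeus: probability that k specific channels are busy
    return erlang_b(A, n) / erlang_b(A, n - k)

print(pj(Fraction(2), 4, 2))   # 5/21, as found above
```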
Alternative solution 2 to Question 6: Generating functions

By applying the theory for the number of events in a Poisson process during a stochastic time interval (Sec. ??), we find the Z-transform of the distribution of the number of events during a busy period (??):

Z_n(z) = λ*(λ(1 − z)) ,

where λ*(s) is the Laplace transform of the busy-period distribution. For the exponential distribution we have (??):

λ*(s) = 4μ/(4μ + s) = 2/(2 + s) ,

and thus:

Z_n(z) = 2/(2 + λ(1 − z)) = 2/(2 + 1 · (1 − z)) ,

which may be written as:

Z_n(z) = (2/3) / (1 − (1/3) z) .

This is the Z-transform of the geometric distribution derived above, which is seen as follows:

Z(z) = Σ_{i=0}^∞ p (1 − p)^i z^i = p/(1 − (1 − p) z) ,

which corresponds to the above with p = 2/3.


Exercise 4.11 (Exam 2006)

ERLANG'S LOSS SYSTEM WITH ORDERED HUNTING

We consider Erlang's loss system with n = 3 channels. The arrival rate is λ = 2 calls per time unit, and the mean holding time is 1/μ = 1/2 time unit.

1. Find the offered traffic.

2. Construct the state transition diagram and find the state probabilities under the assumption of statistical equilibrium.

3. Assume sequential (ordered) hunting of idle channels and find the traffic carried by each channel (the improvement function), using the recursion formula for the Erlang-B formula.

Denote the three channels by a, b, c (order of hunting).

4. Set up a state transition diagram which keeps record of the state of each channel, where the state is defined by the busy channels as shown in the figure below.
           

(Figure: the eight states 0, a, b, c, ab, ac, bc, abc, where each label lists the busy channels.)

5. Find the remaining state probabilities, using the results above and the following state probabilities:

p(b) = 19/240 ,   p(c) = 6/240 ,   p(ac) = 7/240 ,   p(bc) = 5/240 .

6. Find the traffic carried by each channel expressed by the state probabilities. What is the proportion of time channel a is busy and the other channels b and c are idle?


Solution to Exercise 4.11 (Exam 2006)

Question 1: By definition, the offered traffic is the average number of calls per mean service time:

A = λ/μ = 2 · (1/2) = 1 [erlang] .

Question 2: The state transition diagram becomes as follows:

(State transition diagram: states 0–3; arrival rate λ = 2 in every state; departure rates μ, 2μ, 3μ = 2, 4, 6.)

The relative state probabilities q(i) = p(i)/p(0), respectively the absolute state probabilities p(i), become:

q(0) = 1      p(0) = 6/16
q(1) = 1      p(1) = 6/16
q(2) = 1/2    p(2) = 3/16
q(3) = 1/6    p(3) = 1/16

Question 3: Applying the recursion formula for Erlang-B (4.27), and using the formula (4.14) for the traffic carried by each channel in a system with ordered hunting, we get:

E_0 = 1 ,
E_1 = A E_0/(1 + A E_0) = 1 · 1/(1 + 1 · 1) = 1/2 ,
E_2 = A E_1/(2 + A E_1) = (1/2)/(2 + 1/2) = 1/5 ,
E_3 = A E_2/(3 + A E_2) = (1/5)/(3 + 1/5) = 1/16 ,

a_1 = A (E_0 − E_1) = 1 − 1/2  = 1/2 [erlang] ,
a_2 = A (E_1 − E_2) = 1/2 − 1/5 = 3/10 [erlang] ,
a_3 = A (E_2 − E_3) = 1/5 − 1/16 = 11/80 [erlang] .

The total carried traffic becomes:

Y = a_1 + a_2 + a_3 = 225/240 = 15/16 ,

which is in agreement with:

Y = A · {1 − p(3)} = 1 · (1 − 1/16) = 15/16 .

Question 4:

The state transition diagram becomes as shown in the following figure. All arrivals (arrows to the right) have the rate λ = 2 and all departures (arrows to the left) have the rate μ = 2.
(Figure: detailed state transition diagram with the states 0, a, b, c, ab, ac, bc, abc.)

Question 5:

From Question 2 we have the global state probabilities p(i), which are independent of the order of hunting, and which may also be obtained by aggregating the states in Question 4 (we use the denominator 240 to get integer values). The missing state probabilities may be obtained without using flow balance equations (independently of the state transition diagram):

p(0) = 6/16 = 90/240 .

p(1) = p(a) + p(b) + p(c):
6/16 = p(a) + 19/240 + 6/240 ,   hence   p(a) = 65/240 .

p(2) = p(ab) + p(ac) + p(bc):
3/16 = 45/240 = p(ab) + 7/240 + 5/240 ,   hence   p(ab) = 33/240 .

p(abc) = p(3) = 1/16 = 15/240 .

We may control the result by the node balance equations, which must be fulfilled.

Question 6: We know that a channel carries one erlang when it is busy. Leaving out this factor one, we get, when denoting the traffic carried by channel x by a_x:

a_a = p(a) + p(ab) + p(ac) + p(abc) = (65 + 33 + 7 + 15)/240 = 120/240 = 1/2 .

a_b = p(b) + p(ab) + p(bc) + p(abc) = (19 + 33 + 5 + 15)/240 = 72/240 = 3/10 .

a_c = p(c) + p(ac) + p(bc) + p(abc) = (6 + 7 + 5 + 15)/240 = 33/240 = 11/80 .

This is in agreement with the values obtained in Question 3: a_a = a_1, a_b = a_2, a_c = a_3, and also with the total carried traffic Y = a_a + a_b + a_c = 15/16. The proportion of time channel a is busy and channels b and c are idle is:

p(a) = 65/240 = 13/48 .

Updated 2006-06-30
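The channel-by-channel traffics found in Questions 3 and 6 can be verified with the improvement-function relation a_{k+1} = A · (E_k − E_{k+1}); a sketch in exact arithmetic (function name `erlang_b` is ours):

```python
from fractions import Fraction

def erlang_b(A, n):
    # Erlang-B recursion: E_x = A*E_{x-1} / (x + A*E_{x-1}), E_0 = 1
    E = Fraction(1)
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A = Fraction(1)
a = [A * (erlang_b(A, k) - erlang_b(A, k + 1)) for k in range(3)]
print(a)        # carried traffic on channels a, b, c: 1/2, 3/10, 11/80
print(sum(a))   # total carried traffic 15/16
```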


Exercise 4.12 (Exam 2007)

ERLANG'S LOSS SYSTEM

We consider Erlang's loss system with n = 4 channels. The offered traffic is 3 erlang, and the mean holding time is chosen as one time unit.

1. Construct the state transition diagram and find the state probabilities.

2. Find the blocking probability for a random call attempt by using the recursion formula for the Erlang-B formula (show the recursions).

3. Given that a call attempt has been blocked, what is the probability that the next call attempt also is blocked?

4. Find the distribution of the number of calls blocked during a period when all channels are busy.

5. Find the proportion of time the first channel is idle and the other three channels are busy, under the assumption of random hunting.

The following question was not included at the exam:

6. Find the proportion of time the first channel is idle and (at the same time) the other three channels are busy, under the assumption of ordered (sequential) hunting.


Solution to Exercise 4.12 (Exam 2007)

Question 1: The state transition diagram of the system becomes as follows:

(State transition diagram: states 0–4; arrival rate λ = 3 in every state; departure rates μ, 2μ, 3μ, 4μ = 1, 2, 3, 4.)

The state probabilities are:

p(0) = 8/131 ,
p(1) = 24/131 ,
p(2) = 36/131 ,
p(3) = 36/131 ,
p(4) = 27/131 .

Question 2: Using the recursion formula (4.29):

E_x(A) = A · E_{x−1}(A) / (x + A · E_{x−1}(A)) ,   E_0(A) = 1 ,

we find for A = 3:

E_0(3) = 1 ,
E_1(3) = 3 · 1/(1 + 3 · 1) = 3/4 ,
E_2(3) = 3 · (3/4)/(2 + 3 · (3/4)) = 9/17 ,
E_3(3) = 3 · (9/17)/(3 + 3 · (9/17)) = 9/26 ,
E_4(3) = 3 · (9/26)/(4 + 3 · (9/26)) = 27/131 ,

which is in agreement with Question 1.

Question 3: If a call has been blocked, then we know that the system is in state [4]. The next call attempt will be blocked if it arrives before the next departure. As the arrival rate is λ = 3 and the departure rate in state [4] is 4μ = 4, the probability of blocking the next call attempt becomes:

p = λ/(λ + 4μ) = 3/7 .

Question 4: Every time a call attempts has been blocked the system is in the same state (no memory), so the number of blocked call attempts during a busy period becomes a geometrical distribution with mean value m1 = 7/4 = 1.75: p(i) = + 4 3 7 i 4 , 7 i 4 + 4

= Question 5:

i = 0, 1, 2, . . . .

When the system is in state [3], three channels are busy and one channel is idle. With random hunting the idle channel is a random channel, so the probability that it is the first channel which is idle becomes:

p = p(3)/4 = (1/4) · (36/131) = 9/131 .

Question 6: (Not included at the exam.)

When we have ordered (sequential) hunting, the state first channel idle and three other channels busy can only arise by going through the state all channels busy, followed by the first channel becoming idle. State [3] may arise in two ways:

a: by jumping from state 4 to state 3, with intensity 4μ · p(4);

b: by jumping from state 2 to state 3, with intensity λ · p(2).

Knowing that we are in state 3, the conditional probability of having arrived from state 4 thus becomes:

4μ p(4) / (4μ p(4) + λ p(2)) .

Only every fourth time will it be channel number one which first becomes idle. As the probability of being in state 3 is p(3), we find the unconditional probability as:

p{1 idle | 2−4 busy} = (1/4) · p(3) · 4μ p(4)/(4μ p(4) + λ p(2)) .

Inserting the probabilities obtained in Question 1 we get:

p{1 idle | 2−4 busy} = (1/4) · (36/131) · (4 · 27/131)/(4 · 27/131 + 3 · 36/131)
                     = (1/4) · (36/131) · (1/2) = 9/262 .

Alternative method of solution:
If we split state [3] into two states, corresponding to channel one being idle or busy, then we get the state transition diagram shown in the figure below. The node equation for state (0,3) becomes:

(λ + 3μ) p(0,3) = μ p(4) ,

p(0,3) = μ/(λ + 3μ) · p(4) = 1/(A + 3) · p(4) = (1/6) · (27/131) = 9/262 .   q.e.d.
(Figure: state transition diagram where state [3] is split into (0,3) — channel one idle, three channels busy — and (1,2) — channel one busy — together with states [2] and [4].)

Updated: 2009-03-04


Exercise 5.7 (Exam 1997)

ENGSET'S SYSTEM WITH HYPER-EXPONENTIAL HOLDING TIMES

We consider Engset's loss system with S = 5 sources. The holding times are hyper-exponentially distributed with the following parameters:

- With probability 1/3 the holding time is exponentially distributed with mean value 2 (intensity 1/2) (phase a).

- With probability 2/3 the holding time is exponentially distributed with mean value 1/2 (intensity 2) (phase b).

(Figure: hyper-exponential distribution with branch probabilities 1/3 and 2/3 and phase intensities 1/2 and 2.)

1. Find the mean value m1 and the form factor ε of the holding time distribution.

An idle source generates one call attempt per time unit.

2. Find the offered traffic per idle source β, and the total offered traffic A.

The above-mentioned traffic is offered to a fully accessible group with n = 3 channels. The state of the system is given by (i, j), where i denotes the number of busy servers in phase a, and j denotes the number of busy servers in phase b. (Notice that the number of idle sources is S − i − j.)

3. Construct the state transition diagram, where we now have the states shown in the following figure.


(Figure: two-dimensional state transition diagram with the states (i, j): 00, 10, 20, 30, 01, 11, 21, 02, 12, 03.)

4. Show that the state transition diagram is reversible.

5. Find the relative state probabilities expressed by state p(0,0), and then find the absolute state probabilities by normalisation.

6. Find the aggregated state probabilities p(x), which indicate that a total of x (x = 0, 1, 2, 3) channels are busy (x = i + j), and show that we find the same state probabilities when the holding times are exponentially distributed with the same mean value (Engset's system is insensitive to the holding time distribution).

The following questions were not part of the exam:

7. Does the system have product form?

8. Find for both traffic streams the time congestion E, the call congestion B, and the traffic congestion C from the two-dimensional state transition diagram.


Solution to Exercise 5.7 (Exam 1997)

Question 1: The values asked for are obtained from (2.67), respectively (2.68):

m1 = (1/3) · 2 + (2/3) · (1/2) = 1 ,

ε = 2 · {(1/3)/(1/2)² + (2/3)/2²} / m1² = 3 .

We may also use the formulæ for combining stochastic variables in parallel (2.58):

m_ℓ = Σ_i p_i · m_{ℓ,i} .

The second moment of an exponential distribution with intensity λ is 2/λ² (??), and we find:

m2 = (1/3) · 2/(1/2)² + (2/3) · 2/2² = 3 .

As ε = m2/m1², we of course get the same result as above.

Question 2: From Sec. 5.2.2, formulæ (5.9), (5.10) and (5.11), we get:

Offered traffic per idle source:   β = γ · m1 = 1 erlang ,
Offered traffic per source:        a = β/(1 + β) = 1/2 erlang ,
Total offered traffic:             A = S · a = 5 · (1/2) = 2.5 erlang .

Question 3: The state transition diagram is shown in the figure below.

Question 4: (Based on the theory of Sec. 7.2.) We find that the circulation flow in all three squares is zero, and therefore the process is reversible:

(Figure: two-dimensional state transition diagram for the states (i, j); each of the 5 − i − j idle sources generates one call per time unit, split 1/3 to phase a and 2/3 to phase b; departure rates i/2 and 2j.)

In square (1,1), square (2,1) and square (1,2), the flow clockwise equals the flow counter-clockwise (the product of the four transition rates is the same in both directions), so the circulation flow is zero.

Question 5: The relative state probabilities, with q(0,0) = 1 (rows are j = 3, 2, 1, 0; columns are i = 0, 1, 2, 3):

j = 3:  10/27
j = 2:  10/9   20/9
j = 1:  5/3    40/9   40/9
j = 0:  1      10/3   40/9   80/27

Multiplying all values by 27 gives integers:

j = 3:  10
j = 2:  30   60
j = 1:  45   120   120
j = 0:  27   90    120   80

with the total sum = 702.
INDEX The absolute state probabilities become: 10 702 30 702 45 702 27 702 0 27 1 = , 702 26 90 , 702

457

3 2 1 0

60 702 120 702 90 702 1

120 702 120 702 2

80 702 3

i.e.

p00 =

p10 =

p20 =

120 , 702

etc.

Question 6:

For the Engset case with 5 sources, 3 channels, = 1 and average holding time m1 = 1 we nd: q0 = 1, q1 = 5 , q2 = 10, q3 = 10 , 5 10 10 1 , p1 = , p2 = , p3 = . 26 26 26 26 From the two-dimensional system with hyper-exponential holding times we nd the following global (aggregated) state probabilities: p0 = p(0) = p00 = 1 = 0.0385 , 26 135 5 = = 0.1923 , 702 26 270 10 = = 0.3846 , 702 26 270 10 = = 0.3846 , 702 26

p(1) = p01 + p10 =

p(2) = p20 + p11 + p02 =

p(3) = p30 + p21 + p12 + p03 =

which is the same as for the one-dimensional system. We have thus shown that Engset's model is valid for both exponential and hyper-exponential holding times. In fact, it is valid for any holding time distribution (insensitivity).

Question 7:


By calculating the marginal distributions p(i,·) and p(·,j) we notice that there is no product form. Product form requires that

p(i, j) = p(i,·) · p(·,j) ,

apart from a constant normalisation factor. Product form requires, e.g., that the ratio between p(0,j) and p(1,j) is the same for all j (all rows). This is not fulfilled in our case. We may thus have a process which is reversible (has local balance) and insensitive, without having product form.

Question 8: Of course, we may also find the time congestion E, the call congestion B, and the traffic congestion C from the two-dimensional state transition diagram, directly from the definitions. We find the same results as for the one-dimensional Engset case.
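The one-dimensional Engset distribution used for comparison in Question 6 can be sketched directly (S = 5, β = γ/μ = 1, n = 3); the script is our own check:

```python
from fractions import Fraction
from math import comb

S, n, beta = 5, 3, Fraction(1)
q = [comb(S, i) * beta**i for i in range(n + 1)]   # truncated binomial weights
s = sum(q)
p = [x / s for x in q]
print(p)   # p(0..3) = 1/26, 5/26, 10/26, 10/26, matching the aggregated 2-D values
```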
Updated: 2008-03-13


Exercise 5.9 (Exam 2000)

ENGSET'S LOSS SYSTEM

We consider Engset's loss system with 3 servers, which are offered traffic from 4 homogeneous sources. An idle source generates calls with the intensity γ = 1/2 [calls/time unit], and the service time is exponentially distributed with the mean value 1/μ = 1 [time unit].

1. Find the total offered traffic from the 4 sources.

2. Set up the state transition diagram, and find the state probabilities under the assumption of statistical equilibrium.

3. Find the time congestion, call congestion, and traffic congestion, using results from Questions 1 and 2.

4. Find the distribution (density function) of the number of calls which are blocked during a period where all three servers are busy.

5. Derive the state probabilities of the system by convolving the state probabilities of 4 single sources and truncating the state probabilities at state 3.


Solution to Exercise 5.9 (Exam 2000)

Question 1:

When there is no blocking, a single source alternates between the states on (busy) and off (idle):

(Figure: on/off process of a single source as a function of time.)

The state probabilities of a single source become:

p(0) = 2/3 ,   p(1) = 1/3 .

Offered traffic per idle source (5.9):

β = γ/μ = 1/2 .

Offered traffic per source (= carried traffic per source in a system with no blocking) (5.10):

a = β/(1 + β) = 1/3 .

The total offered traffic from the four sources becomes (5.11):

A = 4 · a = 4/3 [erlang] .

Question 2: The resulting state transition diagram is shown in the following figure:

(State transition diagram: states 0–3; arrival rates 4γ, 3γ, 2γ = 2, 3/2, 1; departure rates μ, 2μ, 3μ = 1, 2, 3.)

The relative state probabilities q(i), respectively the absolute state probabilities p(i), become:

q(0) = 1      p(0) = 2/10
q(1) = 2      p(1) = 4/10
q(2) = 3/2    p(2) = 3/10
q(3) = 1/2    p(3) = 1/10
INDEX
#
4 2

461
# j
3 2

"! "! "! "!

# j

2 2

# j

Question 3: The time congestion E is the proportion of time all three channels are busy:

E = p(3) = 1/10 .

The call congestion B is the proportion of call attempts which are blocked. If we denote the arrival intensity (rate) in state i by λ_i = (4 − i) γ, then we have:

B = λ_3 p(3) / {λ_0 p(0) + λ_1 p(1) + λ_2 p(2) + λ_3 p(3)}

  = (1/2)(1/10) / {2 · (2/10) + (3/2) · (4/10) + 1 · (3/10) + (1/2) · (1/10)}

  = 1/27 = 0.0370 .

The traffic congestion C is the proportion of the offered traffic which is blocked:

C = (A − Y)/A ,

where Y is the carried traffic:

Y = Σ_{i=0}^{3} i · p(i) = 1 · (4/10) + 2 · (3/10) + 3 · (1/10) = 13/10 .

C = (4/3 − 13/10) / (4/3) = 1/40 = 0.0250 .

C may also be obtained from (5.34):

C = (S − n)/S · E = (4 − 3)/4 · (1/10) = 1/40 .   q.e.d.
Question 4:

During a busy period we are in state 3. The next call attempt is blocked if it arrives before an existing call terminates. As in state 3 the arrival intensity is 1/2 and the departure intensity is 3, we get:

p(blocking) = 1 − p = (1/2)/(3 + 1/2) = 1/7 .

In a similar way, the probability that the next call attempt is accepted becomes:

p = 6/7 .

Due to the Markov property (exponentially distributed time intervals) the process has no memory, and after blocking a call attempt the system is still in the same state. Therefore, the number of blocked call attempts during a busy period becomes geometrically distributed:

p(i) = (1 − p)^i · p = (1/7)^i · (6/7) ,   i = 0, 1, 2, . . .
The distribution starts in i = 0 and thus has the mean value (cf. the text of Tab. 3.1): 1 7 1 1= 1= . p 6 6 Question 5: From the state probabilities of a single source, which we obtained in Question 1 we get:

# sources 0 1 2 3 4

1
2 3 1 3

1
2 3 1 3

2
4 9 4 9 1 9

1
2 3 1 3

3
8 27 12 27 6 27 1 27

1
2 3 1 3

4
16 81 32 81 24 81 8 81 1 81

Note that the call congestion for 4 sources in Question 2 is equal to the time congestion with 3 sources (1/27). We truncate the state probabilities at 3 channels and get:

State   Unnormalised   Normalised
0       16/81          16/80 = 2/10
1       32/81          32/80 = 4/10
2       24/81          24/80 = 3/10
3       8/81           8/80  = 1/10

This is of course the same as we found in Question 2.
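The convolution-and-truncation procedure of Question 5 can be written out directly; a small sketch of our own:

```python
from fractions import Fraction

single = [Fraction(2, 3), Fraction(1, 3)]   # idle/busy probabilities of one source

def convolve(p, q):
    # discrete convolution of two probability vectors
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

p = [Fraction(1)]
for _ in range(4):              # four independent sources
    p = convolve(p, single)
p = p[:4]                       # truncate at n = 3 channels
p = [x / sum(p) for x in p]     # renormalise
print(p)   # 2/10, 4/10, 3/10, 1/10, as in Question 2
```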


Updated 2005-02-23


Exercise 5.10 (Exam 2001)

LOSS SYSTEM WITH STATE-DEPENDENT ARRIVAL INTENSITY

We consider a loss system with n = 2 channels. The state of the system i is defined as the number of busy channels. Customers arrive according to a state-dependent Poisson process with intensity

λ(i) = (3 − i)/(4 − i) · λ [customers per time unit] ,   0 ≤ i ≤ 3 .

For all other states λ(i) = 0. We choose λ = 1 customer per time unit, and the service time is exponentially distributed with intensity μ = 1 customer per time unit.

1. Construct the state transition diagram of the system.

2. Find the state probabilities of the system under the assumption of statistical equilibrium, and give the time congestion E.

3. Find the state probabilities Π(i) as they are observed by an arbitrary arriving customer, and find the call congestion B.

4. Find the offered traffic, which is defined as the traffic carried in a system without blocking, and find the traffic congestion C.

5. Assume that both channels are busy. What is the probability that the next event is a call attempt (which of course will be blocked)? Find the distribution of the number of calls which are blocked during a busy period.

6. Give the state probabilities as they are seen by a customer which has just departed from the system. We include customers which are blocked.


Solution to Exercise 5.10 (Exam 2001)

Question 1:

With the given arrival rates (= intensities) we get the following state transition diagram (the self-loop of blocked arrivals in state 2 is usually not included):

(State transition diagram: states 0–2; arrival rates λ(0) = 3/4 and λ(1) = 2/3; departure rates μ = 1 and 2μ = 2; in state 2 arrivals occur with rate λ(2) = 1/2 and are blocked.)

We notice that λ(3) is also given, but it has no influence upon the state transition diagram, because it is zero: λ(3) = 0.

Question 2: Denoting the non-normalized state probabilities by q_i and the normalized state probabilities by p_i, we find:

q0 = 1      p0 = 4/8
q1 = 3/4    p1 = 3/8
q2 = 1/4    p2 = 1/8

Sum = 2     Sum = 1 .

The time congestion is the proportion of time all channels are busy:

E = p2 = 1/8 .

Question 3: The number of customers arriving while the system is in a given state i is proportional both to the state probability p_i and to the arrival rate λ_i. Per time unit, the following numbers of customers arrive in the different states:

State 0:   λ_0 · p0 = (3/4) · (4/8) = 6/16 ,
State 1:   λ_1 · p1 = (2/3) · (3/8) = 4/16 ,
State 2:   λ_2 · p2 = (1/2) · (1/8) = 1/16 ,
Total:                               11/16 .
The above are the average numbers of calls arriving during one time unit. Thus we get the following call-average arrival state probabilities:

Π0 = 6/11 ,   Π1 = 4/11 ,   Π2 = 1/11 .

The call congestion is the proportion of all call attempts blocked:

B = Π2 = 1/11 .

Question 4: The offered traffic is defined as the traffic carried in a system without blocking. If we have three channels, then no calls are blocked (λ(3) = 0). We get the corresponding state transition diagram with the additional state 3, arrival rate λ(2) = 1/2 from state 2 to state 3, and departure rate 3μ = 3 from state 3. Extending the results of Question 2 with one state more, we find:

q0 = 1       p0 = 24/49
q1 = 3/4     p1 = 18/49
q2 = 1/4     p2 = 6/49
q3 = 1/24    p3 = 1/49

Sum = 49/24   Sum = 1 .

The offered traffic then becomes:

A = Σ_{i=0}^{3} i · p(i) = 33/49 = 0.6735 .
The carried traffic is obtained from the state probabilities in Question 2:

Y = Σ_{i=0}^{2} i · p(i) = 5/8 = 0.6250 .

Thus the traffic congestion becomes:

C = (33/49 − 5/8) / (33/49) = 19/264 = 0.0720 .

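All three congestion measures of this exercise follow mechanically from the state-dependent birth rates; a small sketch of our own:

```python
from fractions import Fraction

lam = [Fraction(3 - i, 4 - i) for i in range(3)]   # lambda(i) = (3-i)/(4-i), i = 0..2
mu = 1

def probs(n):
    # birth-and-death state probabilities for n channels
    q = [Fraction(1)]
    for i in range(n):
        q.append(q[-1] * lam[i] / ((i + 1) * mu))
    return [x / sum(q) for x in q]

p2 = probs(2)                                      # the real 2-channel system
E = p2[2]                                          # time congestion
arr = [lam[i] * p2[i] for i in range(3)]
B = arr[2] / sum(arr)                              # call congestion
p3 = probs(3)                                      # fictitious system without blocking
A = sum(i * pi for i, pi in enumerate(p3))         # offered traffic
Y = sum(i * pi for i, pi in enumerate(p2))         # carried traffic
C = (A - Y) / A                                    # traffic congestion
print(E, B, C)   # 1/8, 1/11, 19/264
```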
Question 5: In the state where both channels are occupied, the arrival rate is λ_2 = 1/2 and the departure rate is 2μ = 2. Thus the probability that the next event is an arrival becomes (2.42):

p(arrival) = λ_2/(λ_2 + 2μ) = (1/2)/(1/2 + 2) = 1/5 .

The call attempt is blocked and the state does not change. As the process is without memory, the number of blocked calls during a busy period becomes geometrically distributed:

p(i) = (1/5)^i · (4/5) ,   i = 0, 1, . . .

This is a geometric distribution starting at i = 0. The mean value is (cf. the text of Tab. 3.1):

m1 = 1/p − 1 = 5/4 − 1 = 1/4 .

Question 6: We proceed in the same way as in Question 3. Per time unit, the numbers of customers departing from the different states (and thus observing the following states when they look back) are:

State 0:   μ · p1  = 6/16 ,
State 1:   2μ · p2 = 4/16 ,
State 2:   λ_2 · p2 = 1/16 .

Notice that we have two types of calls departing from state 2: served calls and blocked calls. The customers who see state 2 are the blocked customers. Thus we get the following call-average departure state probabilities by normalizing with the total number of calls departing per time unit (11/16, which equals the number of calls arriving per time unit):

Π0 = 6/11 ,   Π1 = 4/11 ,   Π2 = 1/11 .

These state probabilities are identical with the state probabilities observed by an arriving customer in Question 3, because the process is reversible. The customers on the average see the same state when they arrive as they see when they depart.
Updated: 2009-03-24


Exercise 5.11 (Exam 2003)

ALOHA MODEL WITH ENGSET TRAFFIC

We consider an Engset model with S = 4 sources. The mean holding time is chosen as the time unit (1/μ = 1). The arrival rate of an idle source is γ = 1/3. Both time intervals are exponentially distributed. The number of channels is infinite, i.e. n ≥ S. The state of the system is defined as the number of busy channels. The above system is a model of a non-slotted Aloha system with S transmitters and exponentially distributed packet lengths.

1. Find the offered traffic A.

2. Construct the state transition diagram and find, under the assumption of statistical equilibrium, the state probabilities p(i), (i = 0, 1, . . . , 4).

3. Find the state probabilities Π(i), (i = 0, 1, . . . , 4), as they are observed by an arriving customer just before arrival (call averages). (Use either the state probabilities obtained in Question 2 as starting point, or use the arrival theorem.)

4. What is the probability that a call arriving in state zero (and thus changing the state of the system into state one) will complete service before the next call arrives? This corresponds to a successful call transmission in the Aloha protocol.

5. What is the mean holding time of successfully transmitted calls?


Solution to exercise 5.11 (exam 2003)

Question 1:

The Engset case is dealt with in Chap. 5. The offered traffic becomes (5.11):

A = S·β = S · γ/(γ + μ) ,   where γ = 1/3 and μ = 1 ,

A = 4 · (1/3)/(1 + 1/3) = 1 [erlang] .

Alternative approach: if no calls are blocked, a source alternates between being idle for three time units and busy for one time unit (mean values). In this case the carried traffic equals the offered traffic, which is 1/4 erlang per source (the source is busy 25% of the time). As we have 4 sources, the total offered traffic becomes 1 [erlang].

Question 2:

We get the following state transition diagram (cf. Fig. 5.4), a birth and death process with arrival rate (4 − i)γ in state i (i.e. 4/3, 3/3, 2/3, 1/3) and departure rate iμ. If we denote the non-normalised

state probabilities by q(i) and the normalised state probabilities by p(i), we find:

q(0) = 1        p(0) =  81/256 ,
q(1) = 4/3      p(1) = 108/256 ,
q(2) = 2/3      p(2) =  54/256 ,
q(3) = 4/27     p(3) =  12/256 ,
q(4) = 1/81     p(4) =   1/256 ,
Sum  = 256/81   Sum  = 1 .

Question 3:

During one time unit the average number of customers arriving in each state is:

State [0]:  (4/3)·(81/256)  = 27/64
State [1]:  (3/3)·(108/256) = 27/64
State [2]:  (2/3)·(54/256)  =  9/64
State [3]:  (1/3)·(12/256)  =  1/64
State [4]:      0·(1/256)   =  0
Total:                         1

In general these numbers do of course not add to one: they are numbers of calls, not probabilities. (If we considered two time units, they would add to two.) After normalising the number of calls arriving in each state by the total number of calls arriving per time unit, we obtain the state probabilities observed by arriving calls:

Π(0) = 27/64 ,   Π(1) = 27/64 ,   Π(2) = 9/64 ,   Π(3) = 1/64 ,   Π(4) = 0 .
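As a numerical check of Questions 2 and 3, the computation can be sketched in a few lines of Python. This is our own sketch, not part of the original solution: it builds the binomial state probabilities for n ≥ S channels, derives the call averages by weighting each state with its arrival rate, and verifies the arrival theorem (the function name `time_averages` is ours):

```python
from fractions import Fraction
from math import comb

gamma, mu = Fraction(1, 3), 1

def time_averages(S):
    # With n >= S channels the state probabilities are binomial:
    # q(i) = C(S, i) * (gamma/mu)^i, then normalise
    q = [comb(S, i) * (gamma / mu) ** i for i in range(S + 1)]
    total = sum(q)
    return [x / total for x in q]

p = time_averages(4)
assert p[0] == Fraction(81, 256) and p[4] == Fraction(1, 256)

# Call averages: weight state i by its arrival rate (S - i) * gamma, then normalise
w = [(4 - i) * gamma * p[i] for i in range(5)]
Pi = [x / sum(w) for x in w]
assert Pi[:4] == [Fraction(27, 64), Fraction(27, 64), Fraction(9, 64), Fraction(1, 64)]

# Arrival theorem: call averages = time averages of the system with one source less
assert Pi[:4] == time_averages(3)
```

Exact rational arithmetic via `fractions.Fraction` reproduces the hand-computed fractions without rounding.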

Alternatively we could use the arrival theorem, which says that the call averages are equal to the time averages with one source less, i.e. the state probabilities of a system with S = 3 sources and n ≥ 3 channels. This is a birth and death process with arrival rates 3/3, 2/3, 1/3 and departure rates 1, 2, 3, so the non-normalised state probabilities q(i) and the normalised state probabilities p(i) become:

q(0) = 1       p(0) = 27/64 ,
q(1) = 1       p(1) = 27/64 ,
q(2) = 1/3     p(2) =  9/64 ,
q(3) = 1/27    p(3) =  1/64 ,
Sum  = 64/27   Sum  = 1 .

This is of course the same result as obtained above.

Question 4:

A customer arriving in state zero brings the system into state one. In state one new calls arrive with rate 3γ = 1, and the holding time of the considered customer terminates with rate μ = 1. The probability that the first event is completion of the holding time becomes (Sec. 2.2.7):

p{complete service before new arrival} = μ/(μ + 3γ) = 1/(1 + 1) = 1/2 .

Question 5:

In Question 4 the next event took place after an exponentially distributed time interval with total rate 2, i.e. successful calls terminate after an exponentially distributed time interval with rate 2 and therefore have the mean holding time:

m(1, success) = 1/2 [time units] .

Unsuccessful calls have two phases. First they have an exponential time interval with mean 1/2, just as the successful calls. After the collision they still have a remaining service time with mean one, as the exponential service time has no memory. So the total mean value for unsuccessful calls becomes:

m(1, collision) = 1/2 + 1 = 3/2 [time units] .

Thus short calls are in general more successful than longer calls, which is quite obvious.

The global mean value over successful and unsuccessful calls becomes one, which is the mean service time for all calls. A phase diagram of the two branches (success with probability 1/2, collision with probability 1/2 followed by an extra exponential phase) is, by comparison with Fig. 2.11, equivalent to a single exponential distribution with mean value one.
Updated: 2007-03-13


Exercise 5.12

(exam 2005)

Engset model with inhomogeneous sources

We consider a fully accessible Engset loss system with n = 3 channels. The system is offered traffic from S = 4 sources. The arrival rate of an idle source is γ1 = 1/2 call attempts per time unit. The mean holding time is chosen as time unit (1/μ1 = 1). All time intervals are exponentially distributed. The state of the system is defined as the number of busy channels, and every busy source occupies one channel.

1. Find the offered traffic.

2. Find the state probabilities of the system by convolving the state probabilities of the 4 individual sources, truncating at 3 channels, and normalising.

3. Find the time congestion E, the call congestion B, and the traffic congestion C.

We now add a source different from the above sources, having both mean idle time and mean holding time equal to one time unit (γ2 = μ2 = 1). The source occupies one channel when it is busy.

4. Find, by convolving this source with the above system, the time congestion for both types of calls.

5. Find the call congestion for both types of calls by applying the arrival theorem.

6. Find the traffic congestion for both types of sources by looking at the individual terms during the convolution (use eventually a two-dimensional state transition diagram as aid).


Solution to exercise 5.12 (exam 2005)

Question 1:

If there is no congestion, then on the average a source is busy for one time unit and idle for two time units. So the offered traffic per source is β = γ1/(γ1 + μ1) = 1/3 (5.10). The total offered traffic becomes (5.11):

A = S·β = 4/3 [erlang] .

Question 2:

The state probabilities of a single source follow a Bernoulli distribution (the binomial distribution for a single source):

p(0) = 2/3 ,   p(1) = 1/3 .

By using the relative state probabilities 2 : 1 we get by convolving the 4 sources:

State   S1   S2   S12   S3   S123   S4   S1234   Truncated   Normalised
  0      2    2     4    2      8    2      16        16        2/10
  1      1    1     4    1     12    1      32        32        4/10
  2      0    0     1    0      6    0      24        24        3/10
  3      0    0     0    0      1    0       8         8        1/10
  4      0    0     0    0      0    0       1         -          -

To the right we have the normalised state probabilities. As a control we may of course use the formulae in the textbook (5.26), or a state transition diagram with arrival rate (4 − i)·γ1 = (4 − i)/2 in state i and departure rate i·μ1 = i:

[state transition diagram: states 0 to 3 with upward rates 4/2, 3/2, 2/2 and downward rates 1, 2, 3]

We see that the above state probabilities fulfil the node balance equations and add to one. Therefore this is the unique solution.
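The convolution in the table above is easy to mechanise. The following Python sketch (our own; the function name `convolve` is an assumption, not textbook notation) convolves the four identical sources with truncation at n = 3 channels and reproduces the normalised state probabilities:

```python
from fractions import Fraction

def convolve(a, b, n):
    # Convolve two vectors of relative state probabilities, truncating at n channels
    out = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= n:
                out[i + j] += ai * bj
    return out

n = 3
source = [2, 1]         # one source: relative probabilities {idle, busy} = 2 : 1
q = [1]
for _ in range(4):      # convolve the four identical sources
    q = convolve(q, source, n)

assert q == [16, 32, 24, 8]
p = [Fraction(x, sum(q)) for x in q]
assert p == [Fraction(2, 10), Fraction(4, 10), Fraction(3, 10), Fraction(1, 10)]
```

Truncating during each intermediate convolution gives the same result for states 0 to n as truncating only at the end.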

Question 3:

From the state probabilities we get the time congestion:

E = p(3) = 1/10 = 0.1000 .

Using the arrival theorem, the call congestion is the time congestion with one source less, i.e. three sources. From the table for 3 sources (column S123) we get:

B = 1/(8 + 12 + 6 + 1) = 1/27 = 0.0370 .

The carried traffic is:

Y = Σ i·p(i) = 0·(2/10) + 1·(4/10) + 2·(3/10) + 3·(1/10) = 13/10 .

As the offered traffic is A = 4/3 (Question 1), we get:

C = (A − Y)/A = (4/3 − 13/10)/(4/3) = 1/40 = 0.0250 .

We may of course also use the formulae of the textbook, e.g. (5.34):

C = (S − n)/S · E = (4 − 3)/4 · (1/10) = 1/40 ,   q.e.d.

Question 4:

State   S1234   S5   Convolution S12345   Normalised
  0        2     1    2·1 = 2              2/19 = 0.1053
  1        4     1    2·1 + 4·1 = 6        6/19 = 0.3158
  2        3     1    4·1 + 3·1 = 7        7/19 = 0.3684
  3        1     1    3·1 + 1·1 = 4        4/19 = 0.2105

We still consider a system with 3 channels. The time congestion for both types of sources (both requiring one channel) becomes:

E1 = E2 = 4/19 = 0.2105 .

Question 5:

The arrival theorem tells us that the call congestion of a source is equal to the time congestion of the same system without this source. For the type 2 source, the system without it is the four-source system of Questions 1 to 3, so from Question 3:

B2 = 1/10 = 0.1000 .

The call congestion of type 1 is the time congestion of a system with 3 type 1 sources (S123) and the type 2 source (S5):

State   S123   S5   S1235   Normalised
  0       8     1      8     8/53 = 0.1509
  1      12     1     20    20/53 = 0.3774
  2       6     1     18    18/53 = 0.3396
  3       1     1      7     7/53 = 0.1321

Thus the call congestion of type 1 sources becomes:

B1 = 7/53 = 0.1321 .

Question 6:

From the convolution scheme in the table in Question 4 we see that the terms whose S1234-factor is the state 0 value correspond to zero erlang of type 1 traffic, the terms with S1234-state 1 correspond to one erlang, those with S1234-state 2 to two erlang, and the term with S1234-state 3 to three erlang. Thus we get the carried traffic, remembering that the normalisation factor is 1/19 and leaving out zero terms:

Y1 = {1·(4·1 + 4·1) + 2·(3·1 + 3·1) + 3·(1·1)}/19 = 23/19 .

As the offered traffic is A1 = 4/3, we get the following traffic congestion for sources of type 1:

C1 = (A1 − Y1)/A1 = (4/3 − 23/19)/(4/3) = 7/76 = 0.0921 .

In a similar way we identify the terms contributing to the traffic of type 2 (all with 1 erlang) as the terms on the second diagonal in the table of Question 4, i.e. those where the S5-factor corresponds to a busy type 2 source:

Y2 = {2·1 + 4·1 + 3·1}/19 = 9/19 ,

C2 = (A2 − Y2)/A2 = (1/2 − 9/19)/(1/2) = 1/19 = 0.0526 .

This question may also be solved by considering the two-dimensional state transition diagram of the two streams: one with the four identical sources (horizontally), and one with the additional source (vertically). As the state transition diagram is reversible (Chapter 7), we get the following relative state probabilities q(i, j), where i is the number of busy type 1 sources and j the number of busy type 2 sources (i + j ≤ 3):

         i = 0   i = 1   i = 2   i = 3   Sum
j = 1      2       4       3       -       9
j = 0      2       4       3       1      10
Sum        4       8       6       1      19

The normalisation constant is 19. From the marginal state probabilities we get the carried traffic of each type:

Y1 = (0·4 + 1·8 + 2·6 + 3·1)/19 = 23/19 ,

Y2 = (0·10 + 1·9)/19 = 9/19 ,

which are the values obtained above.
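The heterogeneous case of Questions 4 to 6 can be verified directly from the two-dimensional product form. A Python sketch of our own (exact arithmetic; variable names are assumptions, not textbook notation):

```python
from fractions import Fraction

n = 3
q1 = [16, 32, 24, 8]    # four type 1 sources (relative probabilities), truncated at n
q2 = [1, 1]             # the additional source (gamma2 = mu2 = 1)

# Joint relative probabilities q(i, j): i type 1 busy, j type 2 busy, i + j <= n
q = {(i, j): q1[i] * q2[j] for i in range(4) for j in range(2) if i + j <= n}
total = Fraction(sum(q.values()))               # 152 = 8 * 19

E = sum(v for (i, j), v in q.items() if i + j == n) / total
assert E == Fraction(4, 19)                     # time congestion for both types

Y1 = sum(i * v for (i, j), v in q.items()) / total
Y2 = sum(j * v for (i, j), v in q.items()) / total
assert (Y1, Y2) == (Fraction(23, 19), Fraction(9, 19))

A1, A2 = Fraction(4, 3), Fraction(1, 2)
assert (A1 - Y1) / A1 == Fraction(7, 76)        # C1 = 0.0921
assert (A2 - Y2) / A2 == Fraction(1, 19)        # C2 = 0.0526
```

The dictionary of joint relative probabilities is exactly the 19-state table above, scaled by 8.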


Updated: 2006-02-27


Exercise 5.13 Engset's loss system

(Exam 2008)

We consider Engset's loss system with n = 4 channels. The traffic is generated by S = 6 sources. An idle source generates call attempts with intensity γ = 1 call attempt per time unit. The mean service time is 1/μ = 1 time unit.

1. Find the offered traffic.

2. Construct the state transition diagram and find the state probabilities, time congestion, traffic congestion, and call congestion.

3. Find the time congestion by using the recursion formula for an increasing number of channels (show details of the recursions).

4. Given that a call attempt has been blocked, what is the probability that the next call attempt will also be blocked?

5. Find the proportion of time the first channel is idle and the other three channels are busy, under the assumption of random hunting.


Solution to exercise 5.13 (exam 2008)

Question 1:

The offered traffic is given by (5.9), (5.10) and (5.11):

a = γ/(γ + μ) ,   A = S·a = 6 · 1/(1 + 1) ,

A = 3 [erlang] .

Question 2:

The state transition diagram of the system is a birth and death process with arrival rate (6 − i)γ in state i and departure rate iμ. The state probabilities are:

p(0) = 1/57 ,   p(1) = 6/57 ,   p(2) = 15/57 ,   p(3) = 20/57 ,   p(4) = 15/57 .

Time congestion E:

E = p(4) = 15/57 = 5/19 = 0.2632 .

Traffic congestion C: The carried traffic is:

Y = Σ i·p(i) = 1·(6/57) + 2·(15/57) + 3·(20/57) + 4·(15/57) = 156/57 = 52/19 = 2.7368 ,

C = (A − Y)/A = (3 − 52/19)/3 = 5/57 = 0.0877 .

Call congestion B: The arrival rate in state i is λ(i) = (S − i)γ, so

B = λ(4)·p(4) / Σ λ(i)·p(i)

  = 2·p(4) / {6·p(0) + 5·p(1) + 4·p(2) + 3·p(3) + 2·p(4)}

  = (30/57) / {6/57 + 30/57 + 60/57 + 60/57 + 30/57}

  = 30/186 = 5/31 = 0.1613 .

Alternatively, or as a control, we could also use (5.46) to find C from E:

C = (S − n)/S · E = (6 − 4)/6 · 5/19 = 5/57 ,   q.e.d. ,

and B can then be obtained from C by (5.49):

B = (1 + γ)C/(1 + γC) = 2C/(1 + C) = 5/31 ,   q.e.d.

Question 3:

Using the recursion formula (5.52) for Engset's formula for an increasing number of channels,

E(x, S, γ) = (S − x + 1)γ·E(x−1, S, γ) / {x + (S − x + 1)γ·E(x−1, S, γ)} ,   E(0, S, γ) = 1 ,

we get for S = 6 and γ = 1, denoting E(x, S, γ) by E_x:

E_x = (7 − x)·E_{x−1} / {x + (7 − x)·E_{x−1}} ,   E_0 = 1 ,

E_1 = 6·1/(1 + 6·1) = 6/7 ,

E_2 = 5·(6/7)/(2 + 5·(6/7)) = 15/22 ,

E_3 = 4·(15/22)/(3 + 4·(15/22)) = 10/21 ,

E_4 = 3·(10/21)/(4 + 3·(10/21)) = 15/57 = 5/19 ,

in agreement with earlier results.
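The recursion is easily programmed. A Python sketch of our own, using exact arithmetic (the function name `engset_E` is an assumption):

```python
from fractions import Fraction

def engset_E(n, S, gamma):
    # Recursion (5.52): E_x = (S-x+1)*gamma*E_{x-1} / (x + (S-x+1)*gamma*E_{x-1})
    E = Fraction(1)                     # E_0 = 1
    for x in range(1, n + 1):
        t = (S - x + 1) * gamma * E
        E = t / (x + t)
    return E

assert engset_E(1, 6, 1) == Fraction(6, 7)
assert engset_E(2, 6, 1) == Fraction(15, 22)
assert engset_E(3, 6, 1) == Fraction(10, 21)
assert engset_E(4, 6, 1) == Fraction(5, 19)     # time congestion E = 0.2632
```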

Question 4:

If a call attempt has been blocked, then we know that the system is in state [4]. The next call attempt will also be blocked if it arrives before the next departure. As the arrival rate in state [4] is λ(4) = (S − 4)γ = 2 and the departure rate in state [4] is 4μ = 4, the probability of blocking the next call attempt becomes:

p = λ(4)/(λ(4) + 4μ) = 2/6 = 1/3 .

Question 5:

When the system is in state [3], three channels are busy and one channel is idle. With random hunting this is a random channel, so the probability that the first channel is the idle one becomes:

p = p(3)/4 = 5/57 .

Note: this exercise with Engset traffic is similar to exercise 4.13 from exam 2007, which had Erlang traffic.
Updated: 2008-05-27


Exercise 5.14

(Exam 2009)

Engset's loss system and insensitivity to idle times

We consider Engset's loss system with n = 3 channels. The traffic is generated by S = 5 sources. An idle source generates call attempts with intensity γ = 1 [call attempt/time unit]. The mean service time is 1/μ = 1 [time unit].

1. Find the offered traffic.

2. Construct the state transition diagram and find the state probabilities, time congestion, traffic congestion, and call congestion.

We now want to indicate by an example that the above state probabilities are insensitive to the idle-time distribution. We assume that the idle time is Erlang-2 distributed (phase a followed by phase b) with the same rate 2γ = 2 in both phases, so that the mean idle time still is one [time unit] as above.

We define the state of the system as (i, j, k), where i is the number of idle sources in the first phase (a), j is the number of idle sources in the second phase (b), and k is the number of busy sources (or channels). Note that i + j + k = 5, so that the state transition diagram is only two-dimensional.

3. Fill in the transition rates in the two-dimensional state transition diagram given below [figure omitted: the states (i, j, k) with i + j + k = 5 and k = 0, 1, 2, 3].

To be insensitive, it can be shown that for a given number of busy sources k (corresponding to a row in the state transition diagram) the number of idle sources in phases a and b must be binomially distributed, so that

p(i, j | k) = C(5 − k, i) · {m(1,a)/(m(1,a) + m(1,b))}^i · {m(1,b)/(m(1,a) + m(1,b))}^(5−k−i) ,

where C(·,·) denotes the binomial coefficient. Inserting the actual values we get:

p(i, j, k) = C(5 − k, i) · (1/2)^(5−k) · p(k) ,   k = 0, 1, 2, 3 ,

where p(k) are the state probabilities obtained above in Question 2.

4. Find these state probabilities (express for example all state probabilities by the fraction x/832; then all values of x become integers and p(5, 0, 0) = 1/832). Show that the state probabilities fulfil the node balance equations by considering the node balance equations for state (1, 2, 2).

5. Find an expression for the call congestion B from the two-dimensional state probabilities. (As a control, we get the same numerical value as in Question 2, which indicates that the Engset model is insensitive to the idle-time distribution.)


Solution to exercise 5.14 (exam 2009)

Question 1:

From the given parameters we get:

a = γ/(γ + μ) = 1/(1 + 1) = 1/2 ,

A = S·a = 5/2 [erlang] .

Question 2:

The state transition diagram is a birth and death process with arrival rate (5 − i)γ in state i and departure rate iμ. The relative and normalised state probabilities become:

State   q(i)    p(i)
  0       1     1/26
  1       5     5/26
  2      10    10/26
  3      10    10/26

The time congestion becomes:

E = p(3) = 10/26 = 5/13 = 0.3846 .

To find the traffic congestion C we first find the carried traffic:

Y = Σ i·p(i) = 1·(5/26) + 2·(10/26) + 3·(10/26) = 55/26 ,

C = (A − Y)/A = (5/2 − 55/26)/(5/2) = 2/13 = 0.1538 .

It is simpler to use (5.46):

C = (S − n)/S · E = (5 − 3)/5 · 5/13 = 2/13 ,   q.e.d.

The call congestion becomes (5.49):

B = (1 + γ)C/(1 + γC) = (1 + 1)·(2/13)/(1 + 2/13) = 4/15 = 0.2667 .

Question 3:

In the completed state transition diagram, the transitions out of state (i, j, k) are: an idle source in phase a moves to phase b with rate 2γ per source (total rate 2i, to state (i − 1, j + 1, k)); an idle source in phase b generates a call attempt with rate 2γ per source (total rate 2j, to state (i, j − 1, k + 1) when k < n); and a busy source completes service with rate μ per source (total rate k, the source re-entering phase a, to state (i + 1, j, k − 1)).

We may also find the call congestion B as the time congestion of a system with one source less (arrival theorem). This system has the relative state probabilities

q(i) proportional to { 1 : 4 : 6 : 4 } ,

and thus the time congestion

B = 4/(1 + 4 + 6 + 4) = 4/15 ,   q.e.d.

Alternatively, in (5.45) B is expressed directly by E.

Question 4:

The relative state probabilities q(i, j, k), which all should be divided by 832 to be normalised, become (remember i = 5 − j − k):

          j = 0   j = 1   j = 2   j = 3   j = 4   j = 5   q(k)
k = 3       80     160      80       -       -       -     320
k = 2       40     120     120      40       -       -     320
k = 1       10      40      60      40      10       -     160
k = 0        1       5      10      10       5       1      32
Sum        131     325     270      90      15       1     832

For state (1, 2, 2) we have the following flows:

Flow out = {2·1 + 2·2 + 2} · p(1, 2, 2) = 8 · (120/832) = 960/832 .

Flow in = 4·p(2, 1, 2) + 3·p(0, 2, 3) + 6·p(1, 3, 1)

        = 4·(120/832) + 3·(80/832) + 6·(40/832) = 960/832 .

Thus Flow out = Flow in for this node. This flow balance is fulfilled for all nodes.

Question 5:

In state (i, j, k), j is the number of idle sources generating new call attempts, each source having the rate 2γ = 2. From the state transition diagram we notice that a fixed value of j corresponds to a column of states. Thus the number of call attempts per time unit can be written as (remember that i is a function of j and k):

n_a = Σ_j Σ_k j·2·p(i, j, k) ,   j = 0, ..., S ,   k = 0, ..., min(n, S − j) ,

    = 1·2·{p(4,1,0) + p(3,1,1) + p(2,1,2) + p(1,1,3)}
    + 2·2·{p(3,2,0) + p(2,2,1) + p(1,2,2) + p(0,2,3)}
    + 3·2·{p(2,3,0) + p(1,3,1) + p(0,3,2)}
    + 4·2·{p(1,4,0) + p(0,4,1)}
    + 5·2·{p(0,5,0)}

    = 2·(325/832) + 4·(270/832) + 6·(90/832) + 8·(15/832) + 10·(1/832)

    = 2400/832 [call attempts per time unit] .

Call attempts arriving in states (i, j, 3) are blocked, so the number of blocked call attempts per time unit becomes:

n_b = Σ_j j·2·p(2 − j, j, 3) ,   j = 1, 2 ,

    = 1·2·p(1, 1, 3) + 2·2·p(0, 2, 3)

    = 2·(160/832) + 4·(80/832) = 640/832 .

Thus the call congestion becomes:

B = n_b/n_a = 640/2400 = 4/15 ,   q.e.d.

Updated: 2009-06-23
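The two-dimensional state probabilities of Questions 4 and 5 can be generated and checked with a short Python sketch of our own (the 832 scaling follows the exercise; variable names are assumptions):

```python
from fractions import Fraction
from math import comb

n, S = 3, 5
pk = [1, 5, 10, 10]    # relative p(k) from Question 2 (denominator 26)
rate = 2               # 2*gamma: rate of each Erlang-2 idle phase

# Relative q(i, j, k) scaled by 832: C(5-k, j) * (1/2)^(5-k) * p(k) -> C(5-k, j) * 2^k * pk[k]
q = {}
for k in range(n + 1):
    for j in range(S - k + 1):
        q[(S - k - j, j, k)] = comb(S - k, j) * 2 ** k * pk[k]
assert sum(q.values()) == 832

# Node balance for state (1, 2, 2): out-rates 2i (phase a->b), 2j (new calls), k (departures)
out = (rate * 1 + rate * 2 + 2) * q[(1, 2, 2)]
into = rate * 2 * q[(2, 1, 2)] + 3 * q[(0, 2, 3)] + rate * 3 * q[(1, 3, 1)]
assert out == into == 960

# Call congestion: attempts come from phase-b sources; those arriving when k = n are blocked
na = sum(j * rate * v for (i, j, k), v in q.items())
nb = sum(j * rate * v for (i, j, k), v in q.items() if k == n)
assert Fraction(nb, na) == Fraction(4, 15)
```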


Exercise 6.14

(Exam 2004)

Loss system and overflow traffic

We consider a full accessible loss system with n = 3 channels. The system is offered Pascal traffic, and in state i the arrival rate is (S + i)γ. The number of sources is S = 4. The arrival rate of an idle source is γ = 1/3. The mean holding time is chosen as time unit (1/μ = 1). All time intervals are exponentially distributed. The state of the system is defined as the number of busy channels. (Note: an Erlang-B table covering n = 1, ..., 10 (step 1) and A = 0, ..., 10 (step 0.25) was attached.)

1. Show that the offered traffic is A = 2 [erlang] and that the peakedness is 1.5.

2. Construct the state transition diagram and find, under the assumption of statistical equilibrium, the state probabilities p(i), i = 0, 1, ..., 3.

3. Find the time congestion E, the call congestion B, and the traffic congestion C.

4. Calculate the traffic congestion C by using Fredericks & Hayward's method.

5. Calculate the traffic congestion C using Sanders' method.

Assume that the above traffic is overflow traffic from an equivalent system with 4 channels which is offered 5 erlang.

6. Find the traffic congestion C using Wilkinson-Bretschneider's ERT method.

[Attached table] Erlang's B-formula E(1,n)(A), for number of servers (Antal betjeningssteder) n = 1, 2, ..., 10 (columns) and offered traffic A = 0.25, 0.50, ..., 10.00 erlang (rows, step 0.25). The full array of tabulated values is omitted here; the entries used in the solution are E_2(1.25) = 0.2577, E_2(1.50) = 0.3103, E_4(3.00) = 0.2061 and E_7(5.00) = 0.1205.
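A table like the attached one can be regenerated from the recursion formula for Erlang's B-formula. The following Python sketch (our own) spot-checks it against the table entries used in the solution:

```python
def erlang_b(n, A):
    # Erlang-B recursion: E_x = A*E_{x-1} / (x + A*E_{x-1}), with E_0 = 1
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

# Spot checks against the attached table
assert abs(erlang_b(1, 0.25) - 0.2000) < 5e-5
assert abs(erlang_b(2, 1.25) - 0.2577) < 5e-5
assert abs(erlang_b(4, 3.00) - 0.2061) < 5e-5
assert abs(erlang_b(7, 5.00) - 0.1205) < 5e-5
```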


Solution to Exercise 6.14 (Exam 2004)

This is the truncated Pascal system dealt with in Sec. 5.7.

Question 1:

The offered traffic is obtained by using the formulae for the Engset case with S = −4 sources (5.68) and γ = −1/3 (5.69). By using (5.10) and (5.11) we get:

β = γ/(1 + γ) = (−1/3)/(1 − 1/3) = −1/2 ,

A = S·β = (−4)·(−1/2) = 2 [erlang] .

In a similar way we find from (5.23):

Z = 1/(1 + γ) = 1/(1 − 1/3) = 3/2 = 1.5 .

Question 2:

The state transition diagram becomes (cf. Fig. 8.6) a birth and death process with arrival rate (4 + i)γ = (4 + i)/3 in state i (i.e. 4/3, 5/3, 6/3) and departure rate iμ = i. By using the cut equations we get the following relative and normalised state probabilities:

q(0) = 1        p(0) = 27/113 = 0.2389
q(1) = 4/3      p(1) = 36/113 = 0.3186
q(2) = 10/9     p(2) = 30/113 = 0.2655
q(3) = 20/27    p(3) = 20/113 = 0.1770
Sum  = 113/27   Sum  = 1.0000

Question 3:

Time congestion: The time congestion E is the proportion of time all channels are busy, and it is obtained from the state probabilities:

E = p(3) = 20/113 = 0.1770 .

Call congestion: The call congestion B is the proportion of call attempts blocked, and it is obtained as the ratio between the number of blocked call attempts per time unit and the total number of call attempts per time unit. We find:

B = (4 + 3)γ·p(3) / {(4 + 0)γ·p(0) + (4 + 1)γ·p(1) + (4 + 2)γ·p(2) + (4 + 3)γ·p(3)}

  = (7/3)·(20/113) / {(4/3)·(27/113) + (5/3)·(36/113) + (6/3)·(30/113) + (7/3)·(20/113)}

  = 140/608 = 35/152 = 0.2303 .

Alternatively we may use the formulae for the Engset case, remembering that S and γ are negative. For example, by using (5.45) we get:

B = (S − n)(1 + γ)E / {S + (S − n)γE}

  = (−4 − 3)(1 − 1/3)(20/113) / {−4 + (−4 − 3)(−1/3)(20/113)}

  = 35/152 ,   q.e.d.

We may also use the arrival theorem and find the call congestion as the time congestion in a system with one source less (S being negative), that is with S = 5 sources.

Traffic congestion: The traffic congestion C is the ratio between the blocked traffic and the offered traffic:

C = (A − Y)/A = (2 − Y)/2 .

The carried traffic Y is obtained from the state probabilities:

Y = Σ i·p(i) = 0·(27/113) + 1·(36/113) + 2·(30/113) + 3·(20/113) = 156/113 ,

C = (2 − 156/113)/2 = 35/113 = 0.3097 .

Also in this case we may use the formulae for the Engset system, for example the simple formula (5.46):

C = (S − n)/S · E = (−4 − 3)/(−4) · (20/113) = 35/113 ,   q.e.d.

Question 4:

Fredericks & Hayward's method is presented in Sec. 6.5. A traffic with mean value A = 2 erlang and peakedness Z = 1.5 offered to n = 3 channels has the same blocking probability as Erlang's loss system with A/Z = 4/3 erlang offered to n/Z = 2 channels. Thus the traffic congestion is obtained by using Erlang's B-formula: C = E_2(1.3333). In this case we can easily use the recursion formula (4.29) to find the numerical value:

E_0 = 1 ,

E_1 = (4/3)·1/(1 + (4/3)·1) = 4/7 ,

E_2 = (4/3)·(4/7)/(2 + (4/3)·(4/7)) = 8/29 .

Thus we find:

C = 8/29 = 0.2759 .

We may also use the table, e.g. with linear interpolation between the values for A = 1.25 and A = 1.50 for n = 2 channels. Then we find C = 0.2752, which is very close to the above value.

Question 5:

Sanders' method is described in Sec. 6.6.2. The variance of the offered traffic is:

Var = mean · peakedness = A·Z = 2 · 1.5 = 3 .

As the mean value (A = 2) is less than the variance (Var = 3), we add one channel with a constant traffic of 1 erlang. Then the total offered traffic is 3 erlang, and the number of channels is 4. The peakedness is now one, as the variance is still three. The traffic lost from this system is the same (approximate method) as the traffic lost from the original system. Using the table we find the lost traffic:

A_lost = 3 · E_4(3) = 3 · 0.2061 = 0.6183 .

As this traffic is lost from the original system, the traffic congestion becomes:

C = 0.6183/2 = 0.3092 .

Question 6:

Using the ERT method we have a system of 4 + 3 = 7 channels which is offered 5 erlang. The traffic lost becomes:

5 · E_7(5) = 0.6026 .

This is the traffic lost from the (original) system, and thus the traffic congestion of the original system becomes:

C = 0.6026/2 = 0.3013 .

When we offer 5 erlang to 4 channels, the overflow traffic has the mean value (6.15) m_1 = 1.992 and the peakedness (6.16) Z = 1.519, corresponding to a variance Var = 3.038. This is very close to the values in the exercise. The exact values of the parameters of the equivalent group for obtaining an overflow traffic with mean 2 and peakedness 1.5 are obtained by using a computer program:

A_x = 4.876 ,   n_x = 3.826 .

Using these values the ERT method yields the traffic congestion C = 0.3007.

In conclusion, we obtain the values:

BPP                  C = 0.3097
Sanders              C = 0.3092
ERT                  C = 0.3013 (exact equivalent group: C = 0.3007)
Fredericks-Hayward   C = 0.2759

We notice that the values are very similar, except for Fredericks & Hayward's method, which usually is one of the best. But this is an extreme case with very few channels and very high blocking. We also notice that only the BPP model works with different types of blocking probabilities, and that the traffic congestion C is the relevant measure. Historically only the time congestion E and the call congestion B have been considered, and they are quite different.
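The numbers above are easy to reproduce. The following Python sketch of our own collects the Fredericks & Hayward and Sanders evaluations together with the overflow moments (6.15) and (6.16); function and variable names are assumptions:

```python
def erlang_b(n, A):
    # Erlang-B recursion (4.29)
    E = 1.0
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

A, Z = 2.0, 1.5

# Fredericks & Hayward: offer A/Z erlang to n/Z = 2 channels
C_fh = erlang_b(2, A / Z)
assert abs(C_fh - 8 / 29) < 1e-12               # C = 0.2759

# Sanders: one extra channel carrying 1 erlang constant traffic -> 3 erlang on 4 channels
C_sa = 3 * erlang_b(4, 3.0) / A
assert abs(C_sa - 0.3092) < 1e-4

# Overflow moments of 5 erlang offered to 4 channels, (6.15)-(6.16)
Ax, nx = 5.0, 4
m1 = Ax * erlang_b(nx, Ax)                      # mean of the overflow traffic
Zo = 1 - m1 + Ax / (nx + 1 - Ax + m1)           # peakedness
assert abs(m1 - 1.992) < 1e-3 and abs(Zo - 1.519) < 1e-3
```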
Updated: 2008-03-27


Exercise 7.1

(exam 1983)

LOSS SYSTEM WITH MULTIPLE ACCESSIBILITY

We consider a loss system with 3 identical servers, serving two different types of customers, which arrive according to Poisson processes with intensities:

type 1:  λ1 [customers/time unit],
type 2:  λ2 [customers/time unit].

Both types of customers have the same exponentially distributed service time distribution with mean value m = 1/μ [time units]. Customers of type 1 have full accessibility to all three servers. Customers of type 2 are blocked if more than one server is busy at the arrival time. The state of the system is defined as the total number of customers being served.

1. Construct a one-dimensional state transition diagram for this system.

2. Find, under the assumption of statistical equilibrium, the state probabilities of the system.

3. Find, expressed by the state probabilities, the call congestion for customers of type 1 and type 2.

We now define the state of the system as (i, j), where i is the number of type 1 customers being served, and j is the number of type 2 customers being served.

4. Construct a two-dimensional state transition diagram for this system.

5. Assume that the state probabilities are known, and find the traffic carried by customers of type 2 when a total of (i + j =) 0, 1, 2, or 3 customers are being served.


Solution to exercise 7.1 Background: The model considered can be applied if we want to give two trac streams dierent service levels on the same channel group, i.e. one trac stream higher priority than another trac stream. This principle, which is called trunk reservation, is dierent from the principle class limitation, where it is the total number of calls of a given type, which decides whether a new call attempt of this type can be accepted. One example is incoming (type 1) and outgoing (type 2) trac between a PABX (Private Automatic Branch eXchange) of a company and the public telephone exchange. Another application is for cellular systems, where we want to give priority to hand-over calls over new call attempts (guard channels). Question 1:
 i 

1 + 2

 q i 

1 + 2

 q i 

 q 

Question 2: The conditions for statistical equilibrium will always be fulfilled (loss system with a finite number of states), and we get by applying the cut equations and expressing all state probabilities by p(0):

p(1) = ((λ1 + λ2)/μ) · p(0) ,

p(2) = ((λ1 + λ2)/(2μ)) · p(1) = ((λ1 + λ2)²/(2 μ²)) · p(0) ,

p(3) = (λ1/(3μ)) · p(2) = (λ1 (λ1 + λ2)²/(3! μ³)) · p(0) ,

where p(0) is obtained from the normalisation condition:

p(0) + p(1) + p(2) + p(3) = 1 .

Question 3:

As the arrival process is a Poisson process, we have call congestion B = time congestion E = traffic congestion C (PASTA property):

B1 = E1 = C1 = p(3) ,

B2 = E2 = C2 = p(2) + p(3) .

Question 4:

(Figure: two-dimensional state transition diagram with the nine states (0,0), (1,0), (2,0), (3,0), (0,1), (1,1), (2,1), (0,2), (1,2). Type 1 arrivals (rate λ1) increase i and are accepted when i + j < 3; type 2 arrivals (rate λ2) increase j and are accepted when i + j < 2; the departure rates are i μ and j μ.)

We notice that this state transition diagram is not reversible (cf. Sec. 7.2). We have to solve 9 linear equations with 9 unknowns to find the state probabilities. When the mean holding time is the same for all types of calls, we can reduce the state transition diagram to one dimension. But when the mean holding times are different, we have to solve the multi-dimensional state transition diagram.

Question 5:

In state (i, j) the carried traffic of type 1 is i erlang and the carried traffic of type 2 is j erlang. Therefore, we have:

p(i + j = 0) = p(0, 0) ,
Type 1: Y01 = 0 ,   Type 2: Y02 = 0 .

p(i + j = 1) = p(1, 0) + p(0, 1) ,
Type 1: Y11 = p(1, 0) ,   Type 2: Y12 = p(0, 1) .

p(i + j = 2) = p(2, 0) + p(1, 1) + p(0, 2) ,
Type 1: Y21 = 2 p(2, 0) + 1 p(1, 1) ,   Type 2: Y22 = 1 p(1, 1) + 2 p(0, 2) .

p(i + j = 3) = p(3, 0) + p(2, 1) + p(1, 2) ,
Type 1: Y31 = 3 p(3, 0) + 2 p(2, 1) + 1 p(1, 2) ,   Type 2: Y32 = 1 p(2, 1) + 2 p(1, 2) .

The total carried traffic thus becomes:

Type 1: Y1 = Y01 + Y11 + Y21 + Y31 ,
Type 2: Y2 = Y02 + Y12 + Y22 + Y32 .

As a control, we of course have (cf. Question 3):

Y1 = A1 (1 − B1) ,   Y2 = A2 (1 − B2) .

p(i + j = x) corresponds to p(x) in the one-dimensional model (cf. Question 1).
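The symbolic results of Questions 2 and 3 are easy to check numerically. A minimal sketch, assuming the example values λ1 = λ2 = μ = 1 (these concrete rates are an assumption; the exercise itself is symbolic):

```python
from fractions import Fraction as F

# Assumed example rates (not given in the exercise): lam1 = lam2 = mu = 1.
lam1, lam2, mu = F(1), F(1), F(1)

# Relative state probabilities from the cut equations of Question 2.
q = [F(1)]
q.append((lam1 + lam2) / mu * q[0])          # state 1
q.append((lam1 + lam2) / (2 * mu) * q[1])    # state 2
q.append(lam1 / (3 * mu) * q[2])             # state 3 (only type 1 accepted)

p = [x / sum(q) for x in q]                  # normalised state probabilities

B1 = p[3]                                    # type 1 blocked only in state 3
B2 = p[2] + p[3]                             # type 2 blocked when >= 2 busy
```

With these rates the chain gives p(0) = 3/17, B1 = 2/17 and B2 = 8/17; any other rates can be substituted in the same way.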
Updated: 2008-03-27


Exercise 7.4

(exam 1990)

LOSS SYSTEMS WITH MUTUAL OVERFLOW

We consider a loss system with 2 servers. Call attempts arrive according to a Poisson process with intensity 20 calls per hour. The holding times are exponentially distributed with mean value 180 seconds. In the following it will be sufficient to give numerical answers.

1. Find the offered traffic.

2. Calculate, by using the recursion formula for Erlang's B-formula, the congestion of the system (time congestion equals call congestion and traffic congestion). (Show the individual steps of the recursion.)

The above system is in the following called a subsystem. We now consider a system made up of two subsystems of the above type (total arrival rate equals 40 calls per hour, in total 4 fully accessible servers).

3. Construct the one-dimensional state transition diagram for the total system, and calculate under the assumption of statistical equilibrium the state probabilities p(i) (i = 0, 1, 2, 3, 4).

We now keep record of which subsystem a call (a server) belongs to. A call attempt first looks for an idle server in its own subsystem. If both servers in this subsystem are busy, it looks for an idle server in the other subsystem. If both servers in that subsystem are also busy, the call is blocked. The state of the system is denoted by (i, j), 0 ≤ i, j ≤ 2,

where i, respectively j, denotes the number of busy servers in subsystem 1, respectively 2.

4. Construct the two-dimensional state transition diagram for this system, using the following states.

(Figure: the nine states arranged in a 3 x 3 grid: top row 02, 12, 22; middle row 01, 11, 21; bottom row 00, 10, 20.)

5. Calculate the state probabilities of the two-dimensional state transition diagram by exploiting the symmetry and using the aggregated state probabilities calculated in Question 3. (All states in Question 5 with a certain number of busy servers are in Question 3 aggregated into a single state).


Solution to exercise 7.4

Question 1: The offered traffic A is equal to the average number of call attempts per mean holding time:

A = λ · (1/μ) = 20 [calls/hour] · (1/20) [hour] = 1 erlang .

There is one call attempt per mean holding time.

Question 2: We calculate Erlang's B-formula for n = 2 circuits and A = 1 erlang. We use the recursion formula in Chap. 4 (4.29):

En(A) = A · En−1(A) / (n + A · En−1(A)) ,   E0(A) = 1 .

For A = 1 we get:

E1(1) = (1 · 1)/(1 + 1 · 1) = 1/2 ,

E2(1) = (1 · 1/2)/(2 + 1 · 1/2) = 1/5 = 0.2 .

We may control this by using Erlang's B-formula (4.10) directly:

E2(A) = (A²/2) / (1 + A + A²/2) ,

E2(1) = (1/2)/(5/2) = 1/5 ,   q.e.d.
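The recursion is easy to verify mechanically. A small sketch of the Erlang B recursion (4.29), using exact rational arithmetic:

```python
from fractions import Fraction as F

def erlang_b(A, n):
    """Erlang's B-formula by the recursion E_0 = 1,
    E_n = A*E_{n-1} / (n + A*E_{n-1})."""
    E = F(1)
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

E1 = erlang_b(F(1), 1)   # 1/2
E2 = erlang_b(F(1), 2)   # 1/5 = 0.2, as found above
```

The same function is reused below for larger systems; the recursion is numerically stable for any offered traffic A.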

Question 3:

We consider a fully accessible loss system with n = 4 channels and A = 2 erlang. If we choose 1/μ = 3 minutes as time unit, we get λ = 2 and a birth-death state transition diagram with states 0-4, arrival rate 2 in states 0-3, and departure rate i in state i.

If we put the relative value of state zero equal to one, we find the relative probabilities:

qr(0) = 1 ,  qr(1) = 2 ,  qr(2) = 2 ,  qr(3) = 4/3 ,  qr(4) = 2/3 ,   Sum = 7 .

As the total sum must add to one, we get the following absolute probabilities:

p(0) = 3/21 ,  p(1) = 6/21 ,  p(2) = 6/21 ,  p(3) = 4/21 ,  p(4) = 2/21 .

This may also be obtained directly from the truncated Poisson distribution (4.9) (A = 2 erlang):

p(i) = (A^i / i!) / Σ_{ν=0}^{4} (A^ν / ν!) ,   i = 0, 1, ..., 4 .

Question 4:

It is understood that a call keeps the allocated circuit in the other subsystem, even though a circuit becomes idle in its own system.

(Figure: two-dimensional state transition diagram for the nine states (i, j), 0 ≤ i, j ≤ 2, with arrival intensities as described below and departure intensities i towards (i−1, j) and j towards (i, j−1).)
The departure (death) intensities are obvious. The total arrival intensity is in every state equal to 2 (in state (2, 2) all call attempts are blocked, and therefore a call attempt does not result in a state transition, so the arrow/intensity is not shown in the figure). If possible, the total arrival intensity is divided into 1 in each of the two directions. If there is only one possible direction, this direction gets the total intensity (rate) 2.

Extra: It is easy to generalise the model. For example, we can introduce restrictions so that a call from one group (subsystem) is only allowed to occupy a circuit in the other group if both circuits of that group are idle. The intensities (2,1) → (2,2) and (1,2) → (2,2) then become equal to one instead of two.

Question 5: Notice that the state transition diagram is not reversible. (This is e.g. seen by comparing the flow clockwise and the flow anti-clockwise in 4 neighbouring states.) As we both in Question 3 and Question 4 have an offered traffic of 2 erlang to 4 circuits with full accessibility, it is in both cases the same system we consider. Therefore, we have from Question 3:


p(0) = 3/21 = p(0, 0) ,
p(1) = 6/21 = p(0, 1) + p(1, 0) ,
p(2) = 6/21 = p(0, 2) + p(1, 1) + p(2, 0) ,
p(3) = 4/21 = p(2, 1) + p(1, 2) ,
p(4) = 2/21 = p(2, 2) .

Furthermore, because of symmetry we must have:

p(0, 1) = p(1, 0) ,   p(2, 1) = p(1, 2) ,   p(0, 2) = p(2, 0) .

The only unknowns are thus the states where in total two channels are busy:

p(0, 2) + p(1, 1) + p(2, 0) = 2 p(0, 2) + p(1, 1) = 6/21 .

By looking at the node balance equation for state (0, 2) we find:

4 p(0, 2) = p(0, 1) + p(1, 2) = 3/21 + 2/21 = 5/21 ,

p(0, 2) = p(2, 0) = 5/84 ,   p(1, 1) = 6/21 − 2 · 5/84 = 14/84 = 1/6 .

In summary, we thus have:

p(0, 2) = 5/84 ,   p(1, 2) = 8/84 ,   p(2, 2) = 8/84 ,
p(0, 1) = 12/84 ,  p(1, 1) = 14/84 ,  p(2, 1) = 8/84 ,
p(0, 0) = 12/84 ,  p(1, 0) = 12/84 ,  p(2, 0) = 5/84 .

We notice that the total sum of the probabilities is one. Another control also shows that all node balance equations are fulfilled.

Updated: 2010-03-24
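The nine state probabilities of Question 5 can be verified by checking all global (node) balance equations. A sketch, encoding the overflow routing described in the exercise (time unit = mean holding time, arrival rate 1 per subsystem):

```python
from fractions import Fraction as F

# State probabilities found in Question 5 (in units of 1/84).
p = {(0,0):12, (1,0):12, (2,0):5, (0,1):12, (1,1):14, (2,1):8,
     (0,2):5, (1,2):8, (2,2):8}
p = {s: F(v, 84) for s, v in p.items()}

def transitions(i, j):
    """Rates out of state (i, j): own subsystem first, then overflow."""
    out = {}
    def add(s, r):
        out[s] = out.get(s, 0) + r
    if i < 2: add((i + 1, j), 1)      # subsystem-1 arrivals (rate 1)
    elif j < 2: add((i, j + 1), 1)    # ... overflow to subsystem 2
    if j < 2: add((i, j + 1), 1)      # subsystem-2 arrivals (rate 1)
    elif i < 2: add((i + 1, j), 1)    # ... overflow to subsystem 1
    if i > 0: add((i - 1, j), i)      # departures, rate 1 per busy server
    if j > 0: add((i, j - 1), j)
    return out

# Global balance: total flow out of each state equals total flow in.
for s in p:
    out_flow = sum(transitions(*s).values()) * p[s]
    in_flow = sum(p[t] * r for t in p
                  for u, r in transitions(*t).items() if u == s)
    assert out_flow == in_flow, s
```

All nine balance equations hold exactly, confirming the /84 values above.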


Exercise 7.8

(exam 1998)

MOBILE COMMUNICATION SYSTEM WITH TWO TYPES OF TRAFFIC

A mobile communication system with S = 4 subscribers has access to n = 3 channels. All accepted calls occupy one channel during an exponentially distributed time interval with mean value 1/μ = 1 time unit. The system is operated as a loss system. There are two arrival processes:

a. Outgoing calls, generated by the S = 4 subscribers (PCT-II traffic). An idle source generates γ = 1/4 call attempts per time unit.

b. Incoming calls, arriving according to a Poisson process with arrival rate λ = 0.8 call attempts per time unit (PCT-I traffic). An incoming call which is accepted occupies both an idle channel and one of the idle sources, which thus becomes busy without making a call attempt itself. The number of busy sources thus always equals the number of busy channels.

1. Find the incoming offered traffic Ai, the outgoing offered traffic Ao, and the total offered traffic At (assume the traffic stream considered is alone).

2. Construct the one-dimensional state transition diagram for the system, when the state of the system is defined as the number of busy channels. Find the state probabilities under the assumption of statistical equilibrium.

3. Find the time congestion E, the call congestion B, and the traffic congestion C for both traffic streams. (The traffic congestion for outgoing calls is obtained from the total traffic congestion and the known traffic congestion for incoming calls.)

4. Show that the state transition diagram can be interpreted as a state transition diagram for a single PCT-II traffic stream, and find the equivalent number of sources (non-integral) and the arrival intensity per idle source.

5. We now distinguish between the two types of traffic. Construct the two-dimensional state transition diagram, where the state of the system (i, j) denotes that there are i incoming calls and j outgoing calls. Is the state transition diagram reversible?

The following question was not included at the exam.

6. Find the time congestion E, call congestion B, and traffic congestion C for both traffic streams, when we know the state probabilities of the two-dimensional state transition diagram.


Solution to Exercise 7.8

Question 1: It should be pointed out that the offered traffic is defined as the traffic carried when the capacity of the system is infinite and other traffic streams do not exist.

Ain = λ/μ = 0.8/1 = 0.8 [erlang] .

From the formulae (5.7)-(5.11):

γ = 1/4 ,   β = γ/μ = 1/4 ,

a = β/(1 + β) = 1/5 ,

Aout = S · a = 4/5 = 0.8 [erlang] .

The total offered traffic becomes:

At = Ain + Aout = 0.8 + 0.8 = 1.6 [erlang] .

Question 2: The arrival rate (intensity) in state i is:

λ(i) = (4 − i) γ + λ = (4 − i)/4 + 4/5 .

The departure rate (intensity) in state i is:

μ(i) = i μ = i .

Therefore, we get the following state transition diagram:

(Figure: one-dimensional state transition diagram with states 0, 1, 2, 3; arrival rates 36/20, 31/20, 26/20 from states 0, 1, 2, and departure rates 1, 2, 3 from states 1, 2, 3.)

The relative state probabilities become:

q(0) = 1 ,   q(1) = 1.8 ,   q(2) = 1.395 ,   q(3) = 0.6045 ,

which add to a total of 4.7995. The absolute state probabilities then become:

p(0) = 0.2084 = 2000/9599 ,
p(1) = 0.3750 = 3600/9599 ,
p(2) = 0.2907 = 2790/9599 ,
p(3) = 0.1260 = 1209/9599 .

Question 3: Due to the PASTA property, time, call and traffic congestion of the incoming traffic are equal:

Ein = Bin = Cin = p(3) = 0.1260 .

For the outgoing traffic we find:

Time congestion: Eout = p(3) = 0.1260 .

Call congestion:

Bout = 0.25 p(3) / {1 · p(0) + 0.75 p(1) + 0.50 p(2) + 0.25 p(3)} = 0.0473 .

Traffic congestion: We cannot find this directly, but indirectly, e.g. from the total carried traffic:

Yt = 0 · p(0) + 1 · p(1) + 2 · p(2) + 3 · p(3) = 1.3342 ,

Yout = Yt − Ain (1 − Cin) = 1.3342 − 0.8 · (1 − 0.1260) = 0.6350 .

Then we can find the traffic congestion:

Cout = (Aout − Yout)/Aout = 0.206 .
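The state probabilities and the three congestion measures for the outgoing stream can be reproduced with a few lines of exact arithmetic; a sketch:

```python
from fractions import Fraction as F

S, gamma, lam, mu = 4, F(1, 4), F(4, 5), F(1)

# Birth-death chain on the number of busy channels (= busy sources), n = 3.
q = [F(1)]
for i in range(3):
    q.append(q[i] * ((S - i) * gamma + lam) / ((i + 1) * mu))

p = [x / sum(q) for x in q]                    # [2000, 3600, 2790, 1209]/9599

E = p[3]                                       # time congestion, both streams
# Outgoing call congestion: blocked attempts / all attempts.
attempts = [(S - i) * gamma * p[i] for i in range(4)]
B_out = attempts[3] / sum(attempts)
# Outgoing traffic congestion via the total carried traffic.
Y_t = sum(i * p[i] for i in range(4))
Y_out = Y_t - lam / mu * (1 - E)               # subtract carried incoming traffic
C_out = (F(4, 5) - Y_out) / F(4, 5)
```

Exact values: E = 1209/9599, B_out ≈ 0.0472 and C_out ≈ 0.2063, matching the rounded figures above.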

Question 4: For a system with PCT-II traffic with a non-integral number of sources, the departure intensity in state i is given by i μ and the arrival intensity by (S − i) γ. A comparison with the figure in Question 2 shows us that:

μ = 1 ,   γ = 1/4 ,   S = 1.8/0.25 = 7.2 .

Question 5:

(Figure: two-dimensional state transition diagram with states (i, j), i incoming and j outgoing calls, i + j ≤ 3: 03; 02, 12; 01, 11, 21; 00, 10, 20, 30. Incoming arrivals have rate 0.8, outgoing arrivals rate (4 − i − j)/4, and the departure rates are i and j.)

The system is not reversible (cf. Fig. 7.2 in the textbook). For example, the circulation flows of the four states {00 → 10 → 11 → 01 → 00} are:

Clockwise: 1 · 1 · 0.8 · 1 = 0.8 ,

Counter-clockwise: 0.8 · 0.75 · 1 · 1 = 0.6 .

Question 6 (extra):

The incoming traffic arrives according to a Poisson process, and due to the PASTA property we have:

Ein = Bin = Cin = p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3) .

The outgoing traffic is from a finite number of sources, and we have for the time congestion:

Eout = Ein .

The call congestion is obtained by looking at the total number of call attempts per time unit nt and the number of blocked call attempts per time unit nb. We get:

nt = 4 · (1/4) {p(0, 0)}
   + 3 · (1/4) {p(1, 0) + p(0, 1)}
   + 2 · (1/4) {p(2, 0) + p(1, 1) + p(0, 2)}
   + 1 · (1/4) {p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3)} ,

nb = 1 · (1/4) {p(3, 0) + p(2, 1) + p(1, 2) + p(0, 3)} .

Thus we find:

Bout = nb/nt .

The traffic congestion Cout is obtained from the offered traffic Aout = 0.8 [erlang] and the carried traffic Yout:

Cout = (Aout − Yout)/Aout ,

where

Yout = 1 · {p(0, 1) + p(1, 1) + p(2, 1)} + 2 · {p(0, 2) + p(1, 2)} + 3 · {p(0, 3)} .

The congestion values will of course be the same as found in Question 3.
Updated: 2008-04-07


Exercise 7.10

(exam 2005)

CDMA cellular system with two service classes

We consider a lost-calls-cleared system with n = 5 channels. Two types of calls are offered to the system. They both have exponentially distributed service times with mean value 1/μ = 1 [time unit]. Both types arrive according to a Poisson process:

Type one: the arrival rate is λ1 = 1 call attempt per time unit. Each call requires d1 = 1 channel.

Type two: the arrival rate is λ2 = 0.5 call attempts per time unit. Each call requires d2 = 2 channels. If a call attempt does not obtain both channels, then it is lost.

1. Find the offered traffic for each type expressed in number of channels.

We define the state of the system as the total number of busy channels. A type one call attempt is always accepted in states 0 and 1. In state i it is blocked with probability:

b1(i) = C(i, 2)/C(5, 2) = i (i − 1)/20 ,   2 ≤ i ≤ 5 ,

where C(i, 2) denotes the binomial coefficient. As we usually define C(i, 2) = 0 when i < 2, this expression is valid for all states 0 ≤ i ≤ 5. Thus the probability of accepting a type one call in state i becomes:

a1(i) = 1 − b1(i) ,   0 ≤ i ≤ 5 .

A type two call attempt behaves like two single-channel calls and is only accepted if both channels are obtained. Thus the probability of accepting a type two call in state i becomes:

a2(i) = (1 − b1(i)) (1 − b1(i+1)) ,   0 ≤ i ≤ 5 .

2. Show that the arrival rates of accepted type one calls, respectively type two calls, as a function of the state become:

λ1(0) = λ1             λ2(0) = λ2
λ1(1) = λ1             λ2(1) = (90/100) λ2
λ1(2) = (9/10) λ1      λ2(2) = (63/100) λ2
λ1(3) = (7/10) λ1      λ2(3) = (28/100) λ2
λ1(4) = (4/10) λ1      λ2(4) = 0
λ1(5) = 0              λ2(5) = 0

We now define the two-dimensional state of the system as {(i, j), i + j ≤ 5}, where i (0 ≤ i ≤ 5) is the number of channels occupied by type one calls, and j (j = 0, 2, 4) is the number of channels occupied by type two calls. The above arrival rates, which are functions of the total number of busy channels, are still valid.

3. Construct the state transition diagram, using the following structure of states:

(Figure: states arranged in three rows: 04, 14; 02, 12, 22, 32; 00, 10, 20, 30, 40, 50.)

4. Show that the state transition diagram is reversible.

5. Find the state probabilities (given: p(0, 0) = 20000/78342).

6. Find the carried traffic and the traffic congestion for each traffic type.

7. Show that the call congestion for a type of call is equal to the traffic congestion for the same type of call.


Solution to exercise 7.10

(exam 2005)

Background: In third generation (3G) cellular communication systems based on CDMA (Code Division Multiple Access) the capacity of a cell depends on the number of active users in both the own cell and neighbouring cells (or more correctly: the power used for transmitting the signals). Assume that the nominal capacity of a cell is n channels. Then call attempts may with a positive probability be blocked even if less than n channels are occupied. This is a multi-dimensional system (Chapter 7) with limited accessibility (Chapter 6).

Question 1: As we have PCT-I traffic, we get:

A1 = (λ1/μ) · d1 = 1 · 1 = 1 [erlang channel] ,

A2 = (λ2/μ) · d2 = (1/2) · 2 = 1 [erlang channel] .

Question 2: We have chosen the same blocking probabilities as for Erlang's interconnection formula = Erlang's ideal grading, where we choose k = 2 channels at random out of n = 5. If i channels are busy, then b1(i) is the probability that both channels we choose are among these i busy channels. We directly find:

b1(0) = 0       a1(0) = 1
b1(1) = 0       a1(1) = 1
b1(2) = 1/10    a1(2) = 9/10
b1(3) = 3/10    a1(3) = 7/10
b1(4) = 6/10    a1(4) = 4/10
b1(5) = 1       a1(5) = 0

Introducing

a2(i) = 1 − b2(i) = (1 − b1(i)) (1 − b1(i+1)) ,

which means that a two-channel call has the same blocking as two successive single-channel calls, we find the following state-dependent acceptance probabilities a2(i) and blocking probabilities b2(i) for calls requiring two channels:

a2(0) = 1         b2(0) = 0
a2(1) = 90/100    b2(1) = 10/100
a2(2) = 63/100    b2(2) = 37/100
a2(3) = 28/100    b2(3) = 72/100
a2(4) = 0         b2(4) = 1
a2(5) = 0         b2(5) = 1

From:

λ1(i) = {1 − b1(i)} λ1 ,   λ2(i) = {1 − b1(i)}{1 − b1(i+1)} λ2 ,

we get the result shown in the text of the exercise.
Question 3: We nd the following state transition diagram:


4   10 E 04 ' 14   1 T T

63 200

9 7 4     c 10 c 10 10 E E E

7 50

1 2

9 7 4       c 1 c 1 c 10 c 10 10 E E E E E 00 ' 10 ' 20 ' 30 ' 40 ' 50      

02 ' 12 ' 22 ' 32     1 2 3 T T T T 1


9 20

63 200

7 50

Question 4: From the state transition diagram it is seen that the ow clockwise is equal to the ow counter-clockwise for all four squares, so the process is reversible. Introducing 1 b2 (i) = {1 b1 (i)}{1 b1 (i+1} we have in general as shown in the following gure:

516
. . . . .. . .. . . . . . . . . . . . . . . . . . .

INDEX
. . .
. . . . . . . . . . . . . . . . ... . .. .. . .

. . .

{1b  1 (i+j +2)} 1 ..    .. ................. ................ . ............................................................................ ....................................................................... ... ........................................... i, j +2 ..... i+1, j +2 ..... . . .................. ................... .. .............. . .............. .. ........................................................................ . ........................................................................ . . .... ..  .   .  . . . . . . . . . . .. .. . . . .. . .. . . . .. .. . . . . . . . (i + 1) 1 . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... .. .. . .

. . . . .. . .. . . . . . . . . . . . . . . . . . .

. . .

. . . . . . . . . . . . . . . . .. . . .. .. . . .

. . .

{1b2 (i+j)} 2

j +2 2 2

{1b2 (i+j +1)} 2

........................................... i, j . . ............................................ ... . . .


. .. .. . .. . . . . . . . . . . . . . . . .

. ............................................................................ ....................................................................... ... . ........................................................................... ........................................................................... .. ..

{1b1 (i+j)} 1 


  . . . . ..
.. . ... . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... .. .. . .

j +2 2 2


i+1, j
. . . . . . . . . . . . . . . . . . . .. .. .. . . .

.. ................. ................ . . .. ................. ................

. . .

. . . . . . . . . . . . . . . . . .. . .. . . .

(i + 1) 1

. . .

. . .

. . .

Question 5: We notice that the system does not have product form. By choosing q(0, 0) = 20 000 we find the following relative state probabilities using local balance equations (reversibility):

          i=0     i=1     i=2    i=3   i=4   i=5   Total
j = 4     1575     630                              2205
j = 2    10000    9000    3150    420              22570
j = 0    20000   20000   10000   3000   525   42   53567
Total    31575   29630   13150   3420   525   42   78342

As the total sum adds to 78342, we obtain the state probabilities by dividing all terms by this constant.
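The table can be generated directly from the local balance equations; a sketch in exact arithmetic, starting from q(0, 0) = 20000 and stepping right with type-one arrivals and up with type-two arrivals:

```python
from fractions import Fraction as F

lam1, lam2 = F(1), F(1, 2)
b1 = [F(0), F(0), F(1, 10), F(3, 10), F(6, 10), F(1)]
a2 = [(1 - b1[i]) * (1 - b1[i + 1]) for i in range(5)] + [F(0)]

q = {(0, 0): F(20000)}
for j in (0, 2, 4):
    if j > 0:
        # vertical local balance: type-two arrival up, (j/2) departures down
        q[(0, j)] = q[(0, j - 2)] * a2[j - 2] * lam2 / (j // 2)
    i = 0
    while i + j < 5:
        # horizontal local balance: type-one arrival right, (i+1) departures left
        q[(i + 1, j)] = q[(i, j)] * (1 - b1[i + j]) * lam1 / (i + 1)
        i += 1

total = sum(q.values())                         # 78342
Y1 = sum(i * v for (i, j), v in q.items()) / total
Y2 = sum(j * v for (i, j), v in q.items()) / total
C1, C2 = 1 - Y1, 1 - Y2    # offered traffic is 1 erlang channel for each type
```

This reproduces the total 78342 and the congestion values C1 = 9842/78342 and C2 = 24382/78342 found in Question 6.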

Question 6: The carried traffic is obtained from the marginal state probabilities:

Y1 = Σ_{i=0}^{5} i · p(i, ·)
   = (1 · 29630 + 2 · 13150 + 3 · 3420 + 4 · 525 + 5 · 42)/78342
   = 68500/78342
   = 0.8744 .

As the offered traffic is A1 = 1 [erlang] we find the traffic congestion:

C1 = (A1 − Y1)/A1 = 9842/78342 = 0.1256 .

In a similar way we find:

Y2 = Σ_{j=0}^{5} j · p(·, j)
   = (2 · 22570 + 4 · 2205)/78342
   = 53960/78342
   = 0.6888 ,

C2 = (A2 − Y2)/A2 = 24382/78342 = 0.3112 .

Question 7: To find the call congestion we have to sum over all states. For type one the number of call attempts blocked per time unit, n1, becomes, using the blocking probabilities from Question 2 (taking the states row by row):

n1/λ1 = (1/10) p(2,0) + (3/10) p(3,0) + (6/10) p(4,0) + 1 · p(5,0)
      + (1/10) p(0,2) + (3/10) p(1,2) + (6/10) p(2,2) + 1 · p(3,2)
      + (6/10) p(0,4) + 1 · p(1,4) .

Inserting the state probabilities we find:

78342 · n1/λ1 = { (1/10) · 10000 + (3/10) · 3000 + (6/10) · 525 + 1 · 42 }
              + { (1/10) · 10000 + (3/10) · 9000 + (6/10) · 3150 + 1 · 420 }
              + { (6/10) · 1575 + 1 · 630 } ,

n1/λ1 = 9842/78342 .

As the number of call attempts offered per time unit is λ1 = 1, we get:

B1 = 9842/78342 = C1 ,   q.e.d.

Thus we get the same value as for the traffic congestion C1 obtained in Question 6. In a similar way we find for traffic stream 2 the average number of call attempts blocked per time unit (taking the states row by row):

n2/λ2 = (1/10) p(1,0) + (37/100) p(2,0) + (72/100) p(3,0) + 1 · p(4,0) + 1 · p(5,0)
      + (37/100) p(0,2) + (72/100) p(1,2) + 1 · p(2,2) + 1 · p(3,2)
      + 1 · p(0,4) + 1 · p(1,4) ,

78342 · n2/λ2 = { (1/10) · 20000 + (37/100) · 10000 + (72/100) · 3000 + 1 · 525 + 1 · 42 }
              + { (37/100) · 10000 + (72/100) · 9000 + 1 · 3150 + 1 · 420 }
              + { 1 · 1575 + 1 · 630 } ,

n2/λ2 = 24382/78342 .

Thus we get the same value as for the traffic congestion obtained in Question 6:

B2 = 24382/78342 = 0.3112 = C2 ,   q.e.d.

We may also find the call congestion values from the global state probabilities because the arrival process is a Poisson process. In fact, due to the PASTA property the traffic, call, and time congestion are equal (with the proper definition of time congestion).
Updated: 2008-04-03

Additional discussion on global state probabilities

From the above we get the following global state probabilities:

p(0) = p(0, 0) = 20000/78342 ,
p(1) = p(1, 0) = 20000/78342 ,
p(2) = p(2, 0) + p(0, 2) = 20000/78342 ,
p(3) = p(3, 0) + p(1, 2) = 12000/78342 ,
p(4) = p(4, 0) + p(2, 2) + p(0, 4) = 5250/78342 ,
p(5) = p(5, 0) + p(3, 2) + p(1, 4) = 1092/78342 .

We may obtain the global state probabilities in an alternative way by:

1. calculating the global state probabilities assuming full accessibility;

2. down-scaling the state probabilities by the blocking probabilities for single-slot traffic, as a two-slot call behaves like 2 single-slot calls.

This is valid because the blocking probabilities only depend on the global state probabilities, and not upon the number of calls of a given type. Without state-dependent blocking we have the state transition diagram shown in the following figure:

(Figure: the same grid of states 00-50, 02-32, 04-14, with the unreduced arrival rates λ1 = 1 and λ2 = 1/2.)

By choosing q(0, 0) = 120 we find the relative state probabilities using local balance equations (reversibility):

          i=0    i=1    i=2    i=3   i=4   i=5   Total
j = 4      15     15                               30
j = 2      60     60     30     10                160
j = 0     120    120     60     20     5     1    326
Total     195    195     90     30     5     1    516

The relative global state probabilities become:

p(0) = p(0, 0) = 120 ,
p(1) = p(1, 0) = 120 ,
p(2) = p(2, 0) + p(0, 2) = 120 ,
p(3) = p(3, 0) + p(1, 2) = 80 ,
p(4) = p(4, 0) + p(2, 2) + p(0, 4) = 50 ,
p(5) = p(5, 0) + p(3, 2) + p(1, 4) = 26 .

The scaling factors are shown below together with the resulting state probabilities:

State   Relative      Down-scaling             Relative      Absolute
        probability   factor                   probability   probability
  0        120        1                           120        20000/78342
  1        120        1                           120        20000/78342
  2        120        1                           120        20000/78342
  3         80        9/10                         72        12000/78342
  4         50        (9/10)(7/10) = 63/100       63/2        5250/78342
  5         26        (9/10)(7/10)(4/10)         819/125      1092/78342

We observe that we get the same result as above. For a Poisson arrival process we can find the call congestion from the global state probabilities:

B1 = 0 · p(0) + 0 · p(1) + (1/10) p(2) + (3/10) p(3) + (6/10) p(4) + 1 · p(5)
   = (0 + 0 + 2000 + 3600 + 3150 + 1092)/78342 ,

B1 = 9842/78342 ,   q.e.d.

For type 2 we get:

B2 = 0 · p(0) + (1/10) p(1) + (37/100) p(2) + (72/100) p(3) + 1 · p(4) + 1 · p(5)
   = (0 + 2000 + 7400 + 8640 + 5250 + 1092)/78342 ,

B2 = 24382/78342 ,   q.e.d.

Due to the PASTA property we of course get identical values for time, call, and traffic congestion. For non-Poisson arrival processes (e.g. Engset and Pascal) we cannot calculate time, call and traffic congestion from the global state probabilities, only the time congestion.
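The down-scaling procedure above can be verified numerically; a sketch, scaling state x by the cumulative product of the single-slot acceptance probabilities:

```python
from fractions import Fraction as F

# Full-accessibility relative global state probabilities (from the table above).
q_full = [F(120), F(120), F(120), F(80), F(50), F(26)]
b1 = [F(0), F(0), F(1, 10), F(3, 10), F(6, 10), F(1)]

# Down-scale state x by the probability that the x accepted single-slot
# "sub-calls" were all admitted: the running product of (1 - b1).
scale, q = F(1), []
for x, qx in enumerate(q_full):
    q.append(qx * scale)
    scale *= 1 - b1[x]

total = sum(q)
p = [v / total for v in q]
```

The result is exactly p = [20000, 20000, 20000, 12000, 5250, 1092]/78342, the global state probabilities found above.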
Updated: 2008-04-03

Solution by the generalized algorithm

The exercise is from year 2005. The generalized algorithm had not been published at that time, but it is in fact the most appropriate algorithm for this problem. Let us first calculate the global state probabilities without state-dependent blocking. We of course get the same as by the convolution algorithm above:

State x   Type 1 q1(x)   Type 2 q2(x)   Total q(x)   p(x)
   0           0              0              1        120/516
   1           1              0              1        120/516
   2          1/2            1/2             1        120/516
   3          1/3            1/3            2/3        80/516
   4          1/6            1/4            5/12       50/516
   5          1/12           2/15          13/60       26/516
 Total                                    516/120

We may modify these state probabilities to take account of the state-dependent blocking as above. A more elegant solution is, however, to modify the algorithm so that formula (7.40) becomes:

p_i(x) = { (d_i A_i)/(x Z_i) · p(x − d_i) + ((x − d_i)/x) · ((Z_i − 1)/Z_i) · p_i(x − d_i) } · {1 − b_{d_i}(x − d_i)} .

For Poisson arrival processes (Z = 1) the second term becomes zero and we get:

p_i(x) = (d_i A_i / x) · p(x − d_i) · {1 − b_{d_i}(x − d_i)} .

The results are shown in the following table:

State x   q1(x)     q2(x)      q(x)        p1(x)          p2(x)          p(x)
   0        0         0          1             0              0        20000/78342
   1        1         0          1        20000/78342         0        20000/78342
   2       1/2       1/2         1        10000/78342    10000/78342   20000/78342
   3       3/10      3/10       3/5        6000/78342     6000/78342   12000/78342
   4      21/200    63/400     21/80       2100/78342     3150/78342    5250/78342
   5      21/1000   84/2500   273/5000      420/78342      672/78342    1092/78342

From the table we find the carried traffic of each type:

Y_i = Σ_{x=0}^{5} x · p_i(x) ,   i = 1, 2 .

We find the same results as in Question 6. Due to the Poisson arrival process (PASTA property), this is equal to the call congestion and time congestion. The call congestion may always be obtained from the traffic congestion by using (7.46). For systems with limited accessibility the time congestion is obtained by summing over all states the global state probability multiplied by the blocking probability in that state:

E1 = Σ_{i=0}^{5} b1(i) p(i) ,   E2 = Σ_{i=0}^{5} b2(i) p(i) .

For non-Poisson arrivals time, call and traffic congestion will of course be different.
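The Poisson form of the modified recursion can be implemented in a few lines; a sketch reproducing the table above (exact arithmetic):

```python
from fractions import Fraction as F

n = 5
b1 = [F(0), F(0), F(1, 10), F(3, 10), F(6, 10), F(1)]
b2 = [F(0), F(1, 10), F(37, 100), F(72, 100), F(1), F(1)]
A1, d1 = F(1), 1          # Poisson, single-channel calls
A2, d2 = F(1, 2), 2       # Poisson, two-channel calls (1/2 connection-erlang)

q1, q2, q = [F(0)] * (n + 1), [F(0)] * (n + 1), [F(0)] * (n + 1)
q[0] = F(1)
for x in range(1, n + 1):
    # p_i(x) = (d_i*A_i/x) * p(x-d_i) * {1 - b_{d_i}(x-d_i)}
    q1[x] = d1 * A1 / x * q[x - d1] * (1 - b1[x - d1])
    if x >= d2:
        q2[x] = d2 * A2 / x * q[x - d2] * (1 - b2[x - d2])
    q[x] = q1[x] + q2[x]

total = sum(q)
Y1 = sum(x * q1[x] for x in range(n + 1)) / total
Y2 = sum(x * q2[x] for x in range(n + 1)) / total
```

This reproduces the q-columns of the table and the carried traffic Y1 = 68500/78342 and Y2 = 53960/78342 of Question 6.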
Updated: 2008-04-07


Exercise 7.11

(Exam 2008)

Service-integrated system with two service classes

We consider a blocked-calls-cleared system with n = 5 channels. This system is offered two traffic streams:

Stream one: This is PCT-I traffic with Poisson arrival rate λ1 = 2 call attempts per time unit. The mean service time is 1/μ1 = 0.5 [time unit]. Each call requires d1 = 2 channels. If a call attempt does not obtain both channels simultaneously, then it is blocked.

Stream two: This is PCT-II Engset traffic offered by S = 6 sources. When a source is idle it generates γ = 1 call attempt per time unit, and the mean service time is 1/μ2 = 1 time unit. Each call requires d2 = 1 channel.

1. Find for each stream the offered traffic and peakedness expressed in number of connections and in number of channels.

2. Find the one-dimensional state probabilities for each of the two traffic streams.

3. Find the global state probabilities of the system using the convolution algorithm.

4. Find the global state probabilities of the system using the generalized state-space based algorithm.

5. Find the time congestion E, traffic congestion C, and call congestion B for both traffic streams.


Solution to exercise 7.11

Question 1: Offered traffic in the unit [number of connections]:

Stream 1 (PCT-I traffic):

A1 = λ1 s1 = 2 · (1/2) = 1 erlang [connections] ,   Z1 = 1 .

Stream 2 (PCT-II traffic):

A2 = S · β/(1 + β) = 6 · (1/2) = 3 erlang [connections] ,   Z2 = 1/(1 + β) = 1/2 ,

where β = γ/μ2 = 1. The peakedness for PCT-II traffic is for example given in (5.23).

Offered traffic in the unit [number of channels]:

Stream 1:

A1 = λ1 s1 d1 = 2 · (1/2) · 2 = 2 erlang [channels] ,   Z1 = d1 = 2 [channels] .

See Example 2.3.3 or Sec. 7.5.

Stream 2: The same as above (channels = connections).

Question 2: State probabilities for the Poisson stream alone (two-slot calls on n = 5 channels):

p(0) = 2/5 ,  p(1) = 0 ,  p(2) = 2/5 ,  p(3) = 0 ,  p(4) = 1/5 ,  p(5) = 0 .

State probabilities for the Engset stream alone:

p(0) = 1/63 ,  p(1) = 6/63 ,  p(2) = 15/63 ,  p(3) = 20/63 ,  p(4) = 15/63 ,  p(5) = 6/63 .

Question 3: Convolving the two streams we get the following result:

p(0) = 2/217 ,  p(1) = 12/217 ,  p(2) = 32/217 ,  p(3) = 52/217 ,  p(4) = 61/217 ,  p(5) = 58/217 .

Question 4: We of course get the same result as in Question 3:

State x   Poisson q1(x) = (2/x) q(x−2)   Engset q2(x) = {6 q(x−1) − (x−1) q2(x−1)}/x   Total q(x)
   0                  0                                     0                               1
   1                  0                                     6                               6
   2                  1                                    15                              16
   3                  4                                    22                              26
   4                  8                                    45/2                            61/2
   5                 52/5                                  93/5                            29
 Total                                                                                   217/2
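The table can be reproduced with the generalized (state-space based) recursion; a sketch assuming the BPP form of (7.40), where for the Engset stream A2/Z2 = 6 and (Z2 − 1)/Z2 = −1 (this algebraic form is inferred to match the table, not quoted from the textbook):

```python
from fractions import Fraction as F

n = 5
# Stream 1: Poisson, d1 = 2, A1 = 1 [connections], Z1 = 1.
# Stream 2: Engset, d2 = 1, A2 = 3 [connections], Z2 = 1/2.
A1, A2, Z2 = F(1), F(3), F(1, 2)

q1, q2, q = [F(0)] * (n + 1), [F(0)] * (n + 1), [F(0)] * (n + 1)
q[0] = F(1)
for x in range(1, n + 1):
    if x >= 2:
        q1[x] = 2 * A1 / x * q[x - 2]          # Poisson two-slot term
    # Engset term of the generalized recursion:
    q2[x] = (A2 / Z2) / x * q[x - 1] + (x - 1) * (Z2 - 1) / (Z2 * x) * q2[x - 1]
    q[x] = q1[x] + q2[x]

total = sum(q)                        # 217/2
p = [v / total for v in q]            # [2, 12, 32, 52, 61, 58] / 217

E1 = p[4] + p[5]                      # two-slot Poisson stream: E = B = C
Y2 = sum(x * q2[x] for x in range(n + 1)) / total
C2 = (F(3) - Y2) / F(3)
B2 = 2 * C2 / (1 + C2)                # Engset: B = (1+b)C/(1+bC), beta = 1
```

This reproduces the table exactly and gives the congestion values derived in Question 5 below.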

Question 5: Stream one is a two-slot Poisson arrival stream, for which the PASTA property is valid:

E1 = B1 = C1 = p(4) + p(5) = (61 + 58)/217 = 119/217 = 17/31 = 0.5484 .

Stream two is single-slot Engset traffic:

E2 = p(5) = 58/217 = 0.2673 .

The carried traffic can be obtained from the contributions p2(i) to the global state probabilities calculated above:

Y2 = (2/217) · {1 · 6 + 2 · 15 + 3 · 22 + 4 · (45/2) + 5 · (93/5)} = 570/217 = 2.6267 ,

C2 = (A2 − Y2)/A2 = (3 − 570/217)/3 = 27/217 = 0.1244 .

The call congestion is obtained from the traffic congestion by the general formula (5.49) = (7.46):

B2 = (1 + β) C2 / (1 + β C2) = 2 C2/(1 + C2) = 27/122 = 0.2213 .

Updated: 2010-03-22


Exercise 9.19

(exam 1997)

QUEUEING SYSTEM M/M/3

We consider Erlang's classical queueing system M/M/3, having 3 servers and an unlimited number of queueing positions. Customers arrive according to a Poisson process with intensity λ = 2 customers per time unit, and the service time is exponentially distributed with intensity μ = 1 [time unit⁻¹]. We assume the queueing discipline is FCFS (FIFO). The state of the system is defined as the total number of customers in the system.

1. Find the offered traffic. Are the conditions for statistical equilibrium fulfilled?

2. Construct the state transition diagram, and find the state probabilities when the system is in statistical equilibrium.

3. Calculate the probability of waiting time (Erlang's C-formula) by using the recursive formula for Erlang's B-formula for calculating the C-formula. The individual steps of the recursion shall appear in the solution.

4. Find (a) the mean queue length at a random point of time, (b) the mean waiting time for all customers, and (c) the mean waiting time for customers experiencing a waiting time > 0.

5. (Advanced question) Assume that the three servers are hunted in sequential order and find the traffic carried by each of the three servers (use the numerical values from Question 3).

6. Draw a phase-transition diagram for the response time (service time + eventual waiting time), and find the mean value and form factor of this response time.



Solution to Exercise 9.19    (Exam 1997)

Question 1:

   A = λ·s = λ/μ = 2/1 = 2 [erlang].

The conditions for statistical equilibrium are fulfilled because A < n (9.3).

Question 2:

The state transition diagram becomes as follows (cf. Fig. 9.1): the arrival rate is λ = 2 in every state, and the departure rate in state i is min(i, 3)·μ, i.e. μ, 2μ, 3μ, 3μ, ... [diagram omitted].

State probability p(0) is obtained by using (9.4) and becomes:

   p(0) = [ Σ_{i=0}^{2} 2^i/i! + (2³/3!)·3/(3 − 2) ]^{−1} = [1 + 2 + 2 + 4]^{−1} = 1/9 .

The other state probabilities are given by (9.2):

   p(i) = (1/9)·2^i/i! ,            0 ≤ i ≤ 3,
   p(i) = (1/9)·2^i/(3!·3^{i−3}) ,  i ≥ 3.

Thus:

   p(0) = 1/9,    p(3) = 4/27,
   p(1) = 2/9,    p(4) = 8/81,
   p(2) = 2/9,    p(5) = 16/243,   etc.

Question 3:

The probability of experiencing waiting time is given by (9.9):

   E_{2,n}(A) = n·E_{1,n}(A) / [n − A·(1 − E_{1,n}(A))] .

In this case we get:

   p{W > 0} = E_{2,3}(2) = 3·E_{1,3}(2) / [3 − 2·(1 − E_{1,3}(2))] .

We obtain E_{1,3}(2) by means of the recursion formula (4.29):

   E_{1,x}(A) = A·E_{1,x−1}(A) / [x + A·E_{1,x−1}(A)] ,    E_{1,0}(A) = 1.

We get:

   E_{1,0}(2) = 1,
   E_{1,1}(2) = 2·1/(1 + 2·1) = 2/3 = 0.6667,
   E_{1,2}(2) = 2·(2/3)/(2 + 2·(2/3)) = 2/5 = 0.4000,
   E_{1,3}(2) = 2·(2/5)/(3 + 2·(2/5)) = 4/19 = 0.2105.

The probability of experiencing a positive waiting time thus becomes:

   p{W > 0} = E_{2,3}(2) = 3·(4/19)/(3 − 2·(1 − 4/19)) = 12/27 = 4/9 = 0.4444 .
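The recursion used above is easy to program. A minimal sketch (the helper names are our own) which reproduces these values:

```python
def erlang_b(A, n):
    """Erlang's B-formula E_{1,n}(A) by the recursion (4.29)."""
    E = 1.0                          # E_{1,0}(A) = 1
    for x in range(1, n + 1):
        E = A * E / (x + A * E)
    return E

def erlang_c(A, n):
    """Erlang's C-formula E_{2,n}(A) obtained from the B-formula, Eq. (9.9)."""
    E1 = erlang_b(A, n)
    return n * E1 / (n - A * (1.0 - E1))

print(erlang_b(2, 3), erlang_c(2, 3))   # 4/19 = 0.2105...  and  4/9 = 0.4444...
```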

Question 4:

The mean queue length is given by (9.12):

   L3 = E_{2,3}·A/(n − A) = (4/9)·2/(3 − 2) = 8/9 = 0.8889 .

The mean waiting time for all customers is given by (9.15):

   W3 = L3/λ = (8/9)/2 = 4/9 = 0.4444 .

The mean waiting time for customers which experience a positive waiting time is obtained from (9.17):

   w = W3/E_{2,3} = s/(n − A) = (4/9)/(4/9) = 1 .

Question 5:

The carried traffic per channel for sequential hunting is given in the advanced part of the textbook:

   b_i = y_i + (A/n)·(1 − y_i)·E_{2,n}(A) ,

where y_i is the traffic carried by the i'th channel in a loss system with sequential hunting, equal to the improvement function (4.13), y_i = F_{i−1}(A) = A·[E_{1,i−1}(A) − E_{1,i}(A)]:

   y1 = A·[E_{1,0}(2) − E_{1,1}(2)] = 2·(1 − 2/3) = 2/3,
   y2 = A·[E_{1,1}(2) − E_{1,2}(2)] = 2·(2/3 − 2/5) = 8/15,
   y3 = A·[E_{1,2}(2) − E_{1,3}(2)] = 2·(2/5 − 4/19) = 36/95.

Using the results obtained in Question 3 we get:

   b1 = 2/3 + (2/3)·(1/3)·(4/9) = 62/81 = 0.7654,
   b2 = 8/15 + (2/3)·(7/15)·(4/9) = 272/405 = 0.6715,
   b3 = 36/95 + (2/3)·(59/95)·(4/9) = 1444/2565 = 76/135 = 0.5629.

As a control we have b1 + b2 + b3 = 2 [erlang] = A.

Question 6:

We get the following phase diagram for the response time (it is convenient to put the service time before the waiting time, but it can easily be done in the right order): a first exponential phase with intensity μ = 1 (the service time), followed with probability 4/9 (the probability of waiting) by a second exponential phase with intensity nμ − λ = 1 (the waiting time of delayed customers); with probability 5/9 the response time ends after the first phase. In the notation of Sec. 2.3.3:

   p0 = 1,   p1 = 4/9,   p2 = 0;   q1 = 1,   q2 = 4/9,   q3 = 0.

Referring to the notation in Sec. 2.3.3 and in Fig. 2.10 we get the following results. Formula (2.89) yields:

   m1 = Σ_{i=1}^{2} q_i/λ_i = 1/1 + (4/9)/1 = 13/9 = 1.4444 .

The variance is obtained from (2.91):

   σ² = 2·Σ_{i=1}^{2} (q_i/λ_i)·Σ_{j=1}^{i} (1/λ_j) − m1²
      = 2·[1·1 + (4/9)·(1 + 1)] − (13/9)²
      = 34/9 − (13/9)²
      = 1.6914 ,

from which the form factor is obtained using (2.13):

   ε = 1 + σ²/m1² = 1 + 1.6914/(13/9)² = 1.8107 .
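As a check of these moments, a small sketch that represents the response time as a mixture: with probability 4/9 (the customer waits) it is the sum of two rate-1 exponential phases (an Erlang-2 distribution), otherwise a single rate-1 exponential phase:

```python
from fractions import Fraction as F

p_wait = F(4, 9)                 # P{W > 0} from Question 3
# Erlang-2 branch (two rate-1 phases): mean 2, second moment 6
# exponential branch (rate 1):         mean 1, second moment 2
m1 = p_wait * 2 + (1 - p_wait) * 1
m2 = p_wait * 6 + (1 - p_wait) * 2
form_factor = m2 / m1**2         # form factor = m2 / m1^2
print(m1, m2, float(form_factor))   # 13/9  34/9  1.8106...
```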

The above procedure is general. It is simpler to consider the response time distribution as a parallel combination of an Erlang-2 distribution and an exponential distribution, all phases having intensity one. The Erlang-2 distribution has mean value 2, and its second moment is given by (2.51), which becomes 6; this branch has weighting factor 4/9. The exponential distribution has mean value 1 and second moment 2; this branch has weighting factor 5/9. The response time then has mean value, second moment, and form factor as follows:

   m1 = (4/9)·2 + (5/9)·1 = 13/9 ,
   m2 = (4/9)·6 + (5/9)·2 = 34/9 ,
   ε  = m2/m1² = 1.8107 .

Updated: 2010-04-08



Exercise 9.21    (exam 2000)

TWO QUEUEING SYSTEMS IN PARALLEL

We consider a system composed of two subsystems, each made up of one server and one waiting position. Thus there may be at most 4 customers in the system. When a new customer arrives, he chooses the subsystem with the fewest customers. If there is the same number of customers in both subsystems, he chooses at random between the two subsystems. If both subsystems are fully occupied, the new customer is blocked (lost calls cleared). When a customer has chosen a subsystem, he stays in this subsystem. Customers arrive according to a Poisson process with arrival rate λ > 0. All holding times are exponentially distributed with mean value 1/μ (μ > 0). The state of the system can be described by the following 6 states:

   (0,0)   (1,0)   (1,1)   (2,0)   (2,1)   (2,2)

where the first index denotes the number of customers in the subsystem having most customers, and the second index denotes the number of customers in the other subsystem.

1. Construct the two-dimensional state transition diagram of the system. Is the state transition diagram reversible?
2. Set up the node balance equations of the system.

In the following we consider the case λ = μ = 1 [time unit⁻¹].

3. Find the state probabilities under the assumption of statistical equilibrium.
   (Hint: p(0,0) = 7/21 and p(1,1) = 3/21.)
4. What is the proportion of customers experiencing (a) immediate service, (b) waiting before service, (c) blocking?
5. When the system is in state (2,0), one server is idle even though a customer is waiting in the queue. Which benefits would be obtained by moving the waiting customer to the idle server?


Solution to Exercise 9.21    (exam 2000)

Question 1:

The state transition diagram has in all states an arrival intensity λ and a departure intensity equal to μ times the number of busy servers:

   arrivals (rate λ):  (0,0)→(1,0), (1,0)→(1,1), (1,1)→(2,1), (2,0)→(2,1), (2,1)→(2,2);
   departures:         (1,0)→(0,0) at μ, (2,0)→(1,0) at μ, (1,1)→(1,0) at 2μ,
                       (2,1)→(1,1) at μ, (2,1)→(2,0) at μ, (2,2)→(2,1) at 2μ.

The state transition diagram is not reversible, as we may e.g. transit from state (2,0) to (1,0), but not in the opposite direction.
Question 2:

The node balance equations become:

   (0,0):  λ·p(0,0) = μ·p(1,0)
   (1,0):  (λ+μ)·p(1,0) = λ·p(0,0) + 2μ·p(1,1) + μ·p(2,0)
   (2,0):  (λ+μ)·p(2,0) = μ·p(2,1)
   (1,1):  (λ+2μ)·p(1,1) = λ·p(1,0) + μ·p(2,1)
   (2,1):  (λ+2μ)·p(2,1) = λ·p(2,0) + λ·p(1,1) + 2μ·p(2,2)
   (2,2):  2μ·p(2,2) = λ·p(2,1)

Question 3:

With λ = μ = 1 and the hint p(0,0) = 7/21, p(1,1) = 3/21, the node equations give:

   (0,0):  p(1,0) = p(0,0) = 7/21,
   (1,1):  p(2,1) = 3·p(1,1) − p(1,0) = 9/21 − 7/21 = 2/21,
   (2,0):  p(2,0) = (1/2)·p(2,1) = 1/21,
   (2,2):  p(2,2) = (1/2)·p(2,1) = 1/21.

The result thus becomes, arranged in the same way as the state transition diagram:

                                      p(2,2) = 1/21
                   p(1,1) = 3/21      p(2,1) = 2/21
   p(0,0) = 7/21   p(1,0) = 7/21      p(2,0) = 1/21
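The state probabilities can also be obtained without the hint by solving the balance equations of Question 2 numerically. A small sketch with exact rational arithmetic (state names and transition rates follow the structure described in Question 1):

```python
from fractions import Fraction as F

states = ["00", "10", "11", "20", "21", "22"]
idx = {s: i for i, s in enumerate(states)}
lam = mu = F(1)

# transition rates of the state diagram from Question 1
rates = {("00", "10"): lam, ("10", "00"): mu, ("10", "11"): lam,
         ("11", "10"): 2 * mu, ("11", "21"): lam, ("20", "10"): mu,
         ("20", "21"): lam, ("21", "11"): mu, ("21", "20"): mu,
         ("21", "22"): lam, ("22", "21"): 2 * mu}

n = len(states)
M = [[F(0)] * (n + 1) for _ in range(n)]   # augmented system [B | 0]
for (a, b), r in rates.items():
    M[idx[a]][idx[a]] -= r                 # outflow of state a
    M[idx[b]][idx[a]] += r                 # inflow into state b
M[n - 1] = [F(1)] * n + [F(1)]             # replace one equation by sum p = 1

for col in range(n):                       # Gauss-Jordan elimination
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(n):
        if r != col and M[r][col] != 0:
            M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]

p = {s: M[idx[s]][n] for s in states}
print({s: str(21 * v) for s, v in p.items()})   # multiples of 1/21: 7, 7, 3, 1, 2, 1
```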

Question 4:

   p{immediate} = p(0,0) + p(1,0) + p(2,0) = 15/21,
   p{waiting}   = p(1,1) + p(2,1)          =  5/21,
   p{blocking}  = p(2,2)                   =  1/21.

Question 5:

We get a system with full accessibility: a birth-death process with states 0, 1, ..., 4, arrival rate λ = 1 in states 0-3, and departure rates μ, 2μ, 2μ, 2μ = 1, 2, 2, 2 in states 1-4. The state probabilities for this system are:

   p(0) = 8/23 = 168/483,
   p(1) = 8/23 = 168/483,
   p(2) = 4/23 =  84/483,
   p(3) = 2/23 =  42/483,
   p(4) = 1/23 =  21/483.
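These values can be reproduced with exact arithmetic (a sketch; birth rate 1 everywhere, death rates 1, 2, 2, 2):

```python
from fractions import Fraction as F

q = [F(1)]                       # relative state probabilities, q(0) = 1
for d in (1, 2, 2, 2):           # death rates in states 1..4 (two servers)
    q.append(q[-1] * F(1, d))    # birth rate is 1 in every state
total = sum(q)                   # 23/8
p = [x / total for x in q]
immediate = p[0] + p[1]          # at least one server idle
waiting = p[2] + p[3]            # both servers busy, a queueing position free
blocked = p[4]
print(immediate, waiting, blocked)   # 16/23 6/23 1/23
```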

In comparison with the system above we find the following changes:

   p{immediate}:  from 15/21 = 345/483 to 336/483,
   p{waiting}:    from  5/21 = 115/483 to 126/483,
   p{blocked}:    from  1/21 =  23/483 to  21/483.

Drawbacks:

   The probability of immediate service is reduced by 9/483.
   The probability of waiting is increased by 11/483.

Advantages:

   The blocking probability is (only) reduced by 2/483.

We thus obtain a better utilization of the system. Furthermore, we reduce the mean waiting time for calls which experience waiting from w = 1 to w = 2/3. Given that a customer experiences waiting, he arrives either in state 2 or in state 3. The probability of state 2 is twice the probability of state 3. Two out of three waiting customers thus arrive in state 2, where the mean waiting time is 1/2, whereas one out of three waiting customers arrives in state 3, where the mean waiting time is 1.

Revised: 2003-03-18



Exercise 9.22    (exam 2001)

WAITING TIME SYSTEM WITH HYSTERESIS

We consider a pure delay system where customers arrive according to a Poisson process with intensity λ = 2 customers per time unit. The service time is exponentially distributed with mean value 0.5 time units. We assume the queueing discipline is FCFS (FIFO).

1. Find the offered traffic.

The customers are served by one or two servers. There is always at least one server available. The other server opens (starts serving customers) when the queue length becomes two. When there are no more customers in the queue, the server which first becomes idle is closed. The queue length is defined as the number of waiting customers and does not include customers being served. We define the state of the system as (i, j), where i is the total number of customers in the system and j is the number of open servers. We thus obtain a state transition diagram with the states (0,1), (1,1), (2,1), (2,2), (3,2), (4,2), ... [figure omitted].

2. Construct the state transition diagram of the system.
3. Find the state probabilities under the assumption of statistical equilibrium.
   (Hint: p(0,1) = 2/7, p(2,2) = 1/14.)
4. Find the average queue length using the state probabilities.
5. Give the waiting time distribution of a customer who arrives in a state (i,2), i ≥ 3.
6. Give the waiting time distribution, in the form of a phase diagram, for a customer which arrives in state (1,1) and brings the system into state (2,1), and find the mean waiting time of this customer. (Hints: the next event takes place after a known time interval and is either a departure or an arrival. In the first case the waiting time terminates. In the second case the second server starts operating, and as the queue discipline is FCFS (FIFO), we know the remaining waiting time.)



Solution to Exercise 9.22    (exam 2001)

Question 1:

If we denote the mean service time by s, the offered traffic becomes:

   A = λ·s = 2 · 0.5 = 1 [erlang].

Question 2:

The state transition diagram has the states (0,1), (1,1), (2,1), (2,2), (3,2), (4,2), ... The arrival rate is always λ = 2: (0,1)→(1,1), (1,1)→(2,1), (2,1)→(3,2) (the second server opens), (2,2)→(3,2), and (i,2)→(i+1,2) for i ≥ 3. All states with one open server have departure rate μ = 2: (1,1)→(0,1) and (2,1)→(1,1). All states with two open servers have departure rate 2μ = 4: (2,2)→(1,1) (the server becoming idle closes), (3,2)→(2,2), and (i+1,2)→(i,2) for i ≥ 3.

Question 3:

The node balance equations yield (λ = μ = 2 events per time unit):
Node (0,1):

2 p(0, 1) = 2 p(1, 1) , p(1, 1) =


2 7

Node (2,1):

4 p(2, 1) = 2 p(1, 1) , p(2, 1) =


1 7

Node (2,2):

6 p(2, 2) = 4 p(3, 2) , p(3, 2) = p(4, 2) = p(5, 2) =


3 28 , 3 56 , 3 112 ,

INDEX

541

or in general: p(i, 2) =

3 , 7 2i1

i 3.

where the last equations are obtained by simple cut equations. The tail probabilities are a quotient series with factor 1/2. As a control we have:

p(0, 1) + p(1, 1) + p(2, 1) + p(2, 2) +


2 7

i=3 p(i, 2)
3 28 1 1 2

= = 1.

2 7

1 7

1 14

Question 4:

The average queue length is obtained from the state probabilities:

   L = 1·p(2,1) + Σ_{i=3}^{∞} (i−2)·p(i,2)
     = 1/7 + (3/7)·Σ_{k=1}^{∞} k·(1/2)^{k+1}
     = 1/7 + 3/7
     = 4/7 ,

where the sum is evaluated with the geometric series Σ_{k=1}^{∞} k·x^k = x/(1−x)², here with x = 1/2. (The most important step is the first equation, which is based on the definition.)

Question 5:

As we have the FCFS queueing discipline, a customer arriving in state (i,2), i ≥ 3, bringing the system into state (i+1,2), experiences an Erlang-(i−1) distributed waiting time, each phase having intensity 2μ = 4. (This answer is sufficient.)

Additional: If we consider an arbitrary one of the customers arriving in the states (i,2), i ≥ 3, then the waiting time becomes Cox-distributed with constant branching probability 1/2 after the second phase, all phases having intensity 4. (We have to exclude state (2,2), because the probability of this state is not twice the probability of state (3,2).) Customers arriving in state (3,2) leave (end waiting) after the second phase, customers arriving in state (4,2) after the third phase, etc. This Cox distribution is equivalent to a single exponential phase with intensity 4 in series with an exponential phase with intensity 2, as the latter is a weighted sum of Erlang-k distributions with geometric weight factors.
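The mean queue length L = 4/7 found in Question 4 can be checked by direct summation (a sketch; the geometric tail is truncated where it is numerically negligible):

```python
p21 = 1 / 7                                   # one customer waiting in state (2,1)
tail = sum((i - 2) * 3 / (7 * 2 ** (i - 1)) for i in range(3, 60))
L = p21 + tail
print(round(L, 10))                           # 0.5714285714 = 4/7
```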

Question 6:

We call the customer bringing the system into state (2,1) the tagged customer. The waiting time of the tagged customer terminates when the next event occurs. The next event is either a departure or an arrival. If a customer departs, then the tagged customer starts service. If a new customer arrives, then the second server opens and the tagged customer is served (FCFS). So in both cases the next event ends the waiting of the tagged customer, and the waiting time is exponentially distributed with intensity λ + μ = 4:

   F(t) = 1 − e^{−4t} ,   t ≥ 0.

The phase diagram is thus a single exponential phase with intensity λ + μ = 4, and the mean waiting time of the tagged customer is 1/(λ+μ) = 1/4 [time units].

Updated: 2005-03-03



Exercise 9.25    Preferential traffic    (Exam 2006)

We consider a system with full accessibility and n = 4 servers. Two different types of customers arrive according to Poisson processes. Type-one customers arrive with rate λ1 = 1 customer per time unit. If all servers are busy, then type-one customers wait in an infinite queue until they are served. (These customers are preferential customers, for example hand-over calls in wireless systems.) Type-two customers arrive with rate λ2 = 2 customers per time unit. If all servers are busy, then type-two customers are blocked (lost calls cleared). (These customers are ordinary customers, for example new calls in wireless systems.) All customers have the same mean service time μ⁻¹ = 1 [time unit].

1. Find the offered traffic A1 for type one, A2 for type two, and the total offered traffic. Which restrictions should Ai (i = 1, 2) fulfil to ensure statistical equilibrium?
2. Construct a one-dimensional state transition diagram, where the state of the system is defined as the total number of customers in the system (either being served or waiting).
3. Assume statistical equilibrium and find the state probabilities. Find the probability that a type-one customer is delayed, and the probability that a type-two customer is blocked.
4. Find the mean queue length, the mean waiting time for all customers of type one, and the mean waiting time for delayed customers of type one.
5. Assume the queueing discipline is FCFS. Write down the waiting time distribution for delayed type-one customers (use the analogy with the state transition diagram of an Erlang-C system when all servers are busy).



Solution to Exercise 9.25    (Exam 2006)

Question 1:

By definition the offered traffic is the average number of calls per mean service time:

   A1 = λ1/μ = 1 [erlang],
   A2 = λ2/μ = 2 [erlang],
   At = A1 + A2 = 3 [erlang].

We have the following restrictions on the offered traffic:

   0 ≤ A1 < 4 ,    0 ≤ A2 < ∞ .

A1 has to be less than the number of channels, as this traffic waits until it is served. A2 may take any value, as it is lost when all channels are busy. This is also seen from the following state transition diagram.

Question 2:

The state transition diagram becomes a birth-death process with states 0, 1, 2, ...: the arrival rate is λ1 + λ2 = 3 in states 0-3 and λ1 = 1 in states 4, 5, ...; the departure rate in state i is min(i, 4)·μ, i.e. 1, 2, 3, 4, 4, ...
Question 3:

The relative state probabilities q(i) = p(i)/p(0), respectively the absolute state probabilities p(i), become as follows:

   q(0) = 1             p(0) = 2/35
   q(1) = 3             p(1) = 6/35
   q(2) = 9/2           p(2) = 9/35
   q(3) = 9/2           p(3) = 9/35
   q(4) = 27/(2·4)      p(4) = 27/140
   q(5) = 27/(2·4²)     p(5) = 27/560
   q(6) = 27/(2·4³)     p(6) = 27/2240
   q(7) = 27/(2·4⁴)     p(7) = 27/8960
   q(8) = 27/(2·4⁵)     p(8) = 27/35840
   q(9) = 27/(2·4⁶)     p(9) = 27/143360
   ...                  ...
   Total = 35/2         Total = 1

We obtain the total sum of the q(i) as follows. The tail of the distribution q(i) is a geometric series; summing the terms from q(4) we get (cf. the derivations leading to (9.28)):

   Σ_{i=4}^{∞} q(i) = (27/8)·(1 + 1/4 + (1/4)² + ...) = (27/8)·1/(1 − 1/4) = 9/2 .

When all channels are busy, calls of the first traffic stream are delayed, and calls of the second stream are lost:

   p{delay, type 1} = p{block, type 2} = Σ_{i=4}^{∞} p(i) = (9/2)/(35/2) = 9/35 .

Question 4:

The mean queue length becomes:

   L = Σ_{i=5}^{∞} (i−4)·p(i) = 1·p(5) + 2·p(6) + 3·p(7) + ... = 48/560 = 3/35 [erlang].

This may also be obtained as follows. The probability of a queue length greater than zero is:

   p{L > 0} = Σ_{i=5}^{∞} p(i) = 9/140 .

Given that we have a positive queue, the number of waiting customers follows a geometric distribution (Table 3.1) starting in class one with p = 1 − 1/4 = 3/4, so its mean value is 4/3. This can also be obtained using (9.13) with n = 4 and A = 1 (the traffic offered when we have a queue). The mean queue length thus becomes:

   L = (4/3)·(9/140) = 3/35 .

The arrival rate of type-one customers is λ1 = 1. Using Little's theorem we find the mean waiting time for all type-one customers:

   W = L/λ1 = 3/35 [time units].

The probability of delay D1 for type-one customers was obtained in Question 3, and we find the mean waiting time for delayed customers:

   w = W/D1 = (3/35)/(9/35) = 1/3 [time units].
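The results of Questions 3 and 4 can be reproduced numerically (a sketch; relative state probabilities with the geometric tail truncated where it is negligible):

```python
lam1, lam2, mu, n = 1.0, 2.0, 1.0, 4
q = [1.0]                                     # relative state probabilities
for i in range(1, 60):
    lam = lam1 + lam2 if i <= n else lam1     # type-2 calls are lost in states >= n
    q.append(q[-1] * lam / (mu * min(i, n)))
total = sum(q)
D1 = sum(q[n:]) / total                       # delay (type 1) = blocking (type 2)
L = sum((i - n) * q[i] for i in range(n + 1, 60)) / total
print(round(D1, 6), round(L, 6))              # 0.257143 0.085714  (= 9/35, 3/35)
```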

Question 5:

Comparing with the state transition diagram of Fig. 9.1 and the waiting time distributions (9.28), we conclude that the waiting time distribution for delayed customers is exponential with mean value 1/(nμ − λ1) = 1/3 (in agreement with the above result):

   F(t) = 1 − e^{−3t} ,   t ≥ 0.

The waiting time distribution for all type-one customers becomes:

   F(t) = 1 − (9/35)·e^{−3t} ,   t ≥ 0.

Updated 2010-04-21



Exercise 9.26    (Exam 2007)

Palm's call center model

We consider a queueing system with full accessibility, n servers, and an unlimited number of queueing positions. The system is offered PCT-I traffic with arrival intensity λ [calls/time unit] and mean service time μ⁻¹ [time units]. A customer waiting in the queue has limited patience, and leaves (reneges from) the queue with a constant rate r [time unit⁻¹]. Thus a customer gives up waiting after an exponentially distributed time interval with mean value 1/r [time units], if the waiting time becomes longer than this time interval. Let the state of the system be defined as the total number of customers which are being served or are waiting.

1. Construct the one-dimensional state transition diagram of the system.
2. Find the state probabilities, expressing all states by state p(0), under the assumption of statistical equilibrium.
3. Which classical traffic models correspond to: (a) r = 0, (b) r = μ, (c) r = ∞? Which restrictions must λ and μ fulfil in each of the three cases for the system to be in statistical equilibrium?
4. Show that the probability that a random call attempt experiences a positive waiting time is given by:

   D = p(0)·(A^n/n!)·(1 + Q) = [(A^n/n!)·(1 + Q)] / [Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!)·(1 + Q)] ,

   where

   Q = Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j·r/μ)    and    A = λ/μ .

5. Find, expressed by the state probabilities, the average queue length L at a random point of time.
6. Find, expressed by L, the proportion of call attempts which become impatient and leave the queue.

The following question was not included at the exam:

7. Find, expressed by L, the average waiting time for all customers. Find the average waiting time for waiting customers (who are either served or leave the queue).



Solution to Exercise 9.26    (Exam 2007)

Question 1:

The state transition diagram of the system becomes a birth-death process: the arrival rate is λ in every state; the departure rate is i·μ in state i (0 ≤ i ≤ n) and nμ + i·r in state n+i (n busy servers plus i impatient waiting customers), i.e. the death rates are μ, 2μ, ..., nμ, nμ + r, nμ + 2r, ..., nμ + i·r, ...

Question 2:

Under the assumption of statistical equilibrium, the simplest way to obtain the state probabilities is from cut equations:

   Cut 0|1:             λ·p(0) = μ·p(1)
   Cut 1|2:             λ·p(1) = 2μ·p(2)
   ...
   Cut (i−1)|i:         λ·p(i−1) = i·μ·p(i)
   ...
   Cut n|(n+1):         λ·p(n) = (nμ + r)·p(n+1)
   ...
   Cut (n+i−1)|(n+i):   λ·p(n+i−1) = (nμ + i·r)·p(n+i)

These equations are supplemented with the normalization condition:

   Σ_{i=0}^{∞} p(i) = 1 .

As we have the offered traffic A = λ/μ, we find:

   p(i)   = (A^i/i!)·p(0) ,                          0 ≤ i ≤ n,
   p(n+i) = A^i / Π_{j=1}^{i} (n + j·r/μ) · p(n) ,   0 < i.

The above, with a remark on normalization, is the answer. By normalization we find p(0):

   1 = p(0)·[1 + A + A²/2 + ... + A^{n−1}/(n−1)!] + p(n)·[1 + Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j·r/μ)]

   1 = p(0)·[Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!)·(1 + Q)]

Then (cf. Question 4):

   p(0) = [Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!)·(1 + Q)]^{−1} ,

where

   Q = Σ_{i=1}^{∞} A^i / Π_{j=1}^{i} (n + j·r/μ) .

Question 3:

(a) r = 0: The customers have infinite patience and never give up waiting. This corresponds to Erlang's classical waiting time system (Sec. 9.1).

(b) r = μ: This corresponds to the Poisson distribution (Sec. 4.2), as the waiting positions serve customers with the same rate as the servers. The state probabilities thus correspond to the state probabilities of M/G/∞.

(c) r = ∞: The customers give up waiting immediately, and this corresponds to the classical Erlang-B loss system (truncated Poisson distribution, Sec. 4.3).

Statistical equilibrium is only attained if the departure rate is bigger than the arrival rate for all states above some state n+i:

   nμ + i·r > λ .

In cases (b) and (c), where r > 0, this is fulfilled for all i > k for some constant k, and we thus always attain statistical equilibrium. For r = 0 we only attain statistical equilibrium when nμ > λ, i.e. A < n, which is the condition for statistical equilibrium in Erlang's waiting time system. These conditions correspond to the requirement that Q must be finite.

Question 4:

The probability D of experiencing a positive waiting time is equal to the probability of arriving in a state n+ν, ν ≥ 0:

   D = Σ_{ν=0}^{∞} p(n+ν) .

When the arrival process is a Poisson process, time averages are equal to call averages (PASTA property). Using Question 2:

   D = p(0)·(A^n/n!)·(1 + Q) = [(A^n/n!)·(1 + Q)] / [Σ_{ν=0}^{n−1} A^ν/ν! + (A^n/n!)·(1 + Q)] .
Question 5:

The average queue length at a random point of time is equal to the traffic carried by the queueing positions:

   L = Σ_{i=0}^{n−1} 0·p(i) + Σ_{i=0}^{∞} i·p(n+i) = Σ_{i=n}^{∞} (i−n)·p(i) .

Question 6:

Customers leave the queue with drop-out rate r per waiting customer. The average number of customers which drop out of the queue per time unit is therefore L·r. The proportion of all customers dropping out of the queue becomes:

   D2 = L·r/λ .

By considering all possible states we get directly:

   D2 = [Σ_{x=n+1}^{∞} (x−n)·r·p(x)] / [λ·Σ_{x=0}^{∞} p(x)] = r·L/λ ,    q.e.d.

Question 7: (added after the exam)

According to Little's law (Sec. 3.3) we have L = λ·W for all customers, so the mean waiting time W for all customers is:

   W = L/λ .

The mean waiting time w for customers who experience a positive waiting time becomes:

   w = W/D = L/(λ·D) ,

where D was obtained in Question 4. L, and thus W, w, D2, etc., can be expressed by Q in the same way as D. By summing the cut equations one can show that:

   L = D·(μ/r)·(A − n·Q/(1+Q)) .

The above model is widely applied for call center planning and was first dealt with by Conny Palm in 1937:

Palm, C. (1937): Några undersökningar över väntetider vid telefonanläggningar. Tekniska Meddelanden från Kungl. Telegrafstyrelsen, 1937, No. 7-9, pp. 109-127.

Palm, C. (1937): Etude des délais d'attente (Some investigations into waiting times in telephone plants). Ericsson Technics, 1937, No. 2, pp. 39-56.

In the first edition of R.B. Cooper's book Introduction to Queueing Theory (New York 1972, 277 pp.) the model is mentioned on p. 100, exercise 18, as R.I. Wilkinson's j-factor, and he refers to an unpublished work from 1937. No doubt Wilkinson adopted the idea from Palm.

2008-04-14



Exercise 10.19    (Exam 1997)

LEAKY BUCKET: M/D/1/2 QUEUEING SYSTEM

Background: Leaky Bucket is a mechanism for controlling the cell (packet) arrival process of a connection in an ATM system. The mechanism corresponds to a queueing system with constant service time (cell length = 53 bytes) and a limited buffer. If the arrival process is a Poisson process, then we have an M/D/1/k system. The size of the leak corresponds to the average arrival rate accepted in the long run, whereas the size of the bucket (buffer) denotes the excess allowed during a short time interval. When implemented in an ATM system the mechanism operates as a virtual queueing system, where a cell is either accepted immediately or rejected. A counter indicates the value of the load function. A contract between the operator (network) and the user (connection) agrees upon the size of the leak and the bucket, and based on this information the network is able to guarantee a certain quality of service.

Exercise: We first consider the queueing system M/D/1, which has a Poisson arrival process with intensity λ = 0.6931 calls per time unit, constant service time (which we choose as the time unit), and one server. The number of queueing positions is unlimited, and we assume that the system is in statistical equilibrium.

1. Find the first state probabilities p(0), p(1) and p(2) (notice that e^{0.6931} = 2).

We now assume that there is only one queueing position (M/D/1/2).

2. Find, from the state probabilities in Question 1, by applying Keilson's formulae in Sec. 10.3.4, the state probabilities p2(0), p2(1) and p2(2) of the finite system.
3. What is the probability that a call (a) is served immediately, (b) is delayed before service, (c) is rejected?
4. Find, by using Little's theorem, the mean waiting time for customers which experience a positive waiting time.
5. What is the probability that a busy period (a period where the server is busy) has a duration of one time unit?
6. Find the probability that a busy period has a duration of i time units.

Technical University of Denmark, DTU Photonics, Networks group

Teletraffic Engineering & Network Planning, Course 34 340

Solution to Exercise 10.19

(Exam 1997)

The constant service time is chosen as time unit (s = h = 1) and λ = 0.6931. We then find A = λ s = 0.6931.

Question 1: For an M/D/1 system we obtain the state probabilities p(0), p(1) and p(2) from the formulæ in Sec. 10.4.2:

p(0) = 1 − A = 0.3069 ,
p(1) = (1 − A)(e^A − 1) = 0.3069 ,
p(2) = (1 − A)(e^2A − e^A (1 + A)) = 0.1884 .

Question 2: We now consider an M/D/1/2 system, and we apply the formulæ in Sec. 10.3.4. First we find Q2 (10.11):

Q2 = sum_{j ≥ 2} p(j) = 1 − p(0) − p(1) = 0.3862 .

Then the state probabilities of the truncated system p2(0), p2(1) and p2(2) are obtained by rescaling as given in (10.9) and (10.10):

p2(0) = p(0)/(1 − A Q2) = 0.3069/(1 − 0.6931 · 0.3862) = 0.4191 ,
p2(1) = p(1)/(1 − A Q2) = 0.3069/(1 − 0.6931 · 0.3862) = 0.4191 ,
p2(2) = (1 − A) Q2/(1 − A Q2) = (0.3069 · 0.3862)/(1 − 0.6931 · 0.3862) = 0.1618 .

These state probabilities of course add to one. We may also apply the solution given in Sec. 10.4.8. Using Fry's equations of state for state zero we get:

p2(0) = {p2(0) + p2(1)} p(0, h) = {p2(0) + p2(1)} e^−0.6931 ,
p2(0) = {p2(0) + p2(1)}/2 ,
p2(1) = p2(0) .

From (10.37) we get:

A = 1 − p2(0) + A p2(2) ,
0.6931 = 1 − p2(0) + 0.6931 p2(2) ,
p2(2) = 1 − 1/0.6931 + p2(0)/0.6931 .

Using the normalisation restriction (10.36) we find p2(0):

p2(0) + p2(0) + 1 − 1/0.6931 + p2(0)/0.6931 = 1 ,

p2(0) = 0.4191 , and thus the same results as above.

Question 3: The probabilities questioned become:

1. p{immediate service} = p2(0) = 0.4191 ,
2. p{waiting before service} = p2(1) = 0.4191 ,
3. p{blocking} = p2(2) = 0.1618 .

Question 4: Customers arriving when the system is in state 0 are served immediately, and customers arriving when the system is in state 2 are rejected. Thus only customers arriving in state 1 experience a positive waiting time. The arrival intensity of these customers is λx = λ p2(1). The average queue length is given by:

L = 0 · p2(0) + 0 · p2(1) + 1 · p2(2) = p2(2) .

We thus find:

w = L/λx = 0.1618/(0.6931 · 0.4191) = 0.5570 .
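These computations are easy to verify numerically. The following Python sketch (our own illustration; variable names are ours, not from the book) recomputes the M/D/1 state probabilities, the truncation to M/D/1/2, and the mean waiting time of delayed customers:

```python
import math

lam = math.log(2.0)          # arrival rate, chosen so that exp(-lam) = 1/2
A = lam * 1.0                # offered traffic (service time s = 1)

# M/D/1 state probabilities (Sec. 10.4.2)
p0 = 1 - A
p1 = (1 - A) * (math.exp(A) - 1)
p2 = (1 - A) * (math.exp(2 * A) - math.exp(A) * (1 + A))

# Truncation to M/D/1/2 (Sec. 10.3.4): Q2 = sum of p(j) for j >= 2
Q2 = 1 - p0 - p1
p2_0 = p0 / (1 - A * Q2)
p2_1 = p1 / (1 - A * Q2)
p2_2 = (1 - A) * Q2 / (1 - A * Q2)

# Mean waiting time of delayed customers via Little's theorem:
# mean queue length L = p2(2); arrival rate of delayed customers = lam * p2(1)
L = p2_2
w = L / (lam * p2_1)
```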

Question 5: The probability that a busy period has the duration one (and only one) time unit is:

p{busy period = one time unit} = p{no arrivals during one time unit}
= 1 − F(1) = 1 − (1 − e^−0.6931) = 0.5 .

This is the probability of zero events in a Poisson distribution with mean value λ · 1 = 0.6931. This result is used in the following question.

Question 6: We notice that there is only one queueing position. Therefore, at least one customer must arrive during each service time (only the first is accepted) to maintain a busy period. The number of customers arriving during a service time is independent of the number of customers arriving during other service times. Immediately after the start of a new service time, the queueing position is idle. We define pa and pb as:

pa = p{during the first i − 1 periods at least one customer arrives per period}
= (1 − p{busy period = 1 time unit})^(i−1) = 0.5^(i−1) ,

pb = p{no arrivals during the i'th period}
= p{busy period = 1 time unit} = 0.5 .

The probability that a busy period has the duration i time units then becomes:

p{busy period = i time units} = pa · pb = 0.5^(i−1) · 0.5 = 0.5^i .
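As an independent check one may simulate the finite system directly. The seeded Monte-Carlo sketch below (our own, not part of the exam solution) estimates the blocking probability, which by PASTA should approach p2(2) ≈ 0.1618, and the fraction of busy periods lasting exactly one time unit, which should approach 0.5:

```python
import random

# Monte-Carlo model of the M/D/1/2 leaky bucket: Poisson arrivals,
# constant service time s = 1, one server and one waiting position.
random.seed(2024)
lam = 0.6931
N = 200_000

in_system = []        # scheduled departure times of the customers present (at most 2)
t = 0.0
blocked = 0
served_in_period = 0  # customers served in the current busy period
busy_lengths = []     # length of each completed busy period (= customers served)

for _ in range(N):
    t += random.expovariate(lam)                 # next Poisson arrival
    in_system = [d for d in in_system if d > t]  # drop customers that have departed
    if not in_system and served_in_period:
        busy_lengths.append(served_in_period)    # the system emptied: period over
        served_in_period = 0
    if len(in_system) < 2:
        start = max([t] + in_system)             # service starts when the server frees
        in_system.append(start + 1.0)
        served_in_period += 1
    else:
        blocked += 1                             # bucket full: cell rejected

B = blocked / N                                  # estimate of p2(2)
g1 = busy_lengths.count(1) / len(busy_lengths)   # estimate of p{busy period = 1}
```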
Updated: 2007-04-18


Exercise 10.22

(Exam 2004)

Airport priority queueing system

Let us consider an airport where the server is a single runway with Poisson arrival processes. The traffic during one morning rush hour consists of arriving aircraft and departing aircraft, both using the runway as follows:

a. Landing: 10 aircraft arrive per hour. The mean service time is m1,1 = 2 [minutes] and the second moment of the service time is m2,1 = 6 [minutes^2].

b. Starting: 20 aircraft depart per hour. The mean service time is m1,2 = 1.5 [minutes] and the second moment of the service time is m2,2 = 3 [minutes^2].

1. Find the offered traffic for each type, and the total offered traffic.

2. Find the total arrival rate and the mean service time for all aircraft and use this to control the total offered traffic.

3. Find the mean waiting time for an arbitrary aircraft when there is no priority.

4. Find the mean waiting time for both types when landing aircraft have non-preemptive priority over starting aircraft. Show that the conservation law is fulfilled for this system.


Solution to Exercise 10.22

(exam 2004)

Priority queueing systems are dealt with in Sec. 10.6.

Question 1: We have to choose a common time unit, and in the following we choose minutes.

Landing: The arrival rate is λ1 = 1/6 aircraft per minute, and the mean service time is s1 = 2 minutes. So the offered traffic becomes:

A1 = λ1 s1 = 1/3 [erlang]

Starting: The arrival rate is λ2 = 1/3 aircraft per minute, and the mean service time is s2 = 1.5 minutes. So the offered traffic becomes:

A2 = λ2 s2 = 1/2 [erlang]

The total offered traffic becomes:

A = A1 + A2 = 5/6 [erlang]

We notice that the total offered traffic is less than one erlang, so the system is in statistical equilibrium.

Question 2: The total arrival rate is obtained by adding the two arrival rates (10.54):

λ = λ1 + λ2 = 1/6 + 1/3 = 1/2

The total mean service time is obtained by weighting the mean values with the relative number of calls of each type (10.55). Considering one minute we find:

s = (λ1/λ) s1 + (λ2/λ) s2 = (1/3) · 2 + (2/3) · 1.5 = 5/3 [minutes]

From the total process we find the total offered traffic:

A = λ s = (1/2) · (5/3) = 5/6 [erlang] q.e.d.

Question 3: To find the mean waiting time for all customers in a single server system we use Pollaczek–Khintchine's formula (10.3) or (10.2). We first have to find the second moment m2 or the form factor of the total traffic process. The second moment is obtained by weighting as for the mean value (10.56) (random variables in parallel). We find:

m2 = (λ1/λ) m2,1 + (λ2/λ) m2,2 = (1/3) · 6 + (2/3) · 3 = 4 [minutes^2]

The parameter V = V1,2 becomes:

V = λ m2/2 = (1/2) · 4/2 = 1

We could of course also obtain V by summation over all traffic classes (10.59):

V = V1 + V2 = λ1 m2,1/2 + λ2 m2,2/2 = (1/6) · 6/2 + (1/3) · 3/2 = 1

Pollaczek–Khintchine's formula (10.3) yields:

W = V/(1 − A) = 1/(1 − 5/6) = 6 [minutes]

This is the mean waiting time for all aircraft. If we only consider delayed aircraft, then the mean waiting time is:

w = W/A = 7.2 [minutes]

We get of course the same if we use (10.2). The form factor becomes:

ε = m2/s^2 = 4/(5/3)^2 = 36/25

Question 4: We now consider a non-preemptive queueing system where landing aircraft have priority over starting aircraft. The formulæ for this are given in Sec. 10.6.3. For the highest priority class the mean waiting time becomes (10.66):

W1 = V/(1 − A1) = 1/(1 − 1/3) = 3/2 [minutes]

For the lowest priority we find (10.69):

W2 = V/((1 − A1)(1 − (A1 + A2))) = W1/(1 − (A1 + A2)) = 9 [minutes]

The conservation law (10.63) for this system relates the mean waiting time W without priority to the mean waiting times W1 and W2 with non-preemptive priority discipline:

A W = A1 W1 + A2 W2 ,
(5/6) · 6 = (1/3) · (3/2) + (1/2) · 9 ,
5 = 1/2 + 9/2 q.e.d.

Thus the conservation law is fulfilled.

Updated: 2009-05-04
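The complete solution can be condensed into a few lines of Python; the sketch below (ours, with self-explanatory variable names) recomputes V, the mean waiting times with and without priority, and checks the conservation law numerically:

```python
# Arrival rates [1/min], mean service times [min] and second moments [min^2]
lam1, s1, m21 = 1/6, 2.0, 6.0    # landing aircraft
lam2, s2, m22 = 1/3, 1.5, 3.0    # starting aircraft

A1, A2 = lam1 * s1, lam2 * s2
A = A1 + A2                      # total offered traffic

# Mean residual service time at a random point of time (10.59)
V = lam1 * m21 / 2 + lam2 * m22 / 2

W = V / (1 - A)                  # Pollaczek-Khintchine, no priority
W1 = V / (1 - A1)                # non-preemptive, high priority (10.66)
W2 = W1 / (1 - (A1 + A2))        # non-preemptive, low priority (10.69)

conservation = abs(A * W - (A1 * W1 + A2 * W2))   # should be ~0
```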


Exercise 10.24

(Exam 2007)

IPP/M/1 queueing system

We consider a GI/M/1 single server queueing system with an Interrupted Poisson arrival process (IPP) and an infinite number of waiting positions. The exponential service times have mean value μ^−1 [time units]. The IPP has On→Off rate ω [time units^−1] and Off→On rate γ [time units^−1]. The arrival rate of the Poisson process during On-periods is λ calls per time unit. Note that almost all questions are independent of previous questions.

1. Find the offered traffic.

The states of the system are defined as (i, j), where i is the number of customers in the system (being served or waiting) (i = 0, 1, . . .), and j is the state of the arrival process (j = a (On), or j = b (Off)). The structure of the state transition diagram is shown in the following figure:
[Figure: structure of the state transition diagram. Upper row: states (0, a), (1, a), (2, a), (3, a), . . . (arrival process On); lower row: states (0, b), (1, b), (2, b), (3, b), . . . (arrival process Off).]

2. Fill in all transition rates in the state transition diagram.

We assume that we know the state probabilities p(i, j) (time average) of the above system. The probability that a customer immediately before arriving observes the system in state i is denoted by π(i) (call average). Notice that a customer always arrives in states (i, a).

3. Find π(i) expressed by p(i, j).

We now let π(0) = √2 − 1 and μ = 1 [time unit^−1] (and λ = ω = γ = 1 [time unit^−1]).


4. Find the state probability distribution (numerical values) of π(i), applying the theory for GI/M/1.

5. Find the mean waiting time W for all customers, and the mean waiting time w for delayed customers.

The following question was not included at the exam:

6. Find the waiting time distribution of a delayed call.


Solution to exercise 10.24

(exam 2007)

Question 1: The offered traffic is λ/μ when the IPP process is On, and zero when it is Off. We find:

A = (λ/μ) · p(On)/(p(On) + p(Off)) = (λ/μ) · γ/(γ + ω) ,

since the mean On-period is 1/ω and the mean Off-period is 1/γ.

Question 2: Remember there is only one server.

[Figure: completed state transition diagram. Upper row (On): state (i, a) has arrival rate λ to (i+1, a) and service rate μ to (i−1, a). Lower row (Off): state (i, b) has no arrivals and service rate μ to (i−1, b). Vertical transitions: rate ω from (i, a) to (i, b), and rate γ from (i, b) to (i, a).]

Question 3: Per time unit we have λ p(i, a), i = 0, 1, 2, . . . , call attempts in state (i, a). There are no call attempts in states (i, b), where the IPP process is Off. Thus the proportion of call attempts arriving in state i becomes:

π(i) = λ p(i, a) / sum_{j=0}^∞ λ p(j, a) = ((γ + ω)/γ) · p(i, a) ,

as the denominator sum_{j} p(j, a) is the probability that the IPP process is On.

Question 4: The system considered is a GI/M/1 system, and in Section 10.5.2 (10.43) it is given that the state probabilities just before an arrival are geometrically distributed:

π(i) = (1 − α) α^i , i = 0, 1, 2, . . .

We know that π(0) = 1 − α = √2 − 1. Thus α = 2 − √2 = 0.5858, and:

π(i) = (√2 − 1)(2 − √2)^i , i = 0, 1, 2, . . .

From this we may find the state probabilities p(i, a) according to Question 3.

Question 5: From (10.51), respectively (10.53), we have:

W = α/(μ (1 − α)) = (2 − √2)/(√2 − 1) = √2 = 1.4142 ,

w = W · 1/D = √2/(2 − √2) = √2 + 1 = 2.4142 ,

as D = 1 − π(0) = 2 − √2.
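Given π(0), the remaining GI/M/1 quantities follow mechanically; the short sketch below (our own cross-check) recomputes α, W and w:

```python
import math

mu = 1.0
pi0 = math.sqrt(2) - 1          # given: probability that an arrival finds the system empty
alpha = 1 - pi0                 # geometric parameter: pi(i) = (1 - alpha) * alpha**i

W = alpha / (mu * (1 - alpha))  # mean waiting time, all customers (10.51)
D = 1 - pi0                     # probability of delay
w = W / D                       # mean waiting time, delayed customers (10.53)

# The geometric distribution pi(i) sums to one
total = sum((1 - alpha) * alpha**i for i in range(200))
```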

Question 6:


As mentioned in Section 10.5.4 the waiting time distribution will be exponentially distributed. The waiting time distribution is given by the following Cox-distribution, which is equivalent to an exponential distribution with intensity μ(1 − α) (cf. Fig. 2.12), in agreement with the mean value given above.
[Figure: Cox diagram of exponential phases with rate μ, each followed by a further phase with probability α; this is equivalent to a single exponential phase with rate μ(1 − α).]
Updated: 2009-05-06


Exercise 10.25

(Exam 2008)

M/E2/1 queueing system

We consider an M/E2/1 single server queueing system with Poisson arrival process and Erlang-2 distributed service times. There is an infinite number of waiting positions. The arrival rate is λ = 0.4 calls per time unit. The service times are Erlang-2 distributed with the same rate μ = 1 [time units^−1] in both phases.

1. Find the offered traffic.

2. Find by using Pollaczek–Khintchine's formula the mean waiting time W for all calls. Find also the mean queue length L.

3. Set up a state transition diagram for the above system, where the states of the system are given below. Index a and b specify whether the call being served is in phase a (first phase) or phase b (second phase), respectively.
[Figure: structure of the state transition diagram with states 0, 1a, 2a, 3a, 4a, . . . (call being served in first phase) and 1b, 2b, 3b, 4b, . . . (call being served in second phase).]

We now assume that arriving calls have different priorities (two classes): high priority calls arrive with rate λ1 = 0.1 calls per time unit, and low priority calls arrive with rate λ2 = 0.3 calls per time unit. The total arrival rate (λ = λ1 + λ2) and the service time distribution are the same as before.

4. Assume non-preemptive priority and find the mean waiting time for each class.

5. Assume preemptive-resume priority and find the mean waiting time for each class.


Solution to exercise 10.25

(exam 2008)

Question 1: The offered traffic is given by A = λ s, where s is the mean service time. The arrival rate is λ = 0.4 calls per time unit, and the mean service time is s = 2 time units, as each of the two phases of the Erlang-2 distribution is exponentially distributed with mean value one time unit. Thus we get:

A = 0.4 · 2 = 0.8 [erlang]

Question 2: The form factor of an Erlang-2 distribution is ε = 3/2 (2.56). Using (10.2) we get:

W = ε A s/(2 (1 − A)) = (3/2) · 0.8 · 2/(2 · (1 − 0.8)) = 6 [time units]

L = λ W = 0.4 · 6 = 12/5 = 2.4

We may also use (10.3): W = V/(1 − A), where V = λ m2/2. Here m2 is the second moment of the Erlang-2 distribution (2.52) with parameter μ = 1 in each phase:

m2 = k (k + 1)/μ^2 = 2 · 3/1 = 6 .

Thus we get V = 0.4 · 6/2 = 1.2 and W = 1.2/0.2 = 6, the same result as above.
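The two forms of Pollaczek–Khintchine's formula can be compared numerically; the sketch below (ours) carries out exactly the computation of this question:

```python
lam = 0.4            # arrival rate
mu = 1.0             # rate of each of the two exponential phases
k = 2                # Erlang-2
s = k / mu           # mean service time: 2
A = lam * s          # offered traffic: 0.8

eps = (k + 1) / k                     # form factor of Erlang-k: 3/2 for k = 2
W_form = eps * A * s / (2 * (1 - A))  # Pollaczek-Khintchine, form-factor version (10.2)

m2 = k * (k + 1) / mu**2              # second moment (2.52): 6
V = lam * m2 / 2                      # mean residual service time: 1.2
W_V = V / (1 - A)                     # Pollaczek-Khintchine, V-version (10.3)

L = lam * W_V                         # mean queue length via Little's theorem
```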

Question 3:
[Figure: state transition diagram. Arrivals (rate 0.4) take state 0 to 1a, state ia to (i+1)a, and state ib to (i+1)b. Phase completions (rate 1) take ia to ib, and service completions (rate 1) take ib to (i−1)a, with 1b going to 0.]
Question 4: Non-preemptive queueing discipline.

We have to find the mean remaining service time at a random point of time (10.59):

V = V1,2 = V1 + V2 = sum_{i} λi m2,i/2

The second moment of the Erlang-2 distribution was obtained above as m2 = 6. We have λ1 = 0.1 and λ2 = 0.3. Thus we get:

V1 = 0.1 · 6/2 = 0.3 ,
V2 = 0.3 · 6/2 = 0.9 ,
V = V1 + V2 = 1.2 = 6/5 .

We have A1 = λ1 s = 0.2 [erlang] and A2 = λ2 s = 0.6 [erlang]. Thus we get from (10.66), respectively (10.69):

W1 = V/(1 − A1) = (6/5)/(1 − 0.2) = 3/2 [time units]

W2 = W1/(1 − (A1 + A2)) = (3/2)/(1 − 0.8) = 15/2 [time units]

As a control we may use the conservation law (10.63) and the result from Question 2:

A W = A1 W1 + A2 W2 ,
0.8 · 6 = 0.2 · (3/2) + 0.6 · (15/2) ,
4.8 = 0.3 + 4.5 q.e.d.

Question 5: Preemptive-resume queueing discipline.

Above we have already obtained V1 and V. For the preemptive-resume system we have (10.77):

W1 = V1/(1 − A1) = 0.3/(1 − 0.2) = 3/8 [time units]

W2 = V1,2/((1 − A1)(1 − (A1 + A2))) + s2 · A1/(1 − A1)
= (6/5)/((1 − 0.2)(1 − 0.8)) + 2 · 0.2/(1 − 0.2)
= 8 [time units]

In this case we cannot use the conservation law as control, because for the preemptive-resume queueing discipline it is only valid for exponential service time distributions.
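The priority results of Questions 4 and 5 can be verified in the same way; the following sketch (ours) recomputes the four mean waiting times and checks the conservation law for the non-preemptive case:

```python
lam1, lam2 = 0.1, 0.3       # high and low priority arrival rates
s = 2.0                     # Erlang-2 mean service time (both classes)
m2 = 6.0                    # second moment of the Erlang-2 service time
A1, A2 = lam1 * s, lam2 * s

V1 = lam1 * m2 / 2          # 0.3
V2 = lam2 * m2 / 2          # 0.9
V = V1 + V2                 # 1.2

# Non-preemptive discipline (10.66)/(10.69)
W1_np = V / (1 - A1)
W2_np = W1_np / (1 - (A1 + A2))

# Preemptive-resume discipline (10.77)
W1_pr = V1 / (1 - A1)
W2_pr = V / ((1 - A1) * (1 - (A1 + A2))) + s * A1 / (1 - A1)

# Conservation law holds for the non-preemptive case: A*W = A1*W1 + A2*W2 = 4.8
check = A1 * W1_np + A2 * W2_np
```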
Updated: 2009-04-28


Exercise 10.26

(Exam 2009)

Priority queueing system

We consider a single server queueing system with two types of customers arriving according to Poisson processes.

Customers of type one have arrival rate λ1 = 0.2 [customers/time unit]. The service time is constant with mean value m1,1 = 1 [time unit].

Customers of type two have arrival rate λ2 = 0.2 [customers/time unit]. The service times are hyper-exponentially distributed with two branches (H2): 90% of the customers have mean service time m1,a = 1 [time unit], and 10% of the customers have mean service time m1,b = 21 [time units].

1. Show that the service time distribution of type two customers has mean value m1,2 = 3 [time units], second moment m2,2 = 90 [time units^2], and form factor ε = 10.

2. Find the offered traffic for each type of customer, and the total offered traffic.

3. Find by using Pollaczek–Khintchine's formula the mean waiting time W for all customers. Also find the mean waiting time w for delayed customers.

We now assume that type one customers have higher priority than type two customers.

4. Assume non-preemptive priority and find the mean waiting time for each type of customers.

5. Assume preemptive-resume priority and find the mean waiting time for each type of customers.


Solution to exercise 10.26

(exam 2009)

Question 1: Type two customers have a hyper-exponentially distributed service time, which is dealt with in Sec. 2.3.2.
[Figure: hyper-exponential (H2) service time distribution of type two customers: branch a is chosen with probability 9/10 and has mean m1,a = 1; branch b is chosen with probability 1/10 and has mean m1,b = 21.]
The mean value and second moment become:

m1 = 0.9 · 1 + 0.1 · 21 = 3 [time units]

m2 = 0.9 · 2/1^2 + 0.1 · 2/(1/21)^2 = 1.8 + 88.2 = 90 [time units^2]

ε = m2/m1^2 = 90/3^2 = 10

Question 2:

A1 = λ1 s1 = 0.2 · 1 = 0.2 [erlang]
A2 = λ2 s2 = 0.2 · 3 = 0.6 [erlang]
A = A1 + A2 = 0.2 + 0.6 = 0.8 [erlang]

Question 3: We may obtain the mean waiting time for all customers W from (10.2). Then we have to find the mean service time and form factor for all customers. Above we found this for type two customers only. Considering the following questions, it is easier to find W from (10.3). V in this version of Pollaczek–Khintchine's formula is for the total traffic process and is obtained from (10.59). We find:

V1,2 = V1 + V2 = λ1 m2,1/2 + λ2 m2,2/2 = 0.1 · (1 + 90) = 9.1 [time units]

W = V1,2/(1 − A) = 9.1/(1 − 0.8) = 45.5 [time units]

The mean waiting time for delayed customers then becomes w = W/A = 45.5/0.8 = 56.875 [time units].

Question 4: Non-preemptive queueing is dealt with in Sec. 10.6.3. We get by using (10.66) for class one and (10.68) for class two:

W1 = V1,2/(1 − A1) = 9.1/(1 − 0.2) = 11.375 [time units]

W2 = W1/(1 − A1 − A2) = 11.375/(1 − 0.2 − 0.6) = 56.875 [time units]

Question 5: The preemptive-resume queueing discipline is dealt with in Sec. 10.6.6, and we get by using (10.78) for class one and the general formula (10.77) for class two:

W1 = V1/(1 − A1) = 0.1 · 1^2/(1 − 0.2) = 0.125 [time units]

W2 = V1,2/((1 − A1)(1 − A1 − A2)) + s2 · A1/(1 − A1)
= 9.1/((1 − 0.2)(1 − 0.8)) + 3 · 0.2/(1 − 0.2)
= 57.625 [time units]
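All numbers of this solution can be reproduced with a few lines of Python; the sketch below (our own check) computes the H2 moments and the waiting times with and without priority:

```python
# Hyper-exponential (H2) service time of type-two customers
p_a, m_a = 0.9, 1.0
p_b, m_b = 0.1, 21.0

m1_2 = p_a * m_a + p_b * m_b                 # mean: 3
m2_2 = p_a * 2 * m_a**2 + p_b * 2 * m_b**2   # 2nd moment of a mixture of exponentials: 90
eps = m2_2 / m1_2**2                         # form factor: 10

lam1, s1, m2_1 = 0.2, 1.0, 1.0               # type one: constant service time (m2 = s^2)
lam2, s2 = 0.2, m1_2
A1, A2 = lam1 * s1, lam2 * s2
A = A1 + A2

V = lam1 * m2_1 / 2 + lam2 * m2_2 / 2        # mean residual service time: 9.1
W = V / (1 - A)                              # no priority: 45.5

W1_np = V / (1 - A1)                         # non-preemptive, class 1
W2_np = W1_np / (1 - A1 - A2)                # non-preemptive, class 2

V1 = lam1 * m2_1 / 2
W1_pr = V1 / (1 - A1)                        # preemptive-resume, class 1
W2_pr = V / ((1 - A1) * (1 - A1 - A2)) + s2 * A1 / (1 - A1)   # class 2
```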
Updated: 2010-04-21


Exercise 11.1

(Exam 1984)

M/H2/1 QUEUEING SYSTEM WITH PROCESSOR SHARING

Jobs arrive at a computer system according to a Poisson process with intensity λ. The service time is hyper-exponentially distributed with two phases, denoted by a and b, respectively:

F(t) = p (1 − e^(−λa t)) + (1 − p) (1 − e^(−λb t)) .
[Figure: phase diagram of the hyper-exponential distribution: branch a is chosen with probability p and has rate λa; branch b is chosen with probability 1 − p = q and has rate λb.]
1. Find the offered traffic A.

In the following we assume that A < 1. The computer system operates as a single-server system with processor sharing queueing discipline, i.e. if there are x jobs in the system, then a job in phase a is served with rate λa/x and a job in phase b is served with rate λb/x. The state of the system is defined as (i, j), where i is the number of jobs in phase a, and j is the number of jobs in phase b. The state transition diagram becomes two-dimensional with the structure shown in the following figure.

2. Find the missing intensities in connection with the states: (1,1), (1,2), (2,2) and (2,1).

3. Show by considering the above-mentioned four states that the state transition diagram is reversible.

An M/M/1 queueing system with the offered traffic A (A < 1) has the equilibrium state probabilities:

p(i) = p(0) A^i , i = 0, 1, 2, . . .


4. Show by expressing the state probabilities by state p(0, 0) that the above M/H2/1 processor sharing system has the same state probabilities as M/M/1 when we let:

p(i) = sum_{x=0}^{i} p(x, i − x) , i = 0, 1, 2, . . . ,

and we only consider i = 1 and 2.

[Figure: structure of the two-dimensional state transition diagram with states (i, j) = (0,0), (1,0), (0,1), . . . The arrival intensities λp (towards (i+1, j)) and λq (towards (i, j+1)) are shown, together with some of the departure intensities; the intensities around states (1,1), (1,2), (2,2) and (2,1) are missing (Question 2).]


Solution to Exercise 11.1

Question 1: The average service time s = m1 is (2.67):

s = p/λa + (1 − p)/λb .

The offered traffic then becomes:

A = λ (p/λa + (1 − p)/λb) .

Question 2: The departure rates are obtained as follows. We first calculate the service rate

[Figure: the state transition diagram with all departure intensities filled in; the full service rates are shown inside brackets.]

as if the capacity were infinite. This term is put inside brackets in the figure. For state (i, j) this is (i λa) to the left and (j λb) downwards.

Then we reduce the service rate by a factor equal to the number of jobs in the system: for state (i, j) both of the above service rates are divided by (i + j).

Note: The capacity of the server is constant, independent of the state. But the service rate (the intensity at which the customers leave the server) depends on the mix of customers. If most customers in the system have a short service time, then the service rate will be high, whereas if most customers have a long service time, then the service rate will be low.

Question 3: For the four states considered we have:

Flow clockwise: λ^2 p q · (2/4) λa · (2/3) λb ,
Flow counter-clockwise: λ^2 q p · (2/4) λb · (2/3) λa .

We notice that they are equal. Thus the process is reversible. The same is valid for the other squares, and we may express all states by state p(0, 0) (7.15).

Question 4: We find (q = 1 − p):

p(0, 1) = (λ (1 − p)/λb) p(0, 0) ,
p(1, 0) = (λ p/λa) p(0, 0) ,

and thus:

p(1) = p(0, 1) + p(1, 0) = p(0, 0) λ (p/λa + (1 − p)/λb) ,
p(1) = p(0) A q.e.d. ,

where p(0) = p(0, 0) and A is obtained in Question 1.

In a similar way we find:

p(0, 2) = (λ (1 − p)/λb)^2 p(0, 0) ,
p(1, 1) = 2 (λ p/λa)(λ (1 − p)/λb) p(0, 0) ,
p(2, 0) = (λ p/λa)^2 p(0, 0) .

p(2) = p(0, 2) + p(1, 1) + p(2, 0)
= p(0, 0) λ^2 { (p/λa)^2 + 2 (p/λa)((1 − p)/λb) + ((1 − p)/λb)^2 }
= p(0, 0) λ^2 (p/λa + (1 − p)/λb)^2 ,

p(2) = p(0) A^2 q.e.d.

Updated: 2008-04-21
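The reversibility argument can be checked numerically for the whole diagram, not only the four states considered. The sketch below (our own, with arbitrarily chosen parameters satisfying A < 1) verifies detailed balance for the product-form weights and confirms that the aggregated diagonal probabilities are geometric with ratio A:

```python
from math import comb, isclose

lam, p = 0.5, 0.3
la, lb = 1.0, 2.0                   # service rates lambda_a and lambda_b
q = 1 - p
A = lam * (p / la + q / lb)         # offered traffic, Question 1: 0.325 < 1

x, y = lam * p / la, lam * q / lb

def w(i, j):
    # unnormalised state probability p(i, j) / p(0, 0)
    return comb(i + j, i) * x**i * y**j

# Detailed balance over the arrival/departure transitions in phases a and b
for i in range(5):
    for j in range(5):
        n = i + j + 1
        assert isclose(w(i, j) * lam * p, w(i + 1, j) * (i + 1) / n * la)
        assert isclose(w(i, j) * lam * q, w(i, j + 1) * (j + 1) / n * lb)

# Aggregated probabilities: the sum over i + j = n equals A**n (times p(0, 0))
for n in range(6):
    assert isclose(sum(w(i, n - i) for i in range(n + 1)), A**n)
```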


Exercise 11.2

(Exam 2004)

Palm's machine-repair model and generalized processor sharing

We consider Palm's machine-repair model with four terminals and two servers in parallel. The thinking times are exponentially distributed with mean value γ^−1 = 2 time units, and the service times are exponentially distributed with mean value μ^−1 = 1 time unit. The state of the system is defined in the usual way as the number of terminals being served or waiting.

1. Find the traffic offered to the two servers.

2. Construct the state transition diagram, and find, assuming statistical equilibrium, the state probabilities p(i), i = 0, 1, . . . , 4.
[Figure: the machine-repair model: four terminals attached to a queueing system consisting of a queue and two servers.]

3. Find the average number of terminals which are: a. thinking, b. waiting, c. being served.

4. Find the traffic congestion C.

5. Find by applying Little's theorem to the queue and to both servers the response time, that is, the average waiting + service time.

We now assume that the service times are hyper-exponentially distributed as follows:

    F(t) = (1/10) · (1 − e^(−t/7)) + (9/10) · (1 − e^(−3t)) ,   t ≥ 0 .


The state of the system is now defined as (i, j), where i (i = 0, 1, . . . , 4) is the number of jobs being served in phase one, and j (j = 0, 1, . . . , 4) is the number of jobs being served in phase two, 0 ≤ i + j ≤ 4. We also assume that the two servers operate in processor-sharing mode when more than two terminals are in the queueing system. Thus the service rate in state (i, j) when i + j > 2 is:

    (2i/(i + j)) · μ1 + (2j/(i + j)) · μ2 ,

where the first term is the total service rate for the i jobs being served in phase one, and the second term is the total service rate for the j jobs being served in phase two. When two or fewer terminals are being served, each terminal has its own server.

6. Construct the two-dimensional state transition diagram.

7. Consider the state transition diagram:
   a. show it is reversible,
   b. does it have product form?

8. Show that the aggregated state probabilities p(i + j = x), x = 0, 1, . . . , 4, are the same as the state probabilities obtained in question 2.

Technical University of Denmark, DTU Photonics, Networks group
Teletraffic Engineering & Network Planning, Course 34 340

Solution to Exercise 11.2 (exam 2004)

This is Palm's machine-repair model with multiple repairmen, which is dealt with in Sec. 9.6.4.

Question 1:
The offered traffic is defined as the traffic carried when the capacity is unlimited. This will be the case when we have at least 4 servers. Thus we get the same offered traffic as in the Engset case (5.11):

    A = S · (γ/μ)/(1 + γ/μ) = 4 · (0.5/(1 + 0.5)) = 4/3 [erlang].

Question 2:
[State transition diagram: states i = 0, 1, . . . , 4 terminals in the queueing system; arrival rates (4 − i)·γ = 4/2, 3/2, 2/2, 1/2 and service rates min(i, 2)·μ = 1, 2, 2, 2.]
The relative state probabilities are obtained by using cut equations:

    q(0) = 1         p(0) = 16/87 = 0.1839
    q(1) = 2         p(1) = 32/87 = 0.3678
    q(2) = 3/2       p(2) = 24/87 = 0.2759
    q(3) = 3/4       p(3) = 12/87 = 0.1379
    q(4) = 3/16      p(4) =  3/87 = 0.0345
    Sum  = 87/16     Sum  =  1.0000
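The cut equations are easy to check numerically. A minimal sketch (assuming S = 4 terminals, thinking rate γ = 1/2 and n = 2 servers with rate μ = 1, as above), using exact rational arithmetic:

```python
# Sketch: cut-equation (birth-death) state probabilities for the
# machine-repair model above: S = 4 terminals, thinking rate gamma = 1/2,
# n = 2 servers, service rate mu = 1 (exact arithmetic).
from fractions import Fraction

S, n = 4, 2
gamma, mu = Fraction(1, 2), Fraction(1)

q = [Fraction(1)]                     # relative probability of state 0
for i in range(S):
    birth = (S - i) * gamma           # terminals still thinking in state i
    death = min(i + 1, n) * mu        # busy servers in state i + 1
    q.append(q[-1] * birth / death)   # cut equation between states i and i+1

p = [x / sum(q) for x in q]           # normalize
print(p)  # p(i) = 16/87, 32/87, 24/87, 12/87, 3/87 (printed in reduced form)
```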


We could also apply the theory of queueing networks to this model, but the above approach based on the state transition diagram is the one indicated in the question.

Question 3:
a. The average number of terminals thinking becomes:

    n_t = Σ_{i=0}^{4} (4 − i) · p(i)
        = 4·p(0) + 3·p(1) + 2·p(2) + 1·p(3) + 0·p(4)
        = 220/87 = 2.5287 .


b. The average number of terminals waiting becomes:

    n_w = Σ_{i=2}^{4} (i − 2) · p(i)
        = 0·p(2) + 1·p(3) + 2·p(4)
        = 18/87 = 0.2069 .


c. The average number of terminals being served is:

    n_s = Σ_{i=0}^{2} i · p(i) + 2 · Σ_{i=3}^{4} p(i)
        = 0·p(0) + 1·p(1) + 2·p(2) + 2·p(3) + 2·p(4)
        = 110/87 = 1.2644 .

As a control, the three numbers add up to S = 4.

Question 4:
The offered traffic obtained in Question 1 is A = 4/3, and the carried traffic is Y = n_s = 110/87. The traffic congestion then becomes:

    C = (A − Y)/A = (4/3 − 110/87)/(4/3) = 6/116 = 0.0517 .
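The three means and the congestion can be verified together; a sketch assuming the state probabilities found in Question 2:

```python
# Sketch: mean number of terminals thinking/waiting/in service, and the
# traffic congestion, from the state probabilities of Question 2.
from fractions import Fraction

p = [Fraction(k, 87) for k in (16, 32, 24, 12, 3)]

n_t = sum((4 - i) * p[i] for i in range(5))      # thinking
n_w = sum((i - 2) * p[i] for i in range(2, 5))   # waiting
n_s = sum(min(i, 2) * p[i] for i in range(5))    # being served

A = Fraction(4, 3)                               # offered traffic, Question 1
C = (A - n_s) / A                                # traffic congestion
print(n_t, n_w, n_s, C)  # 220/87, 18/87, 110/87 and C = 6/116 (reduced forms)
```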

[Figure: two-dimensional state transition diagram for the states (i, j), 0 ≤ i + j ≤ 4, where i is the number of jobs in the phase with mean value 7 and j the number of jobs in the phase with mean value 1/3.]

Question 5:
We find the following relation (the mean service time is one). As the same flow passes through the queue and the servers, Little's theorem gives:

    1/n_s = W/n_w = R/(n_w + n_s) ,

    W = n_w/n_s = (18/87)/(110/87) = 9/55 ,

    R = W + 1 = 64/55 .

We thus have:

    thinking time:     2
    waiting time:      9/55
    service time:      1
    circulation time:  174/55


The number of terminals in each stage (thinking, waiting, service) is proportional to the time spent in each stage. This is Little's theorem when we have the same flow in each stage. This is in agreement with Question 3.

Question 6:
We let state (i, j) correspond to i terminals being served in the phase with mean value 7 and j terminals being served in the phase with mean value 1/3. The state transition diagram is shown in the figure above. For example, in state (0, 0) the total arrival rate is 4/2. With probability 1/10 this is a terminal with mean holding time 7, and with probability 9/10 it is a terminal with mean holding time 1/3.

Question 7:
We show that the flow clockwise is equal to the flow counter-clockwise in each square. This also confirms that the state transition diagram is correct. The state transition diagram does not have the product-form property.

Question 8:
As the state transition diagram is reversible we may express all state probabilities by state p(0, 0). We find the following relative state probabilities when we let q(0, 0) = 160 000 (for convenience, to get integer values which are easy to work with):

    q(i, j)      j = 0      j = 1     j = 2    j = 3    j = 4
    i = 0      160 000     96 000    21 600    3 240      243
    i = 1      224 000    100 800    22 680    2 268
    i = 2      117 600     52 920     7 938
    i = 3       41 160     12 348
    i = 4        7 203

We find:

    q(0) = q(0, 0) = 160 000
    q(1) = q(1, 0) + q(0, 1) = 320 000
    q(2) = q(2, 0) + q(1, 1) + q(0, 2) = 240 000
    q(3) = q(3, 0) + q(2, 1) + q(1, 2) + q(0, 3) = 120 000
    q(4) = q(4, 0) + q(3, 1) + q(2, 2) + q(1, 3) + q(0, 4) = 30 000
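The aggregation can be checked by summing the anti-diagonals of the table; a sketch with the relative values q(i, j) entered as data:

```python
# Sketch: aggregating the two-dimensional relative state probabilities
# q(i, j) of the table above over i + j = x.
from fractions import Fraction

q = {  # i jobs in the mean-7 phase, j jobs in the mean-1/3 phase
    (0, 0): 160_000, (0, 1): 96_000, (0, 2): 21_600, (0, 3): 3_240, (0, 4): 243,
    (1, 0): 224_000, (1, 1): 100_800, (1, 2): 22_680, (1, 3): 2_268,
    (2, 0): 117_600, (2, 1): 52_920, (2, 2): 7_938,
    (3, 0): 41_160, (3, 1): 12_348,
    (4, 0): 7_203,
}

total = sum(q.values())  # 870 000
agg = [sum(v for (i, j), v in q.items() if i + j == x) for x in range(5)]
p = [Fraction(a, total) for a in agg]
print(p)  # 16/87, 32/87, 24/87, 12/87, 3/87: same as the exponential case
```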


The sum of the relative state probabilities is 870 000. After normalization we get:

    p(0) = 16/87 ,  p(1) = 32/87 ,  p(2) = 24/87 ,  p(3) = 12/87 ,  p(4) = 3/87 .

We see that the aggregated state probabilities for hyper-exponential service times are the same as for exponential service times in Question 2. This indicates that the machine-repair model with processor sharing is insensitive to the service time distribution. This is analogous to the fact that the two systems M/M/1 and M/G/1-PS also have the same state probabilities (Sec. 10.7).
Updated: 2009-05-05


Exercise 12.6 (exam 2001)

MACHINE-REPAIR MODEL AS A CYCLIC QUEUEING NETWORK

We consider a machine-repair model with 4 customers (sources, terminals). The customers have exponentially distributed thinking times with intensity μ1 = 0.5 customers per time unit (node 1). The customers are served by two successive single servers. A customer is first served at node 2, which is a single server with exponentially distributed service times with mean value 1/μ2 = 1 time unit. Then the customer continues to node 3, which is a single server with exponentially distributed service times with mean value 1/μ3 = 1/2 time unit. After finishing service at node 3 the customer returns to node 1 and starts a new thinking time.

This system is a single-chain cyclic queueing network with 3 nodes and 4 identical sources. Node 1 (the terminals) corresponds to an M/M/∞ queueing system, whereas both node 2 and node 3 are M/M/1 single-server systems. The customers circulate between the nodes in the cyclic sequence 1, 2, 3, 1, 2, . . . .

1. Let the relative load of node 3 be equal to one and find the relative loads of node 1 and node 2.

2. Find the relative state probabilities of each node considered in isolation.

3. Apply the convolution algorithm to find the absolute state probabilities of each node.

4. Find the average number of customers in each node.

5. Find the average sojourn (waiting + service) time for a customer in each node, and the average total cycle time.

6. Find the mean queue length observed by a customer at the three nodes if we increase the number of customers from 4 to 5.


Solution to Exercise 12.6 (exam 2001)

Question 1:

As all three nodes serve the same number of customers (λ per time unit), the relative loads are proportional to the mean service times. The relative load is the relative offered traffic, which in queueing systems is equal to the carried traffic. The offered traffic is A = λ·s, where s is the mean holding time, but we do not know λ. We get:

    ρ3 = 1 ,   ρ2 = 2 ,   ρ1 = 4 .

Question 2:
The first node is of type M/M/∞ and has the relative state probabilities (12.3):

    p(i) = p(0) · ρ1^i / i! ,   i = 0, 1, . . . .

The second and the third node are both of type M/M/1 and have the relative state probabilities (12.4):

    p(i) = p(0) · ρ^i ,   i = 0, 1, . . . .

Thus we get the following relative state probabilities:

     i    q1(i)   q2(i)   q3(i)
     0      1       1       1
     1      4       2       1
     2      8       4       1
     3    32/3      8       1
     4    32/3     16       1
In the following we multiply the relative state probabilities q1(i) of node 1 by 3 to work with integers.

Question 3:
We now apply the convolution algorithm to find the absolute state probabilities of each node. The node we want to consider should be the last one convolved, i.e. we first aggregate all other nodes into one node by convolution. We find:

Node 1:

     i    q2(i)   q3(i)   q23(i) = q2 ∗ q3   q1(i)   q123(i) = q23 ∗ q1
     0      1       1            1              3           3
     1      2       1            3             12          21
     2      4       1            7             24          81
     3      8       1           15             32         233
     4     16       1           31             32         569

The term of interest is q123(4) = 569, which is made up of the following contributions:

    q123(4) = q1(0)·q23(4) + q1(1)·q23(3) + q1(2)·q23(2) + q1(3)·q23(1) + q1(4)·q23(0)
            = 3·31 + 12·15 + 24·7 + 32·3 + 32·1 = 569 .

From this we get the state probabilities of node 1:

    p1(0) =  93/569 = 0.1634 ,
    p1(1) = 180/569 = 0.3163 ,
    p1(2) = 168/569 = 0.2953 ,
    p1(3) =  96/569 = 0.1687 ,
    p1(4) =  32/569 = 0.0562 .

In a similar way we find the state probabilities of node 2 and node 3.

Node 2:

     i    q1(i)   q3(i)   q13(i) = q1 ∗ q3   q2(i)   q123(i) = q13 ∗ q2
     0      3       1            3              1           3
     1     12       1           15              2          21
     2     24       1           39              4          81
     3     32       1           71              8         233
     4     32       1          103             16         569

The term of interest is q123(4) = 569, which is made up of the following elements:

    q123(4) = q2(0)·q13(4) + q2(1)·q13(3) + q2(2)·q13(2) + q2(3)·q13(1) + q2(4)·q13(0)
            = 1·103 + 2·71 + 4·39 + 8·15 + 16·3 = 569 .

From this we get the state probabilities of node 2:

    p2(0) = 103/569 = 0.1810 ,
    p2(1) = 142/569 = 0.2496 ,
    p2(2) = 156/569 = 0.2742 ,
    p2(3) = 120/569 = 0.2109 ,
    p2(4) =  48/569 = 0.0844 .

Node 3:

     i    q1(i)   q2(i)   q12(i) = q1 ∗ q2   q3(i)   q123(i) = q12 ∗ q3
     0      3       1            3              1           3
     1     12       2           18              1          21
     2     24       4           60              1          81
     3     32       8          152              1         233
     4     32      16          336              1         569

The term of interest is q123(4) = 569, which is made up of the following elements:

    q123(4) = q3(0)·q12(4) + q3(1)·q12(3) + q3(2)·q12(2) + q3(3)·q12(1) + q3(4)·q12(0)
            = 1·336 + 1·152 + 1·60 + 1·18 + 1·3 = 569 .

From this we get the state probabilities of node 3:

    p3(0) = 336/569 = 0.5905 ,
    p3(1) = 152/569 = 0.2671 ,
    p3(2) =  60/569 = 0.1054 ,
    p3(3) =  18/569 = 0.0316 ,
    p3(4) =   3/569 = 0.0053 .
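All three convolutions follow the same pattern, so they can be generated mechanically; a sketch of the convolution algorithm for this network (relative loads 4, 2, 1 as above), shown here for the marginal distribution of node 1:

```python
# Sketch: convolution algorithm for the closed cyclic network of this
# exercise; node 1 is M/M/oo with rho1 = 4, nodes 2 and 3 are M/M/1
# with rho2 = 2 and rho3 = 1, and S = 4 customers circulate.
from fractions import Fraction
from math import factorial

S = 4
q1 = [Fraction(4) ** i / factorial(i) for i in range(S + 1)]  # IS node
q2 = [Fraction(2) ** i for i in range(S + 1)]                 # M/M/1 node
q3 = [Fraction(1) for _ in range(S + 1)]                      # M/M/1 node

def conv(a, b):
    """Truncated convolution of two relative state-probability vectors."""
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(S + 1)]

q23 = conv(q2, q3)                 # aggregate nodes 2 and 3: 1, 3, 7, 15, 31
norm = conv(q23, q1)[S]            # normalization constant
p1 = [q1[i] * q23[S - i] / norm for i in range(S + 1)]  # marginal of node 1
print(p1)  # 93/569, 180/569, 168/569, 96/569, 32/569
```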

Question 4:
The average number of customers in each node is obtained from the state probabilities:

    Lj = Σ_{i=0}^{4} i · pj(i) ,   j = 1, 2, 3 .

We find:

    L1 =  932/569 = 1.6380 ,
    L2 = 1006/569 = 1.7680 ,
    L3 =  338/569 = 0.5940 .

As a control we have the total number of customers: L1 + L2 + L3 = 4.

Question 5:
The utilisation of the nodes is obtained from the state probabilities. The first node is an infinite server with carried traffic

    Y1 = Σ_{i=1}^{4} i · p1(i) ,

and the two other nodes are single-server systems, which have the carried traffic

    Yi = 1 − pi(0) ,   i = 2, 3 .

For a single-server system the carried traffic equals the utilisation. The carried traffic becomes (we know the ratio between the carried traffic in the nodes from Question 1):

    Node 3:   Y3 = 1 − 336/569 = 233/569 = 0.4095 [erlang] ,
    Node 2:   Y2 = 1 − 103/569 = 466/569 = 0.8190 [erlang] ,
    Node 1:   Y1 = 932/569 = 1.6380 [erlang] .

The circulation rate is therefore:

    λ = μ1 · Y1 = 0.5 · 1.6380 = 0.8190 ,
    λ = μ2 · Y2 = 1 · 0.8190   = 0.8190 ,
    λ = μ3 · Y3 = 2 · 0.4095   = 0.8190 [customers/time unit] ,

where as a control we have that all three values are equal. The sojourn (waiting + service) time in each node becomes, by Little's law, using Li from Question 4:

    W1 = L1/λ = 2 [time units],
    W2 = L2/λ = 1006/(569 · 0.8190) = 2.1587 [time units],
    W3 = L3/λ = 338/(569 · 0.8190) = 0.7253 [time units].

The total circulation time becomes:

    R = 2 + 2.1587 + 0.7253 = 4.8840 [time units].

As a control we have: R = 4/λ = 4/0.8190 = 4.8840 [time units].

Question 6:
We use the arrival theorem, which says that the fifth customer observes the system as if he does not belong to the system himself. So the fifth customer will see the mean values calculated in Question 4 for a system with 4 customers.

Updated: 2009-05-05


Exercise 12.7 (Exam 2002)

ENGSET'S MODEL AS A QUEUEING NETWORK

We consider Engset's loss system with S = 6 sources and n = 3 channels. The arrival rate of an idle source is γ = 2 calls per time unit, and the mean service time is 1/μ = 1 (chosen as the time unit).

1. Find the offered traffic.

2. Calculate the time congestion E using a formula recursive in n. Each step in the recursion should be visible.

We consider a closed queueing network with K = 2 nodes and S = 6 customers. Node one is an infinite server (IS) with service rate μ1 per server. Node two is an M/M/3 loss system with service rate μ2 per server, which corresponds to an infinite server (IS) truncated at state 3. The routing is cyclic, so that a customer served in node one goes to node two, and a customer served in node two goes to node one. A customer blocked in node two returns to (i.e. stays in) node one. Assume the circulation rate is λc, and let ρ1 = λc/μ1 and ρ2 = λc/μ2. This is a queueing network with blocking, and it has product form.

3. Find the relative state probabilities of each node as independent systems.

4. Convolve the two nodes into one under the assumption that the total number of customers is 6, and show that the state probabilities p(i), (i = 0, 1, 2, 3) of node two correspond to an Engset loss system with S = 6 sources, n = 3 channels, and β = μ1/μ2.


Solution to Exercise 12.7 (Exam 2002)

Question 1:

As given in the textbook (5.10) & (5.11) we have:

    β = γ/μ = 2 ,

    A = S · β/(1 + β) = 6 · 2/(1 + 2) = 4 [erlang].

Question 2:

Using the recursion formula for E (5.52) we get:

    E_x,S(β) = (S − x + 1)·β·E_(x−1),S(β) / [x + (S − x + 1)·β·E_(x−1),S(β)] ,   E_0,S(β) = 1 .

    E_0,6(2) = 1 ,

    E_1,6(2) = 12/(1 + 12) = 12/13 ,

    E_2,6(2) = 10·(12/13) / (2 + 10·(12/13)) = 60/73 ,

    E_3,6(2) = 8·(60/73) / (3 + 8·(60/73)) = 160/233 = 0.6867 .

We may also use the corresponding recursive formula for the inverse blocking probability I_x,S(β) = 1/E_x,S(β) (5.53) given in the textbook:

    I_x,S(β) = 1 + x/[(S − x + 1)·β] · I_(x−1),S(β) ,   I_0,S(β) = 1 .
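The recursion is easily programmed; a sketch using exact arithmetic, with every step printed as the exercise requires:

```python
# Sketch: the Engset recursion (5.52) with every step visible,
# for S = 6 sources, beta = 2, n = 3 channels (exact arithmetic).
from fractions import Fraction

def engset_E(n, S, beta):
    E = Fraction(1)                   # E_{0,S}(beta) = 1
    for x in range(1, n + 1):
        num = (S - x + 1) * beta * E
        E = num / (x + num)           # recursion step
        print(f"E_{x},{S} = {E}")
    return E

E = engset_E(3, 6, Fraction(2))       # prints the steps 12/13, 60/73, 160/233
```

The final value E = 160/233 ≈ 0.6867 agrees with the recursion above.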

Question 3:
According to the description in the exercise we get the following relative state probabilities:

    State     Node 1        Node 2
      0       1             1
      1       ρ1            ρ2
      2       ρ1^2/2!       ρ2^2/2!
      3       ρ1^3/3!       ρ2^3/3!
      4       ρ1^4/4!       0
      5       ρ1^5/5!       0
      6       ρ1^6/6!       0

Question 4:
By convolution we get, for a network with 6 customers, the following contributions:

    C = (ρ1^6/6!)·1 + (ρ1^5/5!)·ρ2 + (ρ1^4/4!)·(ρ2^2/2!) + (ρ1^3/3!)·(ρ2^3/3!)

      = (ρ1^6/6!) · [ 1 + 6·(ρ2/ρ1) + (6·5)/(1·2)·(ρ2/ρ1)^2 + (6·5·4)/(1·2·3)·(ρ2/ρ1)^3 ]

      = (ρ1^6/6!) · {q(0) + q(1) + q(2) + q(3)}

      = (ρ1^6/6!) · Q ,

where

    ρ2/ρ1 = (λc/μ2)/(λc/μ1) = μ1/μ2   and   Q = q(0) + q(1) + q(2) + q(3) .

Term 1 corresponds to 6 customers in the first node and 0 customers in the second node. Term 2 corresponds to 5 customers in the first node and 1 customer in the second node.


Term 3 corresponds to 4 customers in the first node and 2 customers in the second node. Term 4 corresponds to 3 customers in the first node and 3 customers in the second node. We can at most have 3 customers in the second node.

We notice that the normalised state probabilities correspond to an Engset system (5.27) with S = 6 customers and n = 3 channels:

    p(i) = q(i)/Q = C(6, i)·(μ1/μ2)^i / Σ_{j=0}^{3} C(6, j)·(μ1/μ2)^j .

The offered traffic per idle source, usually called β, is:

    β = μ1/μ2 .

Set up the state transition diagram for this system!
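As a numerical cross-check of Question 4 (a sketch; the concrete rates μ1 = 2 and μ2 = 1 are assumptions chosen here so that β = μ1/μ2 = 2 matches Questions 1–2):

```python
# Sketch: convolving the two truncated IS nodes reproduces the Engset
# distribution. The concrete rates mu1 = 2, mu2 = 1 are assumptions chosen
# so that beta = mu1/mu2 = 2 matches Questions 1-2.
from fractions import Fraction
from math import comb, factorial

S, n = 6, 3
mu1, mu2 = Fraction(2), Fraction(1)
lam_c = Fraction(1)                  # arbitrary: it cancels in the ratios
rho1, rho2 = lam_c / mu1, lam_c / mu2

q1 = [rho1 ** i / factorial(i) for i in range(S + 1)]   # IS node (node one)
q2 = [rho2 ** j / factorial(j) for j in range(n + 1)]   # IS truncated at n

# Convolution terms with i + j = S; j = customers in the loss node:
terms = [q1[S - j] * q2[j] for j in range(n + 1)]
p = [t / sum(terms) for t in terms]

beta = mu1 / mu2
engset = [Fraction(comb(S, j)) * beta ** j for j in range(n + 1)]
engset = [e / sum(engset) for e in engset]
assert p == engset                   # normalized probabilities agree
print(p[n])  # 160/233, the time congestion E found in Question 2
```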


Updated: 2004-05-04


Exercise 12.8 (Exam 2006)

Closed and mixed queueing networks

We consider a closed queueing network with one chain of customers and two nodes. Node one is of type M/M/1, and node two is of type M/M/2. After finishing service in a node a customer passes on to the other node. There is a total of 4 customers in the queueing network. The mean service time is s1 = 1 [time unit] in node one, and s2 = 2 [time units] in node two. We assume the system is in statistical equilibrium. The state of the system is defined as the number of customers in node one.

1. Construct the one-dimensional state transition diagram of the system.

2. Find the state probabilities of the system and the average number of customers in each node.

3. Find from the above state probabilities the intensity by which the customers circulate in the system.

4. Find the mean sojourn times (sojourn time = waiting time + service time) in the two nodes (apply Little's law), and find the mean cycle time of a customer.

5. Apply the convolution algorithm to find the state probabilities of the system and calculate the average number of customers in the two nodes from the convolution terms.

We add an open chain which loads node one with a0 erlang (0 ≤ a0 < 1). We denote the unknown load from the closed chain in node one by a1. The state of node one is now given by p(i, j), where i is the number of customers from the closed chain and j is the number of customers from the open chain.

6. Write down the two-dimensional state probabilities of node one expressed by p(0, 0): p(i, j), 0 ≤ i ≤ 4, 0 ≤ j < ∞.

7. Show, by finding the marginal distribution

       p(i, ·) = Σ_{j=0}^{∞} p(i, j) ,

   and using the identity

       Σ_{j=0}^{∞} ((i + j)!/(i!·j!)) · a^j = 1/(1 − a)^(i+1) ,

   that we can eliminate the open chain by reducing the service rate of node one by a factor (1 − a0).


Solution to Exercise 12.8 (Exam 2006)

Question 1:
The state transition diagram becomes as follows. Observe that the arrival process to node one is the departure process from node two, and that both servers of node two work in states {0, 1, 2}, only one server works in state {3}, and no server of node two works in state {4}.

[State transition diagram: states i = 0, . . . , 4 customers in node one; arrival rate min(4 − i, 2)·μ2 = 1, 1, 1, 1/2 and departure rate μ1 = 1.]

Question 2:
The relative state probabilities q(i) = p(i)/p(0), respectively the absolute state probabilities p(i), for node one become as follows:

    q(0) = 1       p(0) = 2/9
    q(1) = 1       p(1) = 2/9
    q(2) = 1       p(2) = 2/9
    q(3) = 1       p(3) = 2/9
    q(4) = 1/2     p(4) = 1/9

The average number of customers in node one, respectively node two, is:

    L1 = Σ_{i=0}^{4} i · p(i) = 16/9 ,

    L2 = Σ_{i=0}^{4} (4 − i) · p(i) = 20/9 ,

    L1 + L2 = 4 .
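The birth-death computation for this closed network can be sketched as:

```python
# Sketch: state probabilities and mean queue sizes for the closed network
# (state i = customers in node one, node two holds the remaining 4 - i).
from fractions import Fraction

S = 4
mu1, mu2 = Fraction(1), Fraction(1, 2)   # service rate per server

q = [Fraction(1)]
for i in range(S):
    arrival = min(S - i, 2) * mu2        # busy servers of node two feed node one
    q.append(q[-1] * arrival / mu1)      # cut equation; node one departs at mu1

p = [x / sum(q) for x in q]
L1 = sum(i * p[i] for i in range(S + 1))
L2 = S - L1
print(p, L1, L2)  # p = (2, 2, 2, 2, 1)/9, L1 = 16/9, L2 = 20/9
```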

Question 3:
Node one is working with rate μ1 = 1 except in state zero. So the average number of customers flowing through node one per time unit becomes:

    λ1 = 1 · {1 − p(0)} = 7/9 .

If we consider node two we get the same result:

    λ2 = 0 · p(4) + (1/2) · p(3) + 1 · {p(2) + p(1) + p(0)} = 7/9 .

We of course have λ = λ1 = λ2. (This will be fulfilled even if the state probabilities are erroneous.)

Question 4:
The mean sojourn times Wi in the two nodes become (time units):

    W1 = L1/λ = (16/9)/(7/9) = 16/7 ,
    W2 = L2/λ = (20/9)/(7/9) = 20/7 .

The circulation time becomes:

    R = W1 + W2 = 36/7 = 4/λ = 4/(7/9) .

The term 4/λ is obtained by applying Little's law to the total system. The service times in the nodes are given, so the average waiting time becomes W1 − s1 = 9/7 in node one and W2 − s2 = 6/7 in node two.

Question 5:
If we let the relative load of node one be ρ1 = 1, then the relative load of node two becomes ρ2 = 2. Node one is a single-server node, whereas node two has two servers. The relative state probabilities become as follows:

    State   Node 1   Node 2
      0        1        1
      1        1        2
      2        1        2
      3        1        2
      4        1        2

Only the convolution term with 4 customers is of interest:

    q12(4) = 1·2 + 1·2 + 1·2 + 1·2 + 1·1 = 9 .

We notice that this is similar to the relative state probabilities in Question 2, and thus we get the same result for the average number of customers in the two nodes.

Question 6:
The two-dimensional state probabilities become (11.22):

    p(i, j) = ((i + j)!/(i!·j!)) · a1^i · a0^j · p(0, 0) ,   0 ≤ i ≤ 4 ,  0 ≤ j < ∞ .

Question 7:
Summing over all values of j we get:

    p(i, ·) = Σ_{j=0}^{∞} p(i, j)

            = Σ_{j=0}^{∞} ((i + j)!/(i!·j!)) · a1^i · a0^j · p(0, 0)

            = a1^i · p(0, 0) · Σ_{j=0}^{∞} ((i + j)!/(i!·j!)) · a0^j

            = a1^i · p(0, 0) / (1 − a0)^(i+1)

            = (a1/(1 − a0))^i · p(0, 0)/(1 − a0) .
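The identity can be checked numerically for a few values of i (a sketch with the arbitrary test value a = 0.3):

```python
# Sketch: numerical check of the identity
#   sum_{j>=0} C(i+j, j) * a^j = 1/(1-a)^(i+1),  0 <= a < 1,
# at the arbitrary test value a = 0.3.
from math import comb

a = 0.3
for i in range(5):
    lhs = sum(comb(i + j, j) * a ** j for j in range(200))  # truncated series
    rhs = 1.0 / (1.0 - a) ** (i + 1)
    assert abs(lhs - rhs) < 1e-9
print("identity holds for i = 0, ..., 4 at a = 0.3")
```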

We notice that node one has changed its relative load, as compared with other nodes in the network, from 1 to 1/(1 − a0), and that the state probability p(i) of this single-server M/M/1 queueing system is equal to the state probability of a single-server queueing system with offered traffic a1/(1 − a0), i.e. the service rate is reduced by a factor (1 − a0). The state probabilities should add to one as for M/M/1. Thus we find p(0, 0) as follows:

    (p(0, 0)/(1 − a0)) · 1/(1 − a1/(1 − a0)) = 1 ,

    p(0, 0) = (1 − a0) − a1 .
Updated 2008-05-04


Exercise 12.9 (Exam 2009)

Queueing network with three nodes

We consider an open queueing network with three nodes as shown in the figure.

[Figure: node 1 is M/M/3 (n1 = 3, μ1 = 1) with external Poisson arrivals λ1 = 2; routing p1,2 = 1/2 to node 2 (M/M/1, n2 = 1, μ2 = 2) and p1,3 = 1/2 to node 3 (M/M/1, n3 = 1, μ3 = 2); from node 2, p2,3 = 1/2 to node 3 and with probability p2,out = 1/2 a customer leaves; from node 3 all customers leave the network (p3,out = 1).]

Node one is an M/M/3 queueing system with mean service time s1 = 1 [time unit]. Calls arrive from outside to node one according to a Poisson process with rate λ1 = 2 [customers/time unit]. From node one the routing probability is p1,2 = 1/2 to node two, and p1,3 = 1/2 to node three.

Node two is an M/M/1 queueing system with mean service time s2 = 1/2 [time units]. From node two the routing probability is p2,3 = 1/2 to node three, and with probability p2,out = 1/2 a customer leaves the network.

Node three is an M/M/1 queueing system with mean service time s3 = 1/2 [time units]. From node three customers leave the network (p3,out = 1).

1. Find the traffic offered to each node.

2. Find the state probabilities pi(j) for states j = 0, 1, 2, 3, 4 for each node (i = 1, 2, 3), and the state probability p(x1, x2, x3) = p(1, 1, 1) for the whole queueing network.

3. Find the mean waiting time for all customers in each node.

We now close the network by fixing the total number of customers to 4. Thus we only look at states with a total number of 4 customers. (Customers which leave the network in nodes 2 and 3 immediately go to node one.)

4. Find, by convolving the above state probabilities, the state probabilities of node three.

5. Find the carried traffic in node three, and then the carried traffic in the other two nodes.


Solution to Exercise 12.9 (exam 2009)

Question 1:

The solution to the flow balance equations (12.5) is easy to obtain. The arrival rates to the three nodes become:

    λ1 = 2 [customers per time unit] ,
    λ2 = (1/2)·λ1 = 1 [customers per time unit] ,
    λ3 = (1/2)·λ1 + (1/2)·λ2 = 1 + 1/2 = 3/2 [customers per time unit] .

Then the offered traffic to the three nodes becomes:

    A1 = λ1·s1 = 2·1 = 2 [erlang] ,
    A2 = λ2·s2 = 1·(1/2) = 1/2 [erlang] ,
    A3 = λ3·s3 = (3/2)·(1/2) = 3/4 [erlang] .

We notice that for all three nodes Ai < ni, so that the conditions for statistical equilibrium are fulfilled.
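The flow balance substitution can be written out directly; a sketch:

```python
# Sketch: flow balance and offered traffic for the open three-node network.
from fractions import Fraction

lam1 = Fraction(2)                                    # external arrivals
lam2 = Fraction(1, 2) * lam1                          # via p_{1,2} = 1/2
lam3 = Fraction(1, 2) * lam1 + Fraction(1, 2) * lam2  # via p_{1,3}, p_{2,3}

s = [Fraction(1), Fraction(1, 2), Fraction(1, 2)]     # mean service times
A = [lam * si for lam, si in zip((lam1, lam2, lam3), s)]
print(A)  # offered traffic 2, 1/2, 3/4 [erlang]
```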

Question 2:
All three nodes are M/M/n queueing systems with state probabilities given by (9.2), where p(0) is given by (9.4). This is used for node 1 with n = 3. Node 2 and node 3 are M/M/1 single-server systems, which have the simple state probabilities given by (9.30).

    State   Node 1           Node 2           Node 3
      0     p1(0) = 1/9      p2(0) = 1/2      p3(0) = 1/4
      1     p1(1) = 2/9      p2(1) = 1/4      p3(1) = 3/16
      2     p1(2) = 2/9      p2(2) = 1/8      p3(2) = 9/64
      3     p1(3) = 4/27     p2(3) = 1/16     p3(3) = 27/256
      4     p1(4) = 8/81     p2(4) = 1/32     p3(4) = 81/1024

Due to the product-form property of this Jackson network we have:

    p(1, 1, 1) = p1(1) · p2(1) · p3(1) = (2/9) · (1/4) · (3/16) = 1/96 .
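The M/M/n state probabilities with their geometric tail, and the product-form check, can be sketched as:

```python
# Sketch: M/M/n state probabilities with geometric tail (A < n), applied to
# node 1 (n = 3, A = 2) and to the single-server nodes 2 and 3.
from fractions import Fraction
from math import factorial

def mmn_p(A, n, states):
    """First `states` state probabilities of M/M/n with offered traffic A."""
    q = [A ** i / Fraction(factorial(i)) for i in range(n)]
    tail0 = A ** n / Fraction(factorial(n))
    total = sum(q) + tail0 / (1 - A / n)          # closed-form tail sum
    q += [tail0 * (A / n) ** (i - n) for i in range(n, states)]
    return [x / total for x in q]

p1 = mmn_p(Fraction(2), 3, 5)      # 1/9, 2/9, 2/9, 4/27, 8/81
p2 = mmn_p(Fraction(1, 2), 1, 5)   # 1/2, 1/4, 1/8, 1/16, 1/32
p3 = mmn_p(Fraction(3, 4), 1, 5)   # 1/4, 3/16, 9/64, 27/256, 81/1024
print(p1[1] * p2[1] * p3[1])       # product form: prints 1/96
```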

Question 3:
From the Erlang-C model we have, for a system with n servers, the mean waiting time for all customers (9.15):

    W = E2,n(A) · s/(n − A) .

For single-server systems this simplifies to:

    W = V/(1 − A) ,

where (10.4):

    V = λ · m2/2 = λ · s² = A · s .

For the three nodes we get the following mean waiting times Wi, i = 1, 2, 3:

    W1 = E2,3(2) · 1/(3 − 2) = (4/9) · 1 = 4/9 ,
    W2 = (1/2) · (1/2)/(1 − 1/2) = 1/2 ,
    W3 = (3/4) · (1/2)/(1 − 3/4) = 3/2 .
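The Erlang-B recursion, the Erlang-C formula derived from it, and the resulting waiting time for node 1 can be sketched as:

```python
# Sketch: Erlang B by recursion, Erlang C from it, and the mean waiting
# time W = E2,n(A) * s / (n - A) for node 1 (n = 3, A = 2, s = 1).
from fractions import Fraction

def erlang_b(A, n):
    B = Fraction(1)
    for x in range(1, n + 1):
        B = A * B / (x + A * B)          # standard recursion
    return B

def erlang_c(A, n):
    B = erlang_b(A, n)
    return n * B / (n - A * (1 - B))

A, n, s = Fraction(2), 3, Fraction(1)
C = erlang_c(A, n)                       # E2,3(2) = 4/9
W1 = C * s / (n - A)
print(C, W1)  # 4/9 4/9
```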


Question 4:
To get the state probabilities of node three we choose the order of convolution ((1 ∗ 2) ∗ 3):

    State   Node 1   Node 2   Node 1∗2    Node 3
      0      1/9      1/2      1/18        1/2
      1      2/9      1/4      5/36        1/4
      2      2/9      1/8      13/72       1/8
      3      4/27     1/16     71/432      1/16
      4      8/81     1/32     341/2592    1/32

    q123(4) = (144·1 + 360·2 + 468·4 + 422·8 + 341·16)/(2592·32) = 11568/(2592·32) .

The last term in the numerator corresponds to node three being idle:

    p(Node 3 idle) = (341·16)/11568 = 341/723 .

Question 5:
We have:

    Y3 = 1 − p(Node 3 idle) = 1 − 341/723 = 382/723 = 1146/2169 = 0.5284 [erlang] .

As we know the relative load of the nodes we then get:

    Y1 = (8/3) · Y3 = 3056/2169 = 1.4089 [erlang] ,
    Y2 = (2/3) · Y3 = 764/2169 = 0.3522 [erlang] .

Updated: 2009-06-23

Erlang-B formula E1,n(A): blocking probability for number of servers n and offered traffic A. The 40 values of A (0.25 to 10.00 in steps of 0.25) are listed first; each of the numbered blocks that follow (headed 1, 2, . . . , 10) lists E1,n(A) for that n at these values of A.

Offered traffic A:


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00

INDEX

1
0.2000 0.3333 0.4286 0.5000 0.5556 0.6000 0.6364 0.6667 0.6923 0.7143 0.7333 0.7500 0.7647 0.7778 0.7895 0.8000 0.8095 0.8182 0.8261 0.8333 0.8400 0.8462 0.8519 0.8571 0.8621 0.8667 0.8710 0.8750 0.8788 0.8824 0.8857 0.8889 0.8919 0.8947 0.8974 0.9000 0.9024 0.9048 0.9070 0.9091

2
0.0244 0.0769 0.1385 0.2000 0.2577 0.3103 0.3577 0.4000 0.4378 0.4717 0.5021 0.5294 0.5541 0.5765 0.5968 0.6154 0.6324 0.6480 0.6624 0.6757 0.6880 0.6994 0.7101 0.7200 0.7293 0.7380 0.7462 0.7538 0.7611 0.7679 0.7744 0.7805 0.7863 0.7918 0.7970 0.8020 0.8067 0.8112 0.8155 0.8197

3
0.0020 0.0127 0.0335 0.0625 0.0970 0.1343 0.1726 0.2105 0.2472 0.2822 0.3152 0.3462 0.3751 0.4021 0.4273 0.4507 0.4725 0.4929 0.5119 0.5297 0.5463 0.5618 0.5764 0.5902 0.6031 0.6152 0.6267 0.6375 0.6478 0.6575 0.6667 0.6755 0.6838 0.6917 0.6992 0.7064 0.7133 0.7198 0.7261 0.7321

4
0.0001 0.0016 0.0062 0.0154 0.0294 0.0480 0.0702 0.0952 0.1221 0.1499 0.1781 0.2061 0.2336 0.2603 0.2860 0.3107 0.3343 0.3567 0.3781 0.3983 0.4176 0.4358 0.4531 0.4696 0.4851 0.4999 0.5140 0.5273 0.5400 0.5521 0.5637 0.5746 0.5851 0.5951 0.6047 0.6138 0.6226 0.6309 0.6390 0.6467

5
0.0000 0.0002 0.0009 0.0031 0.0073 0.0142 0.0240 0.0367 0.0521 0.0697 0.0892 0.1101 0.1318 0.1541 0.1766 0.1991 0.2213 0.2430 0.2643 0.2849 0.3048 0.3241 0.3426 0.3604 0.3775 0.3939 0.4096 0.4247 0.4392 0.4530 0.4663 0.4790 0.4912 0.5029 0.5141 0.5249 0.5353 0.5452 0.5548 0.5640

6
0.0000 0.0000 0.0001 0.0005 0.0015 0.0035 0.0069 0.0121 0.0192 0.0282 0.0393 0.0522 0.0666 0.0825 0.0994 0.1172 0.1355 0.1542 0.1730 0.1918 0.2106 0.2290 0.2472 0.2649 0.2822 0.2991 0.3155 0.3313 0.3467 0.3615 0.3759 0.3898 0.4031 0.4160 0.4285 0.4405 0.4521 0.4633 0.4741 0.4845

7
0.0000 0.0000 0.0000 0.0001 0.0003 0.0008 0.0017 0.0034 0.0061 0.0100 0.0152 0.0219 0.0300 0.0396 0.0506 0.0627 0.0760 0.0902 0.1051 0.1205 0.1364 0.1525 0.1688 0.1851 0.2013 0.2174 0.2332 0.2489 0.2642 0.2792 0.2939 0.3082 0.3221 0.3356 0.3488 0.3616 0.3740 0.3860 0.3977 0.4090

8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0004 0.0009 0.0017 0.0031 0.0052 0.0081 0.0120 0.0170 0.0232 0.0304 0.0388 0.0483 0.0587 0.0700 0.0821 0.0949 0.1082 0.1219 0.1359 0.1501 0.1644 0.1788 0.1932 0.2075 0.2216 0.2356 0.2493 0.2629 0.2761 0.2892 0.3019 0.3143 0.3265 0.3383

9
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0009 0.0016 0.0027 0.0043 0.0066 0.0096 0.0133 0.0180 0.0236 0.0301 0.0375 0.0457 0.0548 0.0646 0.0751 0.0862 0.0978 0.1098 0.1221 0.1347 0.1474 0.1602 0.1731 0.1860 0.1989 0.2117 0.2243 0.2368 0.2491 0.2613 0.2732

10
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0008 0.0014 0.0023 0.0036 0.0053 0.0076 0.0105 0.0141 0.0184 0.0234 0.0293 0.0358 0.0431 0.0511 0.0598 0.0690 0.0787 0.0889 0.0995 0.1105 0.1217 0.1331 0.1446 0.1563 0.1680 0.1797 0.1914 0.2030 0.2146

Erlang-B formula E1,n (A)
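The tabulated values can be regenerated with the standard Erlang-B recursion; a sketch, checked here against the entry for A = 10.00, n = 10:

```python
# Sketch: regenerating table entries with the standard Erlang-B recursion;
# checked against the tabulated E_{1,10}(10.00) = 0.2146.
def erlang_b(A: float, n: int) -> float:
    B = 1.0
    for x in range(1, n + 1):
        B = A * B / (x + A * B)
    return B

print(round(erlang_b(10.0, 10), 4))  # 0.2146
```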

Erlang-B formula E1,n(A), continued: n = 1, . . . , 10 (blocks headed 1–10 below), offered traffic A = 10.25, . . . , 20.00.

Offered traffic A:


10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00


1
0.9111 0.9130 0.9149 0.9167 0.9184 0.9200 0.9216 0.9231 0.9245 0.9259 0.9273 0.9286 0.9298 0.9310 0.9322 0.9333 0.9344 0.9355 0.9365 0.9375 0.9385 0.9394 0.9403 0.9412 0.9420 0.9429 0.9437 0.9444 0.9452 0.9459 0.9467 0.9474 0.9481 0.9487 0.9494 0.9500 0.9506 0.9512 0.9518 0.9524

2
0.8236 0.8274 0.8310 0.8345 0.8378 0.8410 0.8441 0.8471 0.8499 0.8527 0.8553 0.8579 0.8603 0.8627 0.8650 0.8673 0.8694 0.8715 0.8735 0.8755 0.8774 0.8792 0.8810 0.8828 0.8844 0.8861 0.8877 0.8892 0.8907 0.8922 0.8936 0.8950 0.8964 0.8977 0.8990 0.9002 0.9015 0.9027 0.9038 0.9050

3
0.7378 0.7433 0.7486 0.7537 0.7586 0.7633 0.7678 0.7721 0.7763 0.7804 0.7843 0.7880 0.7917 0.7952 0.7986 0.8019 0.8051 0.8081 0.8111 0.8140 0.8169 0.8196 0.8222 0.8248 0.8273 0.8297 0.8321 0.8344 0.8366 0.8388 0.8410 0.8430 0.8450 0.8470 0.8489 0.8508 0.8526 0.8544 0.8561 0.8578

4
0.6541 0.6612 0.6680 0.6745 0.6809 0.6869 0.6928 0.6985 0.7039 0.7092 0.7143 0.7192 0.7239 0.7285 0.7330 0.7373 0.7415 0.7455 0.7494 0.7532 0.7569 0.7605 0.7640 0.7674 0.7707 0.7739 0.7770 0.7800 0.7830 0.7859 0.7887 0.7914 0.7940 0.7966 0.7992 0.8016 0.8040 0.8064 0.8087 0.8109

5
0.5728 0.5813 0.5895 0.5974 0.6050 0.6124 0.6195 0.6264 0.6330 0.6394 0.6456 0.6516 0.6574 0.6630 0.6684 0.6737 0.6788 0.6837 0.6886 0.6932 0.6978 0.7022 0.7065 0.7106 0.7147 0.7186 0.7225 0.7262 0.7298 0.7334 0.7368 0.7402 0.7435 0.7467 0.7498 0.7529 0.7558 0.7587 0.7616 0.7644

6
0.4946 0.5043 0.5137 0.5227 0.5315 0.5400 0.5482 0.5561 0.5638 0.5712 0.5784 0.5854 0.5921 0.5987 0.6050 0.6112 0.6172 0.6230 0.6286 0.6341 0.6394 0.6446 0.6497 0.6546 0.6594 0.6640 0.6685 0.6729 0.6772 0.6814 0.6855 0.6895 0.6934 0.6972 0.7009 0.7045 0.7080 0.7115 0.7148 0.7181

7
0.4200 0.4307 0.4410 0.4510 0.4607 0.4701 0.4792 0.4880 0.4966 0.5049 0.5130 0.5209 0.5285 0.5359 0.5431 0.5500 0.5568 0.5634 0.5698 0.5761 0.5821 0.5880 0.5938 0.5994 0.6048 0.6102 0.6153 0.6204 0.6253 0.6301 0.6348 0.6394 0.6438 0.6482 0.6525 0.6566 0.6607 0.6647 0.6685 0.6723

8
0.3499 0.3611 0.3721 0.3828 0.3931 0.4033 0.4131 0.4227 0.4320 0.4410 0.4498 0.4584 0.4667 0.4749 0.4828 0.4905 0.4979 0.5052 0.5123 0.5193 0.5260 0.5326 0.5390 0.5452 0.5513 0.5572 0.5630 0.5687 0.5742 0.5795 0.5848 0.5899 0.5949 0.5998 0.6046 0.6093 0.6139 0.6183 0.6227 0.6270

9
0.2849 0.2964 0.3077 0.3187 0.3295 0.3400 0.3504 0.3604 0.3703 0.3799 0.3892 0.3984 0.4073 0.4160 0.4245 0.4328 0.4408 0.4487 0.4564 0.4639 0.4713 0.4784 0.4854 0.4922 0.4988 0.5053 0.5117 0.5179 0.5239 0.5298 0.5356 0.5413 0.5468 0.5522 0.5574 0.5626 0.5677 0.5726 0.5774 0.5822

10
0.2260 0.2374 0.2486 0.2596 0.2704 0.2811 0.2916 0.3019 0.3120 0.3220 0.3317 0.3412 0.3505 0.3596 0.3686 0.3773 0.3858 0.3942 0.4024 0.4103 0.4182 0.4258 0.4333 0.4406 0.4477 0.4547 0.4615 0.4682 0.4747 0.4811 0.4874 0.4935 0.4995 0.5053 0.5111 0.5167 0.5222 0.5275 0.5328 0.5380

Erlang's B-formula E1,n(A)
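The tabulated values of E1,n(A) are easily reproduced numerically. The sketch below (a minimal Python illustration, not part of the tables themselves) uses the well-known recursion E1,0(A) = 1, E1,n(A) = A E1,n-1(A) / (n + A E1,n-1(A)), which is numerically stable for any A and n:

```python
def erlang_b(n, a):
    """Erlang's B-formula E_{1,n}(A): blocking probability for n servers
    offered a erlang, via the recursion
    E_{1,0} = 1,  E_{1,k} = a*E_{1,k-1} / (k + a*E_{1,k-1})."""
    e = 1.0
    for k in range(1, n + 1):
        e = a * e / (k + a * e)
    return e
```

For example, erlang_b(11, 10.0) yields 0.1632 (to four decimals), in agreement with the table entry for n = 11 at A = 10.00.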

Number of Servers n (rows) vs. Offered Traffic A (columns):


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00


11
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0007 0.0012 0.0019 0.0029 0.0043 0.0060 0.0083 0.0111 0.0144 0.0184 0.0230 0.0282 0.0341 0.0406 0.0477 0.0554 0.0636 0.0722 0.0813 0.0907 0.1005 0.1106 0.1208 0.1313 0.1418 0.1525 0.1632

12
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0004 0.0006 0.0010 0.0016 0.0024 0.0034 0.0048 0.0066 0.0087 0.0114 0.0145 0.0181 0.0223 0.0271 0.0324 0.0382 0.0446 0.0514 0.0587 0.0665 0.0746 0.0831 0.0919 0.1010 0.1103 0.1197

13
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0006 0.0009 0.0013 0.0019 0.0028 0.0038 0.0052 0.0069 0.0090 0.0115 0.0144 0.0177 0.0216 0.0259 0.0307 0.0359 0.0416 0.0478 0.0544 0.0614 0.0687 0.0764 0.0843

14
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0005 0.0007 0.0011 0.0016 0.0022 0.0031 0.0042 0.0055 0.0071 0.0091 0.0114 0.0141 0.0172 0.0207 0.0247 0.0290 0.0338 0.0390 0.0445 0.0505 0.0568

15
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0004 0.0006 0.0009 0.0013 0.0018 0.0025 0.0033 0.0044 0.0057 0.0072 0.0091 0.0113 0.0138 0.0166 0.0199 0.0235 0.0274 0.0318 0.0365

16
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0005 0.0007 0.0010 0.0014 0.0020 0.0027 0.0035 0.0045 0.0058 0.0073 0.0090 0.0111 0.0134 0.0160 0.0190 0.0223

17
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0004 0.0006 0.0008 0.0012 0.0016 0.0021 0.0028 0.0036 0.0046 0.0058 0.0072 0.0089 0.0108 0.0129

18
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0002 0.0003 0.0005 0.0007 0.0009 0.0013 0.0017 0.0022 0.0029 0.0037 0.0047 0.0058 0.0071

19
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0001 0.0002 0.0003 0.0004 0.0006 0.0008 0.0010 0.0014 0.0018 0.0023 0.0030 0.0037

20
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0002 0.0003 0.0005 0.0006 0.0008 0.0011 0.0014 0.0019

Erlang's B-formula E1,n(A)

Number of Servers n (rows) vs. Offered Traffic A (columns):


10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00


11
0.1740 0.1847 0.1954 0.2061 0.2167 0.2271 0.2375 0.2478 0.2579 0.2679 0.2777 0.2874 0.2969 0.3062 0.3154 0.3244 0.3333 0.3419 0.3504 0.3588 0.3670 0.3750 0.3828 0.3905 0.3981 0.4055 0.4127 0.4198 0.4268 0.4336 0.4402 0.4468 0.4532 0.4594 0.4656 0.4716 0.4775 0.4833 0.4889 0.4945

12
0.1294 0.1391 0.1490 0.1589 0.1688 0.1788 0.1887 0.1986 0.2084 0.2182 0.2278 0.2374 0.2469 0.2562 0.2655 0.2746 0.2835 0.2924 0.3011 0.3096 0.3180 0.3263 0.3344 0.3424 0.3503 0.3580 0.3655 0.3729 0.3802 0.3874 0.3944 0.4012 0.4080 0.4146 0.4211 0.4275 0.4337 0.4399 0.4459 0.4518

13
0.0926 0.1010 0.1097 0.1185 0.1275 0.1365 0.1457 0.1549 0.1641 0.1734 0.1826 0.1919 0.2010 0.2102 0.2192 0.2282 0.2371 0.2459 0.2546 0.2632 0.2717 0.2801 0.2883 0.2965 0.3045 0.3124 0.3202 0.3278 0.3353 0.3427 0.3500 0.3571 0.3642 0.3711 0.3779 0.3845 0.3911 0.3975 0.4038 0.4101

14
0.0635 0.0704 0.0777 0.0852 0.0929 0.1009 0.1090 0.1172 0.1256 0.1341 0.1426 0.1512 0.1598 0.1685 0.1772 0.1858 0.1944 0.2030 0.2115 0.2200 0.2284 0.2367 0.2449 0.2531 0.2611 0.2691 0.2770 0.2847 0.2924 0.2999 0.3074 0.3147 0.3219 0.3290 0.3360 0.3429 0.3497 0.3564 0.3629 0.3694

15
0.0416 0.0470 0.0527 0.0588 0.0651 0.0718 0.0786 0.0857 0.0930 0.1005 0.1081 0.1159 0.1237 0.1317 0.1397 0.1478 0.1559 0.1640 0.1722 0.1803 0.1884 0.1965 0.2046 0.2126 0.2205 0.2284 0.2362 0.2440 0.2516 0.2592 0.2667 0.2741 0.2814 0.2887 0.2958 0.3028 0.3098 0.3166 0.3233 0.3300

16
0.0259 0.0299 0.0342 0.0389 0.0438 0.0491 0.0546 0.0604 0.0665 0.0728 0.0793 0.0860 0.0929 0.1000 0.1072 0.1145 0.1219 0.1294 0.1370 0.1446 0.1523 0.1599 0.1676 0.1753 0.1830 0.1906 0.1983 0.2059 0.2134 0.2209 0.2283 0.2357 0.2430 0.2502 0.2574 0.2645 0.2715 0.2784 0.2853 0.2920

17
0.0154 0.0181 0.0212 0.0245 0.0282 0.0321 0.0364 0.0409 0.0457 0.0508 0.0561 0.0617 0.0675 0.0736 0.0798 0.0862 0.0927 0.0994 0.1062 0.1132 0.1202 0.1273 0.1344 0.1416 0.1489 0.1561 0.1634 0.1707 0.1780 0.1853 0.1925 0.1997 0.2069 0.2140 0.2211 0.2282 0.2351 0.2421 0.2489 0.2557

18
0.0087 0.0105 0.0125 0.0148 0.0173 0.0201 0.0232 0.0265 0.0302 0.0341 0.0383 0.0427 0.0474 0.0523 0.0574 0.0628 0.0684 0.0741 0.0801 0.0862 0.0924 0.0988 0.1052 0.1118 0.1185 0.1252 0.1320 0.1388 0.1457 0.1526 0.1595 0.1665 0.1734 0.1803 0.1872 0.1941 0.2009 0.2078 0.2145 0.2213

19
0.0047 0.0058 0.0070 0.0085 0.0101 0.0120 0.0141 0.0165 0.0191 0.0219 0.0250 0.0284 0.0320 0.0358 0.0399 0.0442 0.0488 0.0536 0.0585 0.0637 0.0690 0.0746 0.0802 0.0861 0.0920 0.0981 0.1042 0.1105 0.1168 0.1232 0.1297 0.1362 0.1428 0.1493 0.1559 0.1625 0.1691 0.1757 0.1823 0.1889

20
0.0024 0.0030 0.0038 0.0046 0.0057 0.0069 0.0082 0.0098 0.0116 0.0135 0.0157 0.0181 0.0207 0.0236 0.0267 0.0300 0.0336 0.0374 0.0414 0.0456 0.0500 0.0546 0.0594 0.0644 0.0696 0.0749 0.0803 0.0859 0.0915 0.0973 0.1032 0.1092 0.1153 0.1214 0.1275 0.1338 0.1400 0.1463 0.1526 0.1589

Erlang's B-formula E1,n(A)

Number of Servers n (rows) vs. Offered Traffic A (columns):


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00


0
0.2000 0.3333 0.4286 0.5000 0.5556 0.6000 0.6364 0.6667 0.6923 0.7143 0.7333 0.7500 0.7647 0.7778 0.7895 0.8000 0.8095 0.8182 0.8261 0.8333 0.8400 0.8462 0.8519 0.8571 0.8621 0.8667 0.8710 0.8750 0.8788 0.8824 0.8857 0.8889 0.8919 0.8947 0.8974 0.9000 0.9024 0.9048 0.9070 0.9091

1
0.0439 0.1282 0.2176 0.3000 0.3723 0.4345 0.4877 0.5333 0.5726 0.6065 0.6360 0.6618 0.6845 0.7046 0.7225 0.7385 0.7528 0.7658 0.7776 0.7883 0.7981 0.8070 0.8153 0.8229 0.8299 0.8364 0.8424 0.8481 0.8533 0.8583 0.8629 0.8672 0.8713 0.8751 0.8788 0.8822 0.8854 0.8885 0.8914 0.8942

2
0.0056 0.0321 0.0788 0.1375 0.2009 0.2640 0.3238 0.3789 0.4289 0.4738 0.5140 0.5498 0.5817 0.6103 0.6358 0.6587 0.6793 0.6979 0.7148 0.7301 0.7440 0.7567 0.7683 0.7790 0.7888 0.7979 0.8063 0.8141 0.8213 0.8281 0.8343 0.8402 0.8457 0.8509 0.8557 0.8603 0.8646 0.8686 0.8724 0.8761

3
0.0005 0.0055 0.0204 0.0471 0.0845 0.1296 0.1792 0.2306 0.2815 0.3306 0.3770 0.4201 0.4599 0.4964 0.5298 0.5601 0.5877 0.6128 0.6357 0.6566 0.6756 0.6930 0.7090 0.7236 0.7370 0.7494 0.7608 0.7714 0.7812 0.7903 0.7987 0.8066 0.8140 0.8208 0.8273 0.8333 0.8389 0.8443 0.8493 0.8540

4
0.0000 0.0007 0.0040 0.0123 0.0276 0.0507 0.0809 0.1171 0.1575 0.2005 0.2444 0.2882 0.3307 0.3716 0.4102 0.4465 0.4802 0.5116 0.5406 0.5674 0.5920 0.6148 0.6357 0.6550 0.6728 0.6892 0.7043 0.7184 0.7314 0.7434 0.7546 0.7650 0.7747 0.7838 0.7922 0.8001 0.8075 0.8145 0.8210 0.8271

5
0.0000 0.0001 0.0006 0.0026 0.0072 0.0160 0.0298 0.0492 0.0741 0.1037 0.1373 0.1737 0.2118 0.2507 0.2895 0.3276 0.3645 0.3998 0.4334 0.4651 0.4949 0.5227 0.5487 0.5729 0.5954 0.6163 0.6357 0.6537 0.6705 0.6861 0.7006 0.7141 0.7266 0.7383 0.7493 0.7595 0.7691 0.7781 0.7865 0.7944

6
0.0000 0.0000 0.0001 0.0004 0.0016 0.0042 0.0091 0.0173 0.0293 0.0456 0.0662 0.0909 0.1190 0.1501 0.1832 0.2177 0.2528 0.2880 0.3227 0.3566 0.3894 0.4209 0.4508 0.4792 0.5060 0.5313 0.5550 0.5772 0.5980 0.6175 0.6357 0.6527 0.6686 0.6835 0.6974 0.7104 0.7226 0.7340 0.7447 0.7547

7
0.0000 0.0000 0.0000 0.0001 0.0003 0.0009 0.0024 0.0052 0.0099 0.0172 0.0275 0.0412 0.0584 0.0790 0.1028 0.1293 0.1581 0.1885 0.2201 0.2524 0.2847 0.3168 0.3484 0.3791 0.4087 0.4372 0.4644 0.4903 0.5149 0.5382 0.5601 0.5808 0.6002 0.6185 0.6357 0.6518 0.6670 0.6813 0.6946 0.7072

8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0002 0.0005 0.0013 0.0029 0.0056 0.0099 0.0163 0.0251 0.0366 0.0510 0.0683 0.0885 0.1112 0.1361 0.1630 0.1912 0.2205 0.2503 0.2804 0.3104 0.3399 0.3689 0.3970 0.4243 0.4504 0.4755 0.4994 0.5222 0.5438 0.5643 0.5837 0.6021 0.6194 0.6357 0.6511

9
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0003 0.0007 0.0016 0.0032 0.0057 0.0095 0.0150 0.0224 0.0321 0.0442 0.0588 0.0759 0.0954 0.1170 0.1405 0.1656 0.1920 0.2193 0.2472 0.2754 0.3035 0.3314 0.3589 0.3857 0.4118 0.4371 0.4614 0.4847 0.5070 0.5283 0.5486 0.5679 0.5863

Erlang-B improvement function: F1,n(A) = A [E1,n(A) - E1,n+1(A)]
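F1,n(A) is the increase in carried traffic obtained by adding server number n+1 to a group of n servers. It follows directly from two evaluations of the B-formula; a minimal Python sketch (the erlang_b helper restates the standard B-recursion):

```python
def erlang_b(n, a):
    # Erlang's B-formula by the standard recursion E_{1,k} = a*E/(k + a*E)
    e = 1.0
    for k in range(1, n + 1):
        e = a * e / (k + a * e)
    return e

def improvement(n, a):
    """Erlang-B improvement function F_{1,n}(A) = A*[E_{1,n}(A) - E_{1,n+1}(A)]:
    additional traffic carried when server n+1 is added to an n-server group."""
    return a * (erlang_b(n, a) - erlang_b(n + 1, a))
```

For example, improvement(9, 10.0) gives 0.5863 (to four decimals), matching the table entry for n = 9 at A = 10.00; improvement(0, 1.0) gives exactly 0.5, matching the entry for n = 0 at A = 1.00.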

Number of Servers n (rows) vs. Offered Traffic A (columns):


10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00


0
0.9111 0.9130 0.9149 0.9167 0.9184 0.9200 0.9216 0.9231 0.9245 0.9259 0.9273 0.9286 0.9298 0.9310 0.9322 0.9333 0.9344 0.9355 0.9365 0.9375 0.9385 0.9394 0.9403 0.9412 0.9420 0.9429 0.9437 0.9444 0.9452 0.9459 0.9467 0.9474 0.9481 0.9487 0.9494 0.9500 0.9506 0.9512 0.9518 0.9524

1
0.8968 0.8993 0.9017 0.9040 0.9062 0.9083 0.9103 0.9122 0.9141 0.9158 0.9175 0.9191 0.9207 0.9222 0.9237 0.9251 0.9264 0.9277 0.9290 0.9302 0.9314 0.9325 0.9336 0.9347 0.9357 0.9367 0.9377 0.9386 0.9395 0.9404 0.9413 0.9421 0.9429 0.9437 0.9445 0.9453 0.9460 0.9467 0.9474 0.9481

2
0.8795 0.8828 0.8859 0.8888 0.8916 0.8943 0.8969 0.8993 0.9016 0.9038 0.9060 0.9080 0.9100 0.9119 0.9137 0.9154 0.9171 0.9187 0.9202 0.9217 0.9232 0.9246 0.9259 0.9272 0.9285 0.9297 0.9308 0.9320 0.9331 0.9341 0.9352 0.9362 0.9371 0.9381 0.9390 0.9399 0.9408 0.9416 0.9424 0.9432

3
0.8585 0.8627 0.8667 0.8705 0.8741 0.8775 0.8808 0.8838 0.8868 0.8896 0.8923 0.8949 0.8973 0.8997 0.9019 0.9041 0.9061 0.9081 0.9100 0.9119 0.9136 0.9153 0.9170 0.9185 0.9201 0.9215 0.9229 0.9243 0.9256 0.9269 0.9281 0.9293 0.9305 0.9316 0.9327 0.9338 0.9348 0.9358 0.9368 0.9377

4
0.8329 0.8383 0.8435 0.8483 0.8529 0.8573 0.8614 0.8653 0.8691 0.8726 0.8760 0.8792 0.8823 0.8852 0.8880 0.8907 0.8932 0.8957 0.8980 0.9003 0.9025 0.9045 0.9065 0.9085 0.9103 0.9121 0.9138 0.9155 0.9171 0.9186 0.9201 0.9215 0.9229 0.9243 0.9256 0.9268 0.9280 0.9292 0.9304 0.9315

5
0.8018 0.8088 0.8154 0.8216 0.8274 0.8330 0.8382 0.8432 0.8479 0.8523 0.8566 0.8606 0.8644 0.8681 0.8716 0.8749 0.8780 0.8811 0.8840 0.8867 0.8894 0.8919 0.8944 0.8967 0.8990 0.9011 0.9032 0.9052 0.9072 0.9090 0.9108 0.9125 0.9142 0.9158 0.9173 0.9188 0.9203 0.9217 0.9230 0.9244

6
0.7642 0.7731 0.7814 0.7893 0.7967 0.8037 0.8103 0.8165 0.8224 0.8280 0.8334 0.8384 0.8432 0.8477 0.8520 0.8562 0.8601 0.8638 0.8674 0.8708 0.8741 0.8772 0.8802 0.8830 0.8858 0.8884 0.8909 0.8933 0.8957 0.8979 0.9001 0.9021 0.9041 0.9060 0.9079 0.9097 0.9114 0.9131 0.9147 0.9162

7
0.7191 0.7302 0.7407 0.7505 0.7598 0.7686 0.7769 0.7847 0.7921 0.7991 0.8057 0.8119 0.8179 0.8235 0.8289 0.8340 0.8388 0.8434 0.8478 0.8520 0.8560 0.8598 0.8635 0.8670 0.8703 0.8735 0.8766 0.8795 0.8823 0.8850 0.8876 0.8901 0.8925 0.8948 0.8970 0.8992 0.9012 0.9032 0.9051 0.9070

8
0.6656 0.6793 0.6923 0.7045 0.7160 0.7268 0.7371 0.7468 0.7559 0.7646 0.7728 0.7805 0.7879 0.7948 0.8014 0.8077 0.8137 0.8194 0.8248 0.8299 0.8348 0.8395 0.8439 0.8482 0.8522 0.8561 0.8598 0.8634 0.8668 0.8700 0.8731 0.8761 0.8790 0.8818 0.8844 0.8870 0.8895 0.8918 0.8941 0.8963

9
0.6036 0.6201 0.6357 0.6505 0.6644 0.6777 0.6902 0.7020 0.7132 0.7238 0.7339 0.7434 0.7524 0.7609 0.7690 0.7767 0.7840 0.7910 0.7976 0.8038 0.8098 0.8155 0.8209 0.8261 0.8310 0.8357 0.8402 0.8445 0.8486 0.8526 0.8563 0.8599 0.8634 0.8667 0.8699 0.8729 0.8759 0.8787 0.8814 0.8840

Erlang-B improvement function: F1,n(A) = A [E1,n(A) - E1,n+1(A)]

Number of Servers n (rows) vs. Offered Traffic A (columns):


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00


10
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0004 0.0009 0.0018 0.0032 0.0055 0.0088 0.0135 0.0198 0.0280 0.0382 0.0505 0.0650 0.0816 0.1003 0.1209 0.1431 0.1668 0.1915 0.2172 0.2434 0.2699 0.2965 0.3230 0.3491 0.3748 0.3999 0.4243 0.4479 0.4706 0.4925 0.5135

11
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0005 0.0010 0.0018 0.0031 0.0051 0.0080 0.0120 0.0174 0.0242 0.0328 0.0432 0.0555 0.0698 0.0859 0.1038 0.1234 0.1445 0.1668 0.1901 0.2143 0.2391 0.2642 0.2894 0.3146 0.3396 0.3642 0.3884 0.4120 0.4349

12
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0003 0.0005 0.0010 0.0018 0.0030 0.0047 0.0072 0.0106 0.0151 0.0209 0.0281 0.0369 0.0473 0.0595 0.0734 0.0890 0.1061 0.1248 0.1448 0.1659 0.1881 0.2109 0.2344 0.2582 0.2823 0.3064 0.3303 0.3540

13
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0006 0.0010 0.0017 0.0027 0.0042 0.0064 0.0093 0.0131 0.0179 0.0240 0.0314 0.0403 0.0507 0.0626 0.0761 0.0911 0.1075 0.1254 0.1444 0.1645 0.1855 0.2072 0.2295 0.2522 0.2752

14
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0006 0.0010 0.0016 0.0025 0.0038 0.0056 0.0080 0.0112 0.0153 0.0205 0.0267 0.0342 0.0431 0.0533 0.0650 0.0780 0.0925 0.1082 0.1253 0.1434 0.1625 0.1825 0.2032

15
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0005 0.0009 0.0014 0.0022 0.0033 0.0049 0.0069 0.0096 0.0131 0.0174 0.0227 0.0290 0.0366 0.0453 0.0554 0.0667 0.0793 0.0932 0.1084 0.1246 0.1420

16
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0005 0.0008 0.0013 0.0020 0.0029 0.0042 0.0060 0.0082 0.0111 0.0148 0.0192 0.0246 0.0310 0.0385 0.0471 0.0569 0.0679 0.0801 0.0935

17
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0005 0.0008 0.0012 0.0017 0.0026 0.0037 0.0051 0.0070 0.0095 0.0125 0.0163 0.0208 0.0263 0.0326 0.0400 0.0485 0.0581

18
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0004 0.0007 0.0010 0.0015 0.0022 0.0031 0.0044 0.0060 0.0080 0.0106 0.0138 0.0176 0.0222 0.0276 0.0340

19
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0002 0.0004 0.0006 0.0009 0.0013 0.0019 0.0027 0.0037 0.0051 0.0068 0.0090 0.0116 0.0149 0.0188

Erlang-B improvement function: F1,n(A) = A [E1,n(A) - E1,n+1(A)]

Number of Servers n (rows) vs. Offered Traffic A (columns):


10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00


10
0.5336 0.5528 0.5710 0.5885 0.6050 0.6208 0.6357 0.6499 0.6634 0.6762 0.6883 0.6998 0.7107 0.7211 0.7309 0.7403 0.7492 0.7576 0.7656 0.7732 0.7805 0.7874 0.7940 0.8002 0.8062 0.8119 0.8173 0.8225 0.8275 0.8322 0.8367 0.8411 0.8452 0.8492 0.8530 0.8567 0.8602 0.8636 0.8668 0.8699

11
0.4571 0.4786 0.4992 0.5191 0.5381 0.5563 0.5738 0.5904 0.6062 0.6213 0.6357 0.6494 0.6625 0.6748 0.6866 0.6978 0.7085 0.7187 0.7283 0.7375 0.7462 0.7545 0.7625 0.7700 0.7772 0.7841 0.7906 0.7969 0.8028 0.8085 0.8140 0.8192 0.8241 0.8289 0.8335 0.8378 0.8420 0.8460 0.8499 0.8536

12
0.3773 0.4002 0.4225 0.4442 0.4652 0.4855 0.5051 0.5240 0.5421 0.5595 0.5762 0.5921 0.6073 0.6219 0.6357 0.6490 0.6616 0.6736 0.6851 0.6961 0.7065 0.7164 0.7259 0.7349 0.7435 0.7517 0.7596 0.7670 0.7742 0.7810 0.7875 0.7937 0.7997 0.8054 0.8108 0.8160 0.8210 0.8258 0.8304 0.8348

13
0.2982 0.3212 0.3441 0.3666 0.3888 0.4105 0.4317 0.4523 0.4723 0.4916 0.5103 0.5283 0.5457 0.5623 0.5783 0.5936 0.6083 0.6223 0.6357 0.6486 0.6608 0.6726 0.6837 0.6944 0.7046 0.7144 0.7237 0.7325 0.7410 0.7491 0.7569 0.7643 0.7714 0.7781 0.7846 0.7908 0.7967 0.8024 0.8078 0.8130

14
0.2245 0.2462 0.2682 0.2903 0.3124 0.3344 0.3562 0.3778 0.3989 0.4196 0.4398 0.4595 0.4786 0.4971 0.5150 0.5322 0.5488 0.5649 0.5802 0.5950 0.6092 0.6227 0.6358 0.6482 0.6601 0.6716 0.6825 0.6929 0.7029 0.7125 0.7216 0.7303 0.7387 0.7467 0.7544 0.7617 0.7687 0.7754 0.7819 0.7880

15
0.1602 0.1793 0.1991 0.2194 0.2402 0.2612 0.2825 0.3038 0.3251 0.3462 0.3671 0.3877 0.4080 0.4278 0.4471 0.4659 0.4842 0.5020 0.5191 0.5357 0.5517 0.5671 0.5820 0.5963 0.6100 0.6231 0.6358 0.6479 0.6595 0.6706 0.6813 0.6915 0.7013 0.7107 0.7197 0.7283 0.7365 0.7444 0.7520 0.7593

16
0.1080 0.1236 0.1402 0.1576 0.1759 0.1948 0.2142 0.2342 0.2544 0.2748 0.2954 0.3160 0.3365 0.3568 0.3769 0.3967 0.4161 0.4351 0.4537 0.4717 0.4893 0.5064 0.5229 0.5389 0.5543 0.5692 0.5836 0.5974 0.6107 0.6235 0.6358 0.6476 0.6589 0.6698 0.6802 0.6903 0.6999 0.7091 0.7179 0.7264

17
0.0687 0.0805 0.0934 0.1073 0.1223 0.1381 0.1548 0.1723 0.1904 0.2091 0.2282 0.2477 0.2674 0.2872 0.3072 0.3270 0.3468 0.3664 0.3857 0.4048 0.4234 0.4417 0.4596 0.4770 0.4939 0.5104 0.5263 0.5418 0.5567 0.5711 0.5850 0.5984 0.6114 0.6238 0.6358 0.6473 0.6584 0.6690 0.6792 0.6891

18
0.0413 0.0495 0.0588 0.0692 0.0806 0.0930 0.1063 0.1207 0.1359 0.1519 0.1686 0.1860 0.2039 0.2223 0.2411 0.2601 0.2793 0.2986 0.3179 0.3371 0.3562 0.3751 0.3938 0.4121 0.4301 0.4478 0.4650 0.4818 0.4982 0.5141 0.5295 0.5444 0.5589 0.5729 0.5864 0.5994 0.6120 0.6241 0.6358 0.6470

19
0.0234 0.0288 0.0350 0.0422 0.0503 0.0593 0.0693 0.0803 0.0922 0.1051 0.1189 0.1334 0.1488 0.1649 0.1816 0.1988 0.2165 0.2346 0.2530 0.2715 0.2902 0.3090 0.3277 0.3463 0.3648 0.3831 0.4011 0.4189 0.4363 0.4533 0.4700 0.4862 0.5020 0.5174 0.5324 0.5469 0.5609 0.5745 0.5876 0.6003

Erlang-B improvement function: F1,n(A) = A [E1,n(A) - E1,n+1(A)]

Blocking Probability E (rows) vs. Number of Servers n (columns):


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40


0.001
0.0010 0.0458 0.1938 0.4393 0.7621 1.1459 1.5786 2.0513 2.5575 3.0920 3.6511 4.2314 4.8305 5.4464 6.0772 6.7215 7.3781 8.0459 8.7239 9.4115 10.1077 10.8121 11.5241 12.2432 12.9689 13.7008 14.4385 15.1818 15.9304 16.6839 17.4420 18.2047 18.9716 19.7426 20.5174 21.2960 22.0781 22.8636 23.6523 24.4442

0.002
0.0020 0.0653 0.2487 0.5350 0.8999 1.3252 1.7984 2.3106 2.8549 3.4265 4.0215 4.6368 5.2700 5.9190 6.5822 7.2582 7.9457 8.6437 9.3514 10.0680 10.7929 11.5253 12.2649 13.0110 13.7634 14.5216 15.2852 16.0540 16.8277 17.6060 18.3887 19.1755 19.9663 20.7609 21.5591 22.3607 23.1656 23.9737 24.7847 25.5987

0.005
0.0050 0.1054 0.3490 0.7012 1.1320 1.6218 2.1575 2.7299 3.3326 3.9607 4.6104 5.2789 5.9638 6.6632 7.3755 8.0995 8.8340 9.5780 10.3308 11.0916 11.8598 12.6349 13.4164 14.2038 14.9968 15.7949 16.5980 17.4057 18.2177 19.0339 19.8539 20.6777 21.5050 22.3356 23.1694 24.0063 24.8461 25.6887 26.5340 27.3818

0.01
0.0101 0.1526 0.4555 0.8694 1.3608 1.9090 2.5009 3.1276 3.7825 4.4612 5.1599 5.8760 6.6072 7.3517 8.1080 8.8750 9.6516 10.4369 11.2301 12.0306 12.8378 13.6513 14.4705 15.2950 16.1246 16.9588 17.7974 18.6402 19.4869 20.3373 21.1912 22.0483 22.9086 23.7720 24.6381 25.5070 26.3785 27.2525 28.1288 29.0074

0.02
0.0204 0.2235 0.6022 1.0923 1.6571 2.2759 2.9354 3.6270 4.3447 5.0840 5.8415 6.6147 7.4015 8.2003 9.0096 9.8284 10.6558 11.4909 12.3330 13.1815 14.0360 14.8959 15.7609 16.6306 17.5046 18.3828 19.2648 20.1504 21.0394 21.9316 22.8268 23.7249 24.6257 25.5291 26.4349 27.3431 28.2536 29.1661 30.0808 30.9973

0.05
0.0526 0.3813 0.8994 1.5246 2.2185 2.9603 3.7378 4.5430 5.3702 6.2157 7.0764 7.9501 8.8349 9.7295 10.6327 11.5436 12.4613 13.3852 14.3147 15.2493 16.1885 17.1320 18.0795 19.0307 19.9853 20.9430 21.9037 22.8672 23.8333 24.8018 25.7726 26.7457 27.7207 28.6978 29.6767 30.6573 31.6397 32.6236 33.6090 34.5960

0.10
0.1111 0.5954 1.2708 2.0454 2.8811 3.7584 4.6662 5.5971 6.5464 7.5106 8.4871 9.4740 10.4699 11.4735 12.4838 13.5001 14.5217 15.5480 16.5786 17.6132 18.6512 19.6925 20.7367 21.7836 22.8331 23.8850 24.9390 25.9950 27.0529 28.1126 29.1740 30.2369 31.3013 32.3672 33.4343 34.5027 35.5722 36.6429 37.7147 38.7874

0.20
0.2500 1.0000 1.9299 2.9452 4.0104 5.1086 6.2302 7.3692 8.5217 9.6850 10.8570 12.0364 13.2218 14.4126 15.6079 16.8071 18.0098 19.2156 20.4241 21.6351 22.8484 24.0636 25.2807 26.4994 27.7196 28.9413 30.1643 31.3884 32.6137 33.8400 35.0672 36.2954 37.5244 38.7542 39.9847 41.2159 42.4478 43.6803 44.9134 46.1470

0.50
1.0000 2.7321 4.5914 6.5011 8.4369 10.3886 12.3505 14.3197 16.2942 18.2726 20.2541 22.2381 24.2240 26.2116 28.2005 30.1906 32.1816 34.1734 36.1660 38.1592 40.1530 42.1472 44.1418 46.1369 48.1322 50.1279 52.1239 54.1201 56.1165 58.1132 60.1100 62.1070 64.1042 66.1015 68.0990 70.0966 72.0943 74.0921 76.0900 78.0880

Erlang's B-formula E1,n(A): offered traffic A for fixed value of E
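This table is the inverse of the previous ones: it gives the offered traffic A that yields a prescribed blocking probability E on n servers. Since E1,n(A) is strictly increasing in A, the inversion can be done by simple bisection; a minimal Python sketch (the function names are illustrative):

```python
def erlang_b(n, a):
    # Erlang's B-formula by the standard recursion
    e = 1.0
    for k in range(1, n + 1):
        e = a * e / (k + a * e)
    return e

def offered_traffic(n, blocking, tol=1e-9):
    """Invert Erlang's B-formula: find the offered traffic A with
    E_{1,n}(A) = blocking, exploiting that E_{1,n}(A) increases in A."""
    lo, hi = 0.0, float(n)
    while erlang_b(n, hi) < blocking:   # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_b(n, mid) < blocking:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, offered_traffic(10, 0.01) gives approximately 4.4612 erlang, in agreement with the table entry for E = 0.01, n = 10.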

Blocking Probability E (rows) vs. Number of Servers n (columns):


41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80


0.001
25.2391 26.0369 26.8374 27.6407 28.4466 29.2549 30.0657 30.8789 31.6943 32.5119 33.3316 34.1533 34.9771 35.8028 36.6305 37.4599 38.2911 39.1241 39.9587 40.7950 41.6328 42.4723 43.3132 44.1557 44.9995 45.8448 46.6915 47.5395 48.3888 49.2394 50.0913 50.9444 51.7987 52.6542 53.5108 54.3685 55.2274 56.0873 56.9483 57.8104

0.002
26.4155 27.2350 28.0570 28.8815 29.7085 30.5377 31.3692 32.2029 33.0387 33.8764 34.7162 35.5578 36.4013 37.2466 38.0936 38.9424 39.7927 40.6447 41.4982 42.3532 43.2097 44.0676 44.9270 45.7876 46.6497 47.5130 48.3776 49.2434 50.1104 50.9786 51.8480 52.7185 53.5901 54.4628 55.3365 56.2113 57.0871 57.9638 58.8416 59.7203

0.005
28.2321 29.0848 29.9397 30.7969 31.6561 32.5175 33.3807 34.2459 35.1129 35.9818 36.8523 37.7245 38.5983 39.4737 40.3506 41.2290 42.1089 42.9901 43.8727 44.7566 45.6418 46.5283 47.4160 48.3049 49.1949 50.0861 50.9783 51.8717 52.7661 53.6615 54.5579 55.4554 56.3537 57.2530 58.1533 59.0544 59.9564 60.8593 61.7630 62.6676

0.01
29.8882 30.7712 31.6561 32.5430 33.4317 34.3223 35.2146 36.1086 37.0042 37.9014 38.8001 39.7003 40.6019 41.5049 42.4092 43.3149 44.2218 45.1299 46.0392 46.9497 47.8613 48.7740 49.6878 50.6026 51.5185 52.4353 53.3531 54.2718 55.1915 56.1120 57.0335 57.9558 58.8789 59.8029 60.7276 61.6531 62.5794 63.5065 64.4343 65.3628

0.02
31.9158 32.8360 33.7580 34.6817 35.6069 36.5337 37.4619 38.3916 39.3227 40.2551 41.1889 42.1238 43.0600 43.9973 44.9358 45.8754 46.8160 47.7577 48.7004 49.6441 50.5887 51.5342 52.4807 53.4280 54.3762 55.3252 56.2750 57.2256 58.1770 59.1291 60.0820 61.0355 61.9898 62.9448 63.9004 64.8567 65.8136 66.7712 67.7293 68.6881

0.05
35.5843 36.5739 37.5648 38.5570 39.5503 40.5447 41.5403 42.5369 43.5345 44.5331 45.5326 46.5330 47.5343 48.5364 49.5394 50.5431 51.5477 52.5529 53.5589 54.5656 55.5730 56.5810 57.5897 58.5989 59.6088 60.6193 61.6304 62.6420 63.6541 64.6668 65.6800 66.6937 67.7079 68.7225 69.7377 70.7532 71.7693 72.7857 73.8026 74.8199

0.10
39.8612 40.9359 42.0114 43.0878 44.1650 45.2430 46.3218 47.4012 48.4813 49.5621 50.6435 51.7256 52.8082 53.8914 54.9751 56.0594 57.1441 58.2294 59.3151 60.4013 61.4880 62.5750 63.6625 64.7504 65.8387 66.9274 68.0164 69.1058 70.1956 71.2857 72.3761 73.4668 74.5579 75.6492 76.7409 77.8328 78.9250 80.0175 81.1103 82.2033

0.20
47.3812 48.6158 49.8509 51.0865 52.3225 53.5589 54.7957 56.0328 57.2703 58.5082 59.7463 60.9848 62.2236 63.4626 64.7019 65.9415 67.1813 68.4214 69.6617 70.9023 72.1430 73.3840 74.6251 75.8665 77.1080 78.3497 79.5916 80.8337 82.0759 83.3182 84.5608 85.8035 87.0463 88.2892 89.5323 90.7755 92.0189 93.2624 94.5060 95.7497

0.50
80.0861 82.0842 84.0825 86.0808 88.0792 90.0776 92.0761 94.0747 96.0733 98.0720 100.0707 102.0695 104.0683 106.0671 108.0660 110.0649 112.0639 114.0629 116.0619 118.0610 120.0600 122.0591 124.0583 126.0574 128.0566 130.0558 132.0551 134.0543 136.0536 138.0529 140.0522 142.0515 144.0508 146.0502 148.0496 150.0490 152.0484 154.0478 156.0472 158.0467

Erlang's B-formula E1,n(A): offered traffic A for fixed value of E

Number of Servers n (rows) vs. Offered Traffic A (columns):


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00


1
0.2500 0.5000 0.7500 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

2
0.0278 0.1000 0.2045 0.3333 0.4808 0.6429 0.8167 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

3
0.0022 0.0152 0.0441 0.0909 0.1555 0.2368 0.3337 0.4444 0.5678 0.7022 0.8467 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

4
0.0001 0.0018 0.0077 0.0204 0.0422 0.0746 0.1184 0.1739 0.2412 0.3199 0.4095 0.5094 0.6191 0.7379 0.8650 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

5
0.0000 0.0002 0.0011 0.0038 0.0097 0.0201 0.0364 0.0597 0.0908 0.1304 0.1788 0.2362 0.3026 0.3778 0.4618 0.5541 0.6545 0.7625 0.8778 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

6
0.0000 0.0000 0.0001 0.0006 0.0019 0.0047 0.0098 0.0180 0.0303 0.0474 0.0702 0.0991 0.1348 0.1775 0.2274 0.2848 0.3495 0.4217 0.5010 0.5875 0.6809 0.7809 0.8874 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

7
0.0000 0.0000 0.0000 0.0001 0.0003 0.0010 0.0023 0.0048 0.0090 0.0154 0.0248 0.0376 0.0546 0.0762 0.1029 0.1351 0.1731 0.2172 0.2675 0.3241 0.3871 0.4564 0.5320 0.6138 0.7017 0.7954 0.8949 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

8
0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0005 0.0011 0.0024 0.0045 0.0079 0.0129 0.0201 0.0299 0.0427 0.0590 0.0793 0.1039 0.1331 0.1673 0.2066 0.2512 0.3013 0.3570 0.4182 0.4850 0.5574 0.6353 0.7186 0.8073 0.9011 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

9
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0006 0.0012 0.0023 0.0040 0.0068 0.0107 0.0163 0.0238 0.0336 0.0460 0.0616 0.0805 0.1031 0.1298 0.1606 0.1960 0.2360 0.2807 0.3304 0.3849 0.4445 0.5091 0.5788 0.6533 0.7328 0.8171 0.9062 1.0000 1.0000 1.0000 1.0000 1.0000

10
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0003 0.0006 0.0012 0.0021 0.0035 0.0057 0.0088 0.0131 0.0189 0.0265 0.0361 0.0481 0.0628 0.0804 0.1013 0.1257 0.1537 0.1857 0.2217 0.2620 0.3066 0.3556 0.4092 0.4672 0.5299 0.5970 0.6687 0.7449 0.8256 0.9106 1.0000

Erlang's C-formula E2,n(A)
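E2,n(A), the probability of waiting in an M/M/n system, can be obtained from the B-formula by the classical relation E2,n(A) = n E1,n(A) / (n - A (1 - E1,n(A))), valid for A < n; for A >= n the system is overloaded and the tables put E2,n(A) = 1. A minimal Python sketch:

```python
def erlang_b(n, a):
    # Erlang's B-formula by the standard recursion
    e = 1.0
    for k in range(1, n + 1):
        e = a * e / (k + a * e)
    return e

def erlang_c(n, a):
    """Erlang's C-formula E_{2,n}(A): probability of waiting, computed
    from Erlang B via  E_{2,n} = n*E_{1,n} / (n - A*(1 - E_{1,n})).
    For A >= n the queue is unstable; the tables set E_{2,n}(A) = 1."""
    if a >= n:
        return 1.0
    eb = erlang_b(n, a)
    return n * eb / (n - a * (1.0 - eb))
```

For example, erlang_c(10, 9.75) gives 0.9106 (to four decimals), matching the table entry for n = 10 at A = 9.75, and erlang_c(n, A) = 1 whenever A >= n, as in the rows of ones above.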

Number of Servers n (rows) vs. Offered Traffic A (columns):


10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00


1
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

2
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

3
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

4
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

5
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

6
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

7
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

8
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

9
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

10
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

Erlang's C-formula E2,n(A)

Number of Servers n (rows) vs. Offered Traffic A (columns):


0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 5.00 5.25 5.50 5.75 6.00 6.25 6.50 6.75 7.00 7.25 7.50 7.75 8.00 8.25 8.50 8.75 9.00 9.25 9.50 9.75 10.00


11
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0003 0.0006 0.0011 0.0018 0.0030 0.0048 0.0072 0.0106 0.0151 0.0210 0.0284 0.0378 0.0492 0.0630 0.0795 0.0988 0.1211 0.1467 0.1758 0.2085 0.2450 0.2853 0.3296 0.3780 0.4305 0.4871 0.5479 0.6129 0.6821

12
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0006 0.0010 0.0016 0.0026 0.0039 0.0059 0.0085 0.0121 0.0166 0.0225 0.0298 0.0388 0.0496 0.0626 0.0779 0.0958 0.1164 0.1398 0.1664 0.1962 0.2294 0.2660 0.3063 0.3502 0.3979 0.4494

13
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0002 0.0003 0.0005 0.0008 0.0014 0.0021 0.0033 0.0048 0.0069 0.0096 0.0132 0.0178 0.0236 0.0306 0.0392 0.0495 0.0617 0.0760 0.0925 0.1115 0.1331 0.1575 0.1848 0.2151 0.2485 0.2853

14
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0003 0.0004 0.0007 0.0012 0.0018 0.0027 0.0039 0.0055 0.0077 0.0106 0.0142 0.0187 0.0243 0.0311 0.0393 0.0490 0.0605 0.0738 0.0892 0.1067 0.1267 0.1491 0.1741

15
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0004 0.0006 0.0010 0.0015 0.0022 0.0032 0.0045 0.0062 0.0084 0.0113 0.0149 0.0193 0.0247 0.0312 0.0390 0.0482 0.0590 0.0714 0.0857 0.1020

16
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0005 0.0008 0.0012 0.0018 0.0026 0.0036 0.0050 0.0068 0.0090 0.0119 0.0154 0.0197 0.0249 0.0312 0.0386 0.0472 0.0573

17
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0005 0.0007 0.0010 0.0015 0.0021 0.0029 0.0040 0.0054 0.0072 0.0095 0.0123 0.0157 0.0199 0.0249 0.0309

18
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0002 0.0004 0.0006 0.0008 0.0012 0.0017 0.0024 0.0032 0.0044 0.0058 0.0076 0.0098 0.0126 0.0159

19
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0001 0.0002 0.0003 0.0005 0.0007 0.0010 0.0014 0.0019 0.0026 0.0035 0.0046 0.0061 0.0079

20
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0004 0.0006 0.0008 0.0011 0.0015 0.0021 0.0028 0.0037

Erlang's C-formula E2,n(A)

Number of servers n (rows); offered traffic A (columns) — page 623

10.25 10.50 10.75 11.00 11.25 11.50 11.75 12.00 12.25 12.50 12.75 13.00 13.25 13.50 13.75 14.00 14.25 14.50 14.75 15.00 15.25 15.50 15.75 16.00 16.25 16.50 16.75 17.00 17.25 17.50 17.75 18.00 18.25 18.50 18.75 19.00 19.25 19.50 19.75 20.00

11
0.7555 0.8329 0.9144 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

12
0.5047 0.5639 0.6270 0.6939 0.7647 0.8393 0.9178 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

13
0.3253 0.3688 0.4158 0.4664 0.5205 0.5782 0.6395 0.7044 0.7729 0.8451 0.9208 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

14
0.2019 0.2326 0.2662 0.3029 0.3428 0.3858 0.4321 0.4817 0.5347 0.5910 0.6507 0.7138 0.7803 0.8502 0.9234 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

15
0.1205 0.1412 0.1642 0.1898 0.2180 0.2489 0.2826 0.3192 0.3587 0.4013 0.4469 0.4957 0.5476 0.6026 0.6609 0.7223 0.7870 0.8548 0.9258 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

16
0.0690 0.0823 0.0975 0.1145 0.1337 0.1550 0.1786 0.2046 0.2331 0.2641 0.2978 0.3343 0.3735 0.4156 0.4606 0.5085 0.5594 0.6133 0.6702 0.7301 0.7930 0.8590 0.9280 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

17
0.0379 0.0461 0.0556 0.0665 0.0789 0.0930 0.1089 0.1266 0.1464 0.1682 0.1922 0.2185 0.2472 0.2783 0.3120 0.3483 0.3872 0.4288 0.4731 0.5203 0.5702 0.6230 0.6787 0.7372 0.7986 0.8628 0.9300 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

18
0.0200 0.0248 0.0304 0.0371 0.0448 0.0538 0.0640 0.0756 0.0888 0.1035 0.1200 0.1383 0.1585 0.1808 0.2052 0.2317 0.2605 0.2917 0.3253 0.3613 0.3999 0.4410 0.4848 0.5312 0.5803 0.6320 0.6865 0.7437 0.8037 0.8664 0.9318 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

19
0.0101 0.0128 0.0160 0.0199 0.0245 0.0299 0.0362 0.0435 0.0519 0.0615 0.0724 0.0847 0.0984 0.1138 0.1308 0.1496 0.1702 0.1928 0.2175 0.2442 0.2731 0.3043 0.3378 0.3736 0.4118 0.4525 0.4956 0.5413 0.5896 0.6404 0.6938 0.7498 0.8084 0.8696 0.9335 1.0000 1.0000 1.0000 1.0000 1.0000

20
0.0049 0.0063 0.0081 0.0103 0.0129 0.0160 0.0197 0.0241 0.0293 0.0353 0.0422 0.0501 0.0591 0.0692 0.0807 0.0936 0.1079 0.1237 0.1412 0.1604 0.1814 0.2043 0.2292 0.2561 0.2850 0.3162 0.3495 0.3851 0.4229 0.4632 0.5058 0.5508 0.5982 0.6481 0.7005 0.7554 0.8128 0.8727 0.9351 1.0000
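The tabulated values can be reproduced numerically. Below is a minimal sketch (not part of the book's text) that computes E2,n(A) from the standard recursion for Erlang's B-formula, E1,n(A) = A·E1,n-1(A) / (n + A·E1,n-1(A)), and the usual relation E2,n(A) = n·E1,n(A) / (n - A·(1 - E1,n(A))) for A < n, taking E2,n(A) = 1 for A >= n as in the tables; the function names are illustrative:

```python
def erlang_b(n, A):
    """Erlang's B-formula E1,n(A), computed by the standard
    numerically stable recursion starting from E1,0(A) = 1."""
    B = 1.0
    for k in range(1, n + 1):
        B = A * B / (k + A * B)
    return B

def erlang_c(n, A):
    """Erlang's C-formula E2,n(A): probability of delay in an
    M/M/n queue with offered traffic A. For A >= n the system
    has no statistical equilibrium and the value is taken as 1,
    matching the 1.0000 entries in the tables."""
    if A >= n:
        return 1.0
    B = erlang_b(n, A)
    return n * B / (n - A * (1.0 - B))

# Spot-check against the table: n = 11, A = 5.00
print(round(erlang_c(11, 5.0), 4))  # prints 0.0151
```

Each table entry can be checked this way; rounding to four decimals reproduces the printed values.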

Erlang's C-formula E2,n(A)
