Communication

On the Moments of the Number of Hires in the Assistant Hiring Algorithm

Electrical and Computer Engineering Department, The University of New Mexico, Albuquerque, NM 87131, USA
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(2), 70; https://doi.org/10.3390/a16020070
Submission received: 9 November 2022 / Revised: 8 January 2023 / Accepted: 19 January 2023 / Published: 21 January 2023

Abstract

We find closed-form expressions for the variance and the third moment of the number of hires in the assistant hiring algorithm, as well as asymptotic values for higher moments of this variable.

1. Introduction

Cormen et al.’s book [1] is a widely used textbook in colleges worldwide and is a standard reference for researchers. When discussing the probabilistic analysis of algorithms in Chapter 5, the authors introduce the assistant hiring algorithm, where $n$ candidates, with rankings from 1 (worst) to $n$ (best), scrambled in random order, are interviewed to fill an assistant position. Every time a candidate is better (i.e., has a higher ranking) than the current one, (s)he is hired and the current one is fired. The first applicant is always hired. Under the hypothesis that all $n!$ permutations of the rankings are equally likely, using indicator random variables and the linearity of the expected value, the authors compute the expected number of hires as
$$E[H] = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}. \tag{1}$$
The sum above, the $n$-th harmonic number, will be denoted by $H_n^{(1)}$. More generally, we will use the notation
$$H_n^{(k)} = 1 + \frac{1}{2^k} + \cdots + \frac{1}{n^k}, \quad k \ge 1.$$
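Both the algorithm and (1) are easy to check empirically. The following minimal Python sketch (illustrative only; the helper names `hires` and `harmonic` are ours) runs the hiring procedure on random permutations and compares the average number of hires with $H_n^{(1)}$:

```python
import random

def hires(ranking):
    """Assistant hiring algorithm: count how many times the current best
    candidate is replaced (the first candidate is always hired)."""
    count, best = 0, 0
    for r in ranking:
        if r > best:
            best, count = r, count + 1
    return count

def harmonic(n, k=1):
    """Generalized harmonic number H_n^(k)."""
    return sum(1.0 / i**k for i in range(1, n + 1))

n, trials = 50, 200_000
perm = list(range(1, n + 1))
total = 0
for _ in range(trials):
    random.shuffle(perm)
    total += hires(perm)

print(f"simulated E[H] ~ {total / trials:.3f}")  # about 4.50 for n = 50
print(f"H_n^(1)        = {harmonic(n):.3f}")     # 4.499...
```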
The variance of the number of hires in the assistant hiring algorithm, a relevant parameter that measures the spread of $H$ around its expectation, is not computed in [1]. In his seminal book [2], Donald Knuth also introduces the probabilistic analysis of algorithms by looking at the same assistant hiring problem, but refers to it by a less playful name: Algorithm M (finding the maximum; page 96 and following). He obtains both the expectation and the variance, referring the reader to [3] for the asymptotic values of higher moments; [3], in turn, bases its analysis of the higher moments on a further reference on power series with a finite number of singularities [4]. In both [2] and [3], difference equations are found for the probability mass function of $H$, and formulas for the factorial moment generating function $M_X(t) = E[t^X]$ of $H$ are derived through computations involving Stirling numbers and Lagrange’s method for solving differential equations.
In this self-contained note, we revisit the assistant hiring algorithm, which essentially looks at the number of times the value of the current maximum is updated when finding the maximum element of a random permutation. Our approach is different, in that we do not focus on the probability mass function of $H$, and we believe it is simpler. We first prove that $H$ can be expressed as a sum of independent indicator variables, using a well-known combinatorial identity, the hockey stick identity. Then, we obtain a reasonably compact moment generating function $M_H(t) = E[e^{tH}]$ that allows us to find exact expressions for its first three moments, as well as asymptotic values for its higher moments, using simple differentiation and an inductive argument.

2. The Results

Let H be the number of hires. Then, we can write
$$H = H_1 + H_2 + \cdots + H_n, \tag{2}$$
where $H_i = 1$ if candidate $i$ is hired, and $H_i = 0$ otherwise. It is not obvious that $H_i$ and $H_j$ are independent random variables: for $i < j$, one heuristic says that the hiring of the later candidate $j$ should not affect the hiring of the earlier candidate $i$; but the heuristic can be turned around: the hiring of candidate $i$ implies a large ranking for that candidate, possibly affecting the occurrence of large rankings among the later candidates, including $j$. Heuristics aside, we argue that the joint behavior of candidates $i$ and $j$, $i < j$, is determined only by the rankings $r_1, r_2, \ldots, r_j$: if we sort these in increasing order $z_1 < z_2 < \cdots < z_j$ and identify $z_k$ with $k$, we can simply work with the subarray of rankings $1, 2, \ldots, j$. In particular,
$$P(H_j = 1) = \frac{(j-1)!}{j!} = \frac{1}{j},$$
the reason being that, in order for candidate $j$ to be hired, their ranking should be $j$, and all previous candidates may have any of the remaining $j - 1$ rankings. From this, we conclude that $E[H_j] = P(H_j = 1) = \frac{1}{j}$. This fact, and the linearity of the expectation, imply that taking expectations on both sides of (2) yields (1). With these preliminaries, we can prove the following.
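Since only the relative order of the first $j$ rankings matters, this probability can be verified by exhaustive enumeration for small $n$. A minimal sketch in exact arithmetic (the helper `j_is_hired` and the 1-indexed convention are ours):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def j_is_hired(perm, j):
    """Candidate j (1-indexed) is hired iff their ranking exceeds
    the rankings of all earlier candidates."""
    return all(perm[j - 1] > perm[t] for t in range(j - 1))

n = 6
for j in range(1, n + 1):
    favorable = sum(j_is_hired(p, j) for p in permutations(range(1, n + 1)))
    assert Fraction(favorable, factorial(n)) == Fraction(1, j)
print(f"P(H_j = 1) = 1/j verified for all j <= {n}")
```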
Proposition 1.
The variables $H_i$ and $H_j$, $i < j$, are independent.
Proof. 
We will prove that $P(H_i = 1, H_j = 1)$ equals $P(H_i = 1) P(H_j = 1) = \frac{1}{ij}$. Indeed,
$$P(H_i = 1, H_j = 1) = \frac{\#\ \text{of favorable outcomes}}{j!}.$$
To count the number of favorable outcomes, first, we argue that in order for candidate $j$ to be hired, they must have ranking $j$. The possible values $\alpha$ for the ranking of candidate $i$ are $\alpha = i, i+1, \ldots, j-1$; the reason for this is that if we selected $\alpha < i$, we would force a ranking greater than $\alpha$ among the first $i - 1$ candidates (only $\alpha - 1$ smaller rankings are available), thus precluding the hiring of candidate $i$. Once we assign the ranking $\alpha$ to candidate $i$, there are $\alpha - 1$ possible rankings that could be distributed among the first $i - 1$ candidates in
$$(\alpha-1)(\alpha-2)\cdots(\alpha-i+1) = \frac{(\alpha-1)!}{(\alpha-i)!}$$
ways. Finally, the remaining $j - i - 1$ rankings for candidates $i + 1$ through $j - 1$ can be assigned in $(j-i-1)!$ ways. Putting all these facts together, we obtain
$$P(H_i = 1, H_j = 1) = \frac{1}{j!} \sum_{\alpha=i}^{j-1} \frac{(\alpha-1)!}{(\alpha-i)!}\,(j-i-1)! = \frac{(j-i-1)!}{j!} \sum_{\alpha=i}^{j-1} \frac{(\alpha-1)!}{(\alpha-i)!}.$$
After multiplying and dividing by $i!$, and with some algebraic manipulations, we obtain that the previous probability can be written as
$$= \frac{(j-i-1)!\,i!}{i\,j\,(i-1)!\,(j-1)!} \sum_{\alpha=i}^{j-1} \frac{(\alpha-1)!}{(\alpha-i)!} = \frac{1}{ij\binom{j-1}{i}} \sum_{\alpha=i}^{j-1} \binom{\alpha-1}{i-1}.$$
However,
$$\sum_{\alpha=i}^{j-1} \binom{\alpha-1}{i-1} = \binom{j-1}{i},$$
by the well-known hockey stick identity (also known as Chu’s theorem; see [5]); for instance, with $i = 2$ and $j = 5$, $\binom{1}{1} + \binom{2}{1} + \binom{3}{1} = 6 = \binom{4}{2}$. Therefore, $P(H_i = 1, H_j = 1) = \frac{1}{ij}$. It is easy to see that this is enough to prove the independence of the variables, that is, $P(H_i = a, H_j = b) = P(H_i = a) P(H_j = b)$, where $a$ and $b$ can each be arbitrarily chosen as either 0 or 1. □
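As a sanity check on Proposition 1, the same kind of enumeration confirms $P(H_i = 1, H_j = 1) = \frac{1}{ij}$ for all pairs $i < j$ up to a small $n$ (again only a sketch; the helper `is_hire` is ours):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def is_hire(perm, m):
    """Candidate m (1-indexed) is hired iff perm[m-1] exceeds all earlier entries."""
    return all(perm[m - 1] > perm[t] for t in range(m - 1))

n = 6
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        favorable = sum(is_hire(p, i) and is_hire(p, j)
                        for p in permutations(range(1, n + 1)))
        assert Fraction(favorable, factorial(n)) == Fraction(1, i * j)
print(f"P(H_i = 1, H_j = 1) = 1/(ij) verified for all i < j <= {n}")
```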
As the variables $H_i$ are pairwise independent, a simple computation of $\mathrm{Var}(H)$ follows. Indeed, using the formula for the variance of a sum, the fact that the independence of two variables implies that their covariance vanishes, and $\mathrm{Var}(H_i) = \frac{1}{i}\left(1 - \frac{1}{i}\right)$, we obtain
$$\mathrm{Var}(H) = \sum_{i=1}^{n} \mathrm{Var}(H_i) + 2\sum_{i<j} \mathrm{Cov}(H_i, H_j) = \sum_{i=1}^{n} \mathrm{Var}(H_i) = H_n^{(1)} - H_n^{(2)}.$$
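For instance, for $n = 3$ this gives
$$\mathrm{Var}(H) = H_3^{(1)} - H_3^{(2)} = \frac{11}{6} - \frac{49}{36} = \frac{17}{36},$$
which agrees with direct enumeration: across the $3! = 6$ permutations the number of hires takes the values 3, 2, 2, 2, 1, 1, so $E[H] = \frac{11}{6}$, $E[H^2] = \frac{23}{6}$, and $\frac{23}{6} - \left(\frac{11}{6}\right)^2 = \frac{17}{36}$.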
The previous derivation of the variance is simple enough to find its way into an introductory textbook on the subject. However, to find higher moments of $H$, we will need to resort to the moment generating function of the random variable, and we first obtain the following.
Proposition 2.
The variables $H_{i_1}, H_{i_2}, \ldots, H_{i_s}$, $i_1 < i_2 < \cdots < i_s$, are independent.
Proof. 
Let us prove that $H_i$, $H_j$, $H_k$ are independent for $i < j < k$. Again, since these variables only take the two values 0 and 1, it is enough to show that
$$P(H_i = H_j = H_k = 1) = P(H_i = 1) P(H_j = 1) P(H_k = 1) = \frac{1}{ijk}.$$
Arguing as before, we can restrict our attention to the rankings $1, 2, \ldots, k$. Candidate $k$ must obtain the ranking $k$; thus, the possible values $\alpha$ of the ranking of candidate $j$ are $\{j, j+1, \ldots, k-1\}$. Once the rank $\alpha$ is fixed, it is clear that the possible values $\beta$ for the ranking of candidate $i$ are $\{i, i+1, \ldots, \alpha-1\}$. Once we fix $\beta$, there are $\beta - 1$ possible rankings to be placed among candidates 1 through $i - 1$, in $\frac{(\beta-1)!}{(\beta-i)!}$ ways. Next, among candidates $i + 1$ through $j - 1$, we can place the $\alpha - i - 1$ available rankings in $\frac{(\alpha-i-1)!}{(\alpha-j)!}$ ways. Finally, among candidates $j + 1$ through $k - 1$ we can place the remaining $k - j - 1$ unassigned rankings in $(k-j-1)!$ ways. Then, we can write
$$P(H_i = H_j = H_k = 1) = \frac{(k-j-1)!}{k!} \sum_{\alpha=j}^{k-1} \sum_{\beta=i}^{\alpha-1} \frac{(\beta-1)!}{(\beta-i)!} \cdot \frac{(\alpha-i-1)!}{(\alpha-j)!}.$$
Multiplying and dividing by $(i-1)!$, and taking this term into the inner summation, we can see that the above probability equals
$$\frac{(k-j-1)!\,(i-1)!}{k!} \sum_{\alpha=j}^{k-1} \sum_{\beta=i}^{\alpha-1} \binom{\beta-1}{i-1} \frac{(\alpha-i-1)!}{(\alpha-j)!} = \frac{(k-j-1)!\,(i-1)!}{k!} \sum_{\alpha=j}^{k-1} \binom{\alpha-1}{i} \frac{(\alpha-i-1)!}{(\alpha-j)!},$$
where the last equality holds by the hockey stick identity. Expanding the binomial coefficient and cancelling yields that the previous expression equals
$$\frac{(k-j-1)!}{k!} \cdot \frac{1}{i} \sum_{\alpha=j}^{k-1} \frac{(\alpha-1)!}{(\alpha-j)!}.$$
Multiplying and dividing by $(j-1)!$ and applying the hockey stick identity again renders the previous expression equal to
$$\frac{(k-j-1)!\,(j-1)!}{k!\,i} \sum_{\alpha=j}^{k-1} \binom{\alpha-1}{j-1} = \frac{(k-j-1)!\,(j-1)!}{k!\,i} \binom{k-1}{j} = \frac{1}{ijk},$$
as desired.
It is clear that we can generalize the above argument to prove that the variables $H_{i_1}, H_{i_2}, \ldots, H_{i_s}$ are independent for any choice of subindices $i_1 < i_2 < \cdots < i_s$: once we have written the probability $P(H_{i_1} = H_{i_2} = \cdots = H_{i_s} = 1)$ as an expression involving $s - 1$ summations, we can work our way from the innermost to the outermost summation, applying the hockey stick identity $s - 1$ times. □
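The full strength of Proposition 2, namely the mutual independence of all the $H_i$ at once, can also be checked directly for small $n$ by verifying that the joint probability mass function of $(H_1, \ldots, H_n)$ factorizes into the product of its marginals. A brute-force sketch (ours, for illustration only):

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

n = 5
counts = {}
for perm in permutations(range(1, n + 1)):
    # pattern[m] = 1 iff candidate m+1 is a hire (a left-to-right maximum)
    pattern = tuple(int(all(perm[m] > perm[t] for t in range(m))) for m in range(n))
    counts[pattern] = counts.get(pattern, 0) + 1

for pattern in product((0, 1), repeat=n):
    joint = Fraction(counts.get(pattern, 0), factorial(n))
    marginals = Fraction(1)
    for m, bit in enumerate(pattern, start=1):
        marginals *= Fraction(1, m) if bit else Fraction(m - 1, m)
    assert joint == marginals
print(f"joint p.m.f. of (H_1, ..., H_{n}) factorizes: mutual independence holds")
```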
The two-valued random variable $H_i$ has a simple moment generating function (m.g.f.):
$$M_{H_i}(t) = \frac{e^t + i - 1}{i},$$
and since the $H_i$’s are independent, the m.g.f. of $H$ is the product of the m.g.f.s of the $H_i$’s, that is,
$$M_H(t) = \prod_{i=1}^{n} \frac{e^t + i - 1}{i}. \tag{3}$$
The relatively simple expression for the m.g.f. of $H$ is a direct consequence of the independence of the $H_i$. Without this fact, finding the m.g.f. of $H$ would be a much harder task. Now, in principle, we could obtain the moments of $H$ by taking the derivatives of $M_H$ and evaluating them at $t = 0$. The formulas become more involved after a few differentiations, but we can obtain some asymptotics for $E[H^k]$. We first note that
$$M_H'(t) = M_H(t)\, e^t \left(\frac{1}{e^t} + \frac{1}{e^t+1} + \cdots + \frac{1}{e^t+n-1}\right), \tag{4}$$
and
$$M_H''(t) = \left[M_H'(t) + M_H(t)\right] e^t \left(\frac{1}{e^t} + \frac{1}{e^t+1} + \cdots + \frac{1}{e^t+n-1}\right) - M_H(t)\, e^{2t} \left(\frac{1}{(e^t)^2} + \frac{1}{(e^t+1)^2} + \cdots + \frac{1}{(e^t+n-1)^2}\right). \tag{5}$$
By evaluating (4) and (5) at $t = 0$, we obtain
$$E[H] = H_n^{(1)}$$
and
$$E[H^2] = \left(H_n^{(1)} + 1\right) H_n^{(1)} - H_n^{(2)}.$$
Therefore, we recover (1) and the variance as
$$\mathrm{Var}(H) = E[H^2] - (E[H])^2 = H_n^{(1)} - H_n^{(2)}.$$
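These evaluations can also be delegated to a computer algebra system. The following sketch (ours; it assumes SymPy is available) builds (3) symbolically and recovers both moments by differentiation:

```python
import sympy as sp

t = sp.symbols('t')
n = 8
# Equation (3): the m.g.f. of H as a product over the independent H_i
M = sp.prod([(sp.exp(t) + i - 1) / i for i in range(1, n + 1)])

H1 = sum(sp.Rational(1, i)    for i in range(1, n + 1))
H2 = sum(sp.Rational(1, i**2) for i in range(1, n + 1))

EH  = sp.diff(M, t, 1).subs(t, 0)
EH2 = sp.diff(M, t, 2).subs(t, 0)

assert sp.simplify(EH - H1) == 0                    # E[H] = H_n^(1)
assert sp.simplify(EH2 - ((H1 + 1)*H1 - H2)) == 0   # E[H^2] = (H_n^(1) + 1)H_n^(1) - H_n^(2)
print("Var(H) =", sp.simplify(EH2 - EH**2))         # equals H_n^(1) - H_n^(2)
```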
Another differentiation of (5) yields
$$M_H'''(t) = \left[M_H''(t) + 2M_H'(t) + M_H(t)\right] e^t \left(\frac{1}{e^t} + \cdots + \frac{1}{e^t+n-1}\right) - \left[2M_H'(t) + 3M_H(t)\right] e^{2t} \left(\frac{1}{(e^t)^2} + \cdots + \frac{1}{(e^t+n-1)^2}\right) + 2M_H(t)\, e^{3t} \left(\frac{1}{(e^t)^3} + \cdots + \frac{1}{(e^t+n-1)^3}\right). \tag{6}$$
Evaluating at $t = 0$, we obtain
$$E[H^3] = M_H'''(0) = \left(E[H^2] + 2E[H] + 1\right) H_n^{(1)} - \left(2E[H] + 3\right) H_n^{(2)} + 2H_n^{(3)}$$
$$= \left(H_n^{(1)}\right)^3 + 3\left(H_n^{(1)}\right)^2 - 3H_n^{(1)} H_n^{(2)} + H_n^{(1)} - 3H_n^{(2)} + 2H_n^{(3)}.$$
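Since Proposition 2 exhibits $H$ as a sum of independent indicator variables, its exact distribution is a convolution of Bernoulli laws, which allows the closed form above to be checked in exact rational arithmetic. A sketch (ours, for illustration):

```python
from fractions import Fraction

n = 30
# p.m.f. of H, built by convolving independent Bernoulli(1/i) laws
pmf = [Fraction(1)]                      # the empty sum is 0 with probability 1
for i in range(1, n + 1):
    p = Fraction(1, i)
    new = [Fraction(0)] * (len(pmf) + 1)
    for h, mass in enumerate(pmf):
        new[h]     += mass * (1 - p)     # candidate i not hired
        new[h + 1] += mass * p           # candidate i hired
    pmf = new

EH3 = sum(Fraction(h**3) * mass for h, mass in enumerate(pmf))

H1 = sum(Fraction(1, i)    for i in range(1, n + 1))
H2 = sum(Fraction(1, i**2) for i in range(1, n + 1))
H3 = sum(Fraction(1, i**3) for i in range(1, n + 1))
assert EH3 == H1**3 + 3*H1**2 - 3*H1*H2 + H1 - 3*H2 + 2*H3
print(f"E[H^3] for n = {n}: {float(EH3):.6f}")
```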
It is well known (see [1]) that $H_n^{(1)} = \ln n + O(1)$ or, in a more compact and somewhat less informative way, $H_n^{(1)} = \Theta(\ln n)$, where the constants involved in the upper and lower bounds are both equal to 1. As the numbers $H_n^{(k)}$, for $k \ge 2$, are partial sums of convergent series, we see that the dominating term above is $\left(H_n^{(1)}\right)^3$ and, thus, $E[H^3] = \Theta\left((\ln n)^3\right)$.
By inspecting Formula (6) and proceeding recursively, we see that the $k$-th derivative of $M_H(t)$ can be written as
$$M_H^{(k)}(t) = e^t M_H^{(k-1)}(t) \sum_{i=0}^{n-1} \frac{1}{e^t + i} + \sum_{j=0}^{k-2} G_j(t)\, M_H^{(j)}(t), \tag{7}$$
where the $G_j(t)$ are functions involving powers of $e^t$ and sums of inverse powers of $e^t + i$, $0 \le i \le n-1$. After evaluating at $t = 0$, this becomes
$$E[H^k] = M_H^{(k)}(0) = M_H^{(k-1)}(0)\, H_n^{(1)} + \sum_{j=0}^{k-2} G_j(0)\, M_H^{(j)}(0),$$
or
$$E[H^k] = E[H^{k-1}]\, H_n^{(1)} + \sum_{j=0}^{k-2} G_j(0)\, E[H^j]. \tag{8}$$
The values $G_j(0)$ involve either partial sums of convergent series or $H_n^{(1)}$, and inside the summation there are only moments $E[H^j]$ with $j \le k - 2$. Thus, a simple inductive argument tells us that the dominant term in (8) is $E[H^{k-1}]\, H_n^{(1)}$ or, in other words,
$$E[H^k] = \Theta\left((\ln n)^k\right),$$
for $k \ge 1$, where the constants involved in the upper and lower bounds are both equal to 1.
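As a numerical illustration (using the closed form for $E[H^3]$ derived above), the ratio $E[H^3]/(\ln n)^3$ approaches 1 only slowly, since $H_n^{(1)} = \ln n + \gamma + o(1)$ makes the correction terms of order $1/\ln n$:

```python
from math import log

def closed_EH3(n):
    """E[H^3] via the closed form in generalized harmonic numbers."""
    H1 = H2 = H3 = 0.0
    for i in range(1, n + 1):
        H1 += 1 / i
        H2 += 1 / i**2
        H3 += 1 / i**3
    return H1**3 + 3*H1**2 - 3*H1*H2 + H1 - 3*H2 + 2*H3

for n in (10**2, 10**4, 10**6):
    print(n, closed_EH3(n) / log(n)**3)  # roughly 2.02, 1.52, 1.34: slow drift toward 1
```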
Remark 1.
The assistant hiring algorithm is a basic prototype of the so-called secretary problem, where the hiring strategy may be more involved than interviewing all candidates and hiring whenever the current candidate is better than all preceding ones. Typically, in the secretary problem, only one hire is allowed. Candidates are interviewed individually, and the current candidate can be hired or rejected; if rejected, they cannot be summoned again in the future. The desired outcome is an optimal strategy that maximizes the probability of selecting the best candidate. Other variants target the second-best candidate, the best $k$ out of $n$, etc. For more details on these generalizations, the reader may consult references [6,7].

3. Conclusions

In this note, we studied the random variable that counts the number of hires in the assistant hiring algorithm or, in other words, the number of updates to the current maximum when searching for the maximum element in a random permutation of an array of real numbers. We found exact expressions for its first three moments and asymptotic values for its higher moments. The technique was to write the random variable under consideration as a sum of indicator variables that take the value 1 if a hire is carried out in the $i$-th interview (that is, if an update of the current maximum is performed after the inspection of the $i$-th number), and 0 otherwise. An application of the hockey stick identity showed that these indicator variables are independent, implying that the moment generating function of the variable under consideration equals the product of the simple moment generating functions of the indicator variables. We thus obtained a reasonably compact expression for the moment generating function which, when differentiated and evaluated at 0, showed that the $k$-th moment of the variable under study is, asymptotically, the $k$-th power of the natural logarithm of $n$. We believe that our method, which does not rely on explicit knowledge of the probability mass function of the variable under study, is simpler than other treatments of the algorithm found in the literature. The basic idea of decomposing a random variable into a sum of indicator random variables is well known and is used extensively in the analysis of algorithms, at least for the computation of expected values, in textbooks such as [1]. By the linearity of the expectation, the expected value of the original variable is simply the sum of the expected values of the indicator variables, which, in turn, are the probabilities that the indicator variables take the value 1. However, we must warn that the computation of the variance, or of higher moments, can be a difficult task unless the indicator variables are independent, as was the case in the assistant hiring algorithm.

Author Contributions

L.K. and J.L.P. contributed an equal share of the main ideas, the proofs and the preparation of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Chair of the Department of Electrical and Computer Engineering, University of New Mexico.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; The MIT Press: Cambridge, MA, USA, 2009.
2. Knuth, D. The Art of Computer Programming, 3rd ed.; Addison-Wesley: Reading, MA, USA, 1997; Volume 1.
3. Roes, P.B.M. A note on linear programming algorithm design: A combinatorial problem. Commun. ACM 1966, 9, 340–342.
4. Narumi, S. On a power series having only a finite number of algebraico-logarithmic singularities on its circle of convergence. Tohoku Math. J. 1929, 30, 185–201.
5. Merris, R. Combinatorics, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2003.
6. Ferguson, T.S. Who solved the secretary problem? Stat. Sci. 1989, 4, 282–289.
7. Vanderbei, R.J. The postdoc variant of the secretary problem. Math. Appl. Ann. Soc. Math. Pol. 2021, 49, 3–13.