Abstract
Meeting scheduling (MS) is a practical everyday task that involves independent agents with different calendars and preferences. In this paper, we consider the distributed MS problem in which the host exchanges private information with each attendee separately. Since each agent aims to protect its own privacy and to attend the meeting at a time slot it prefers, it is necessary to design a distributed scheduling mechanism that minimizes privacy leakage while satisfying as many agents as possible. To achieve this, we propose an intelligent two-layer mechanism based on contract theory in which the host motivates each agent to reveal its true preferences by offering different rewards, without knowing each agent's cost of attending the meeting. We first model privacy leakage by measuring the difference between the revealed information of an agent's calendar and the other agents' prior beliefs. An optimal control problem is then formulated such that the reward function and the privacy leakage level can be jointly designed for each agent. Through theoretical analysis, we show that our proposed mechanism guarantees incentive compatibility for all agents. Empirical evaluations show that, compared to the state of the art, our mechanism achieves lower privacy leakage and higher social welfare within a small number of rounds.
This work was supported and funded by Samsung Electronics R&D Institute UK (SRUK).
Notes
- 1.
An agent’s availability at each time slot reflects its preference over the time slots, which can be categorised as multiple types, such as ‘free’, ‘OK with it’, and ‘busy’.
- 2.
It is worth noting that contract theory is different from the contract net protocol; the latter is a negotiation-based mechanism for task assignment.
- 3.
This can be extended to more types, but we use three to keep the example simple.
- 4.
Here we assume that every agent values its privacy to the same degree, which is also a reasonable assumption in practice.
- 5.
We do not compare with other works here, since each of them uses a different metric of incentive costs that is not compatible with ours.
Appendices
A Appendix: Proof of Proposition 1
According to (11c), for any pair \((d, \hat{d})\), the following inequalities hold:
When \(f_O\left( d\right) = f_O(\hat{d})\), adding (20a) and (20b) yields
The sufficient conditions to satisfy (21) are that both \(R_F\left( d \right)\) and \(N_p\left( d\right)\) are non-increasing functions of d. Similarly, by fixing \(f_F\left( d\right) = f_F(\hat{d})\), we find that \(R_O\left( d \right)\) is also non-increasing, which verifies (15a) and (15b).
Given d, (20a) implies that the function \(y(\hat{d}) = f_F(d)\left(R_F(\hat{d}) - \alpha N_p(\hat{d})L(F)\right) + f_O(d)\left(R_O(\hat{d}) - \alpha N_p(\hat{d})L(O)\right)\) reaches its maximum at \(\hat{d} = d\). We have
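The displayed step is not reproduced in this excerpt. Assuming \(R_F\), \(R_O\) and \(N_p\) are differentiable in d, the maximisation of \(y(\hat{d})\) at \(\hat{d} = d\) gives the first-order condition below; this is a sketch consistent with the definition of y above, not the paper's exact equation:

```latex
% First-order condition of max_{\hat{d}} y(\hat{d}) at \hat{d} = d,
% assuming R_F, R_O and N_p are differentiable in d:
\left.\frac{\partial y(\hat{d})}{\partial \hat{d}}\right|_{\hat{d}=d}
  = f_F(d)\bigl(R_F'(d) - \alpha N_p'(d)\,L(F)\bigr)
  + f_O(d)\bigl(R_O'(d) - \alpha N_p'(d)\,L(O)\bigr) = 0 .
```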
Since it holds for all \(f_F(d), f_O(d) > 0\), we have (15c).
B Appendix: Proof of Proposition 2
The first and second conditions stated in Definition 1 are guaranteed by constraints (11) and (13). We now examine whether the third condition holds.
An attendee i may claim to be busy at an OK time slot t so as to mislead the host into scheduling the meeting at a time slot \(t'\) in which it is free. In this case, the desired probability that attendee i wishes others to believe is \(p_{0 \rightarrow i}^d\left( t\right) = 0\). We consider the case in which slot \(t'\) has not been proposed yet. The expected utilities of responder i when lying and when telling the truth are, respectively, given by
where \(p^{lie}\) and \(p^{tr}\) are the probabilities that the meeting cannot be successfully scheduled in the two cases, respectively, and \(p_b\) is the host's prior belief of the attendee's free probability. In (23a), \(\alpha \left( p_b -1\right) \) represents the privacy leakage from reporting to be free at time slot \(t'\), and \(\alpha \left( 2p_O - p_b\right) \) represents the privacy information protected by lying. In (23b), \(\alpha \left( p_b -p_O\right) \) is the privacy leakage at time slot t, and \(\theta \left( 1 - p_b\right) \) is the protected privacy information of slot \(t'\).
Since \(p_O \le 0.5\) and \(p^{lie} > p^{tr}\), we have \(E\left[ U_i^{tru}\right] > E\left[ U_i^{lie} \right] \). Therefore, the attendee has no motivation to lie, which verifies the third condition.
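Since equations (23a)–(23b) are not reproduced in this excerpt, the snippet below composes hypothetical utilities from the terms named above (an expected reward weighted by the success probability, plus the stated privacy terms) purely to illustrate the comparison; the composition, the `reward` parameter, and all numeric values are assumptions, not the paper's exact model.

```python
# Illustrative check of E[U^tru] > E[U^lie] using the privacy terms named
# in the text. The exact forms of (23a)-(23b) are not reproduced here, so
# this composition is an assumption for illustration only.

def expected_utilities(alpha, p_b, p_O, p_lie, p_tr, reward):
    """Return (E[U^lie], E[U^tru]) under the assumed composition:
    an expected reward term plus the privacy terms from (23a)-(23b)."""
    # Lying: leakage alpha*(p_b - 1) at slot t', protected alpha*(2*p_O - p_b).
    u_lie = (1 - p_lie) * reward + alpha * (p_b - 1) + alpha * (2 * p_O - p_b)
    # Truth: leakage alpha*(p_b - p_O) at slot t, protected alpha*(1 - p_b).
    u_tru = (1 - p_tr) * reward - alpha * (p_b - p_O) + alpha * (1 - p_b)
    return u_lie, u_tru

# Hypothetical values satisfying the stated conditions p_O <= 0.5, p_lie > p_tr.
u_lie, u_tru = expected_utilities(alpha=1.0, p_b=0.5, p_O=0.4,
                                  p_lie=0.6, p_tr=0.3, reward=1.0)
assert u_tru > u_lie  # truthful reporting dominates in this instance
print(round(u_lie, 2), round(u_tru, 2))
```

In this instance the truthful utility (1.1) exceeds the lying utility (0.2), matching the direction of the inequality in the proof; a single numeric instance is of course not a substitute for the general argument.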
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Di, B., Jennings, N.R. (2020). Privacy-Preserving Dialogues Between Agents: A Contract-Based Incentive Mechanism for Distributed Meeting Scheduling. In: Bassiliades, N., Chalkiadakis, G., de Jonge, D. (eds.) Multi-Agent Systems and Agreement Technologies. EUMAS 2020, AT 2020. Lecture Notes in Computer Science, vol. 12520. Springer, Cham. https://doi.org/10.1007/978-3-030-66412-1_19
Print ISBN: 978-3-030-66411-4
Online ISBN: 978-3-030-66412-1