
DOI: 10.1145/2940716.2940719

Economic Recommendation Systems: One Page Abstract

Published: 21 July 2016

Abstract

In the online Explore & Exploit (E&E) literature, central to machine learning, a planner is faced with a set of alternatives, each yielding some unknown reward. The planner's goal is to learn the optimal alternative as quickly as possible via experimentation. A typical assumption in this model is that the planner has full control over the design and implementation of the experiments. When experiments are instead carried out by a society of self-motivated agents, the planner can only recommend experimentation but has no power to enforce it. The first paper to marry these social aspects with the challenge of E&E, a new research domain for which we coin the term "social explore and exploit", is that of Kremer et al. [Kremer et al. 2014]. In that work the authors introduce a naive setting (we use the term "naive setting" for settings in which the optimal non-social explore and exploit scheme is trivial: try all actions sequentially, each once, and settle on the optimal one thereafter) and study optimal explore and exploit schemes that account for agents' incentives.
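
To make the naive benchmark concrete, here is a minimal sketch (Python) of the trivial non-social scheme described in the parenthetical above: try each alternative once, then settle on the best one observed. The pull function, the Bernoulli reward model, and the horizon are illustrative assumptions of ours, not part of the original model.

    import random

    def naive_explore_exploit(pull, num_arms, horizon):
        # Naive (non-social) explore & exploit: try every alternative once,
        # then play the empirically best one for the remaining rounds.
        rewards = [pull(arm) for arm in range(num_arms)]           # one exploration round per arm
        best = max(range(num_arms), key=lambda arm: rewards[arm])  # settle on the best observed arm
        total = sum(rewards)
        for _ in range(horizon - num_arms):                        # exploit thereafter
            total += pull(best)
        return best, total

    # Illustrative use: two alternatives with unknown Bernoulli rewards.
    means = [0.3, 0.7]
    pull = lambda arm: 1.0 if random.random() < means[arm] else 0.0
    print(naive_explore_exploit(pull, num_arms=2, horizon=1000))
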
More specifically, [Kremer et al. 2014] identify an incentive-compatible scheme with which a central planner can asymptotically steer users towards taking the optimal action.
Whereas [Kremer et al. 2014] account for agents' incentives, and in particular for the misalignment between the incentives of the agents and those of the planner, they ignore other societal aspects. In particular, [Kremer et al. 2014] make the implicit assumption that agents can neither see nor communicate with any other agent. It turns out that, once observability is factored in, the scheme proposed by Kremer et al. is no longer incentive compatible, leading to market failure.
In this work we introduce observability into the framework of social E&E. We study the design of recommendation systems when agents can (partly) observe each other. In particular, we investigate the conditions on the social network that allow for asymptotically optimal outcomes. Thus, we extend [Kremer et al. 2014] by adding the layer of a social network, and we show conditions under which the essence of their results can still be maintained, albeit with a different mechanism, even though agents may observe each other. Intuitively, the more agents can see each other, the less power resides with the central planner.
Formally, let a visibility graph over N agents be a graph in which the agents serve as the nodes, and an edge (a, b) implies that agents a and b can observe each other's actions. A visibility graph is an (α, β)-graph if the number of nodes with degree greater than N^α is bounded by N^β. Our main result is that, for a sufficiently large N, if the visibility graph is an (α, β)-graph with 2α + β < 1, then there exists a deterministic incentive-compatible algorithm leading to an approximately optimal outcome.
On the other hand, we show that for the complete graph an asymptotically optimal outcome cannot be obtained by any probabilistic incentive-compatible algorithm. As the complete graph is a (0, 1)-graph, our result is tight.
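
For intuition about the (α, β)-graph condition, the sketch below (same illustrative Python as above) checks it directly: count the nodes whose degree exceeds N^α and compare that count to N^β. The adjacency-dictionary representation and the function name are our own assumptions; the complete-graph example mirrors the tightness claim.

    def is_alpha_beta_graph(adjacency, alpha, beta):
        # adjacency: dict mapping each agent to the set of agents it can observe.
        # An (alpha, beta)-graph has at most N**beta nodes of degree greater than N**alpha.
        n = len(adjacency)
        high_degree = sum(1 for neighbors in adjacency.values()
                          if len(neighbors) > n ** alpha)
        return high_degree <= n ** beta

    # The complete graph on N agents is a (0, 1)-graph ...
    N = 50
    complete = {v: set(range(N)) - {v} for v in range(N)}
    print(is_alpha_beta_graph(complete, alpha=0.0, beta=1.0))  # True
    # ... but it violates the 2*alpha + beta < 1 condition of the positive result.
    print(2 * 0.0 + 1.0 < 1)  # False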

Reference

[1] Ilan Kremer, Yishay Mansour, and Motty Perry. Implementing the wisdom of the crowd. Journal of Political Economy, 122:988-1012, 2014.



Published In

EC '16: Proceedings of the 2016 ACM Conference on Economics and Computation
July 2016
874 pages
ISBN:9781450339360
DOI:10.1145/2940716
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 July 2016

Author Tags

  1. explore and exploit
  2. incentive compatibility
  3. social learning

Qualifiers

  • Abstract

Conference

EC '16: ACM Conference on Economics and Computation
July 24-28, 2016
Maastricht, The Netherlands

Acceptance Rates

EC '16 Paper Acceptance Rate: 80 of 242 submissions, 33%
Overall Acceptance Rate: 664 of 2,389 submissions, 28%

Article Metrics

  • Downloads (last 12 months): 7
  • Downloads (last 6 weeks): 0
Reflects downloads up to 21 Sep 2024

Cited By

  • (2022) Designing an Automatic Agent for Repeated Language-based Persuasion Games. Transactions of the Association for Computational Linguistics, 10, 307-324. DOI: 10.1162/tacl_a_00462. Online publication date: 25-Mar-2022
  • (2022) Fair ranking: a critical review, challenges, and future directions. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1929-1942. DOI: 10.1145/3531146.3533238. Online publication date: 21-Jun-2022
  • (2021) Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 489-503. DOI: 10.1145/3442188.3445912. Online publication date: 3-Mar-2021
  • (2020) Fiduciary bandits. Proceedings of the 37th International Conference on Machine Learning, 518-527. DOI: 10.5555/3524938.3524987. Online publication date: 13-Jul-2020
  • (2020) Bayesian Incentive-Compatible Bandit Exploration. Operations Research, 68(4), 1132-1161. DOI: 10.1287/opre.2019.1949. Online publication date: 1-Jul-2020
  • (2020) Personalised rating. Autonomous Agents and Multi-Agent Systems, 34(2). DOI: 10.1007/s10458-020-09479-2. Online publication date: 1-Oct-2020
  • (2019) Rethinking search engines and recommendation systems. Communications of the ACM, 62(12), 66-75. DOI: 10.1145/3340922. Online publication date: 21-Nov-2019
  • (2019) Optimal Algorithm for Bayesian Incentive-Compatible Exploration. Proceedings of the 2019 ACM Conference on Economics and Computation, 135-151. DOI: 10.1145/3328526.3329581. Online publication date: 17-Jun-2019
  • (2019) Social Learning and the Innkeeper's Challenge. Proceedings of the 2019 ACM Conference on Economics and Computation, 153-170. DOI: 10.1145/3328526.3329569. Online publication date: 17-Jun-2019
  • (2019) Bayesian Exploration with Heterogeneous Agents. The World Wide Web Conference, 751-761. DOI: 10.1145/3308558.3313649. Online publication date: 13-May-2019
