Online decision making in crowdsourcing markets: theoretical challenges

Published: 25 November 2014

Abstract

Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets have led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making: task requesters or the crowdsourcing platform itself make repeated decisions about which prices to set, which workers to filter out, which tasks to assign to specific workers, and so on. These decisions are often complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.
As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.
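Many of the repeated pricing decisions described above are naturally modeled as multi-armed bandit problems. As a purely illustrative sketch (not taken from the paper itself), the snippet below runs the standard UCB1 index policy over a small grid of posted prices, in a stylized model where each price is accepted by a worker with a fixed but unknown probability and a completed task is worth 1 to the requester; the price grid, acceptance probabilities, and reward model are all assumptions made for this example.

```python
import math
import random

def ucb1_posted_prices(prices, accept_prob, rounds, seed=0):
    """Run the UCB1 policy over a discrete set of posted prices.

    Each round, one price is posted; a (simulated) worker accepts with an
    unknown probability, and the requester's reward is the task's value
    (normalized to 1) minus the price paid, or 0 if the offer is declined.
    Returns per-price posting counts and cumulative rewards.
    """
    rng = random.Random(seed)
    n = len(prices)
    counts = [0] * n       # times each price was posted
    totals = [0.0] * n     # cumulative reward earned at each price
    for t in range(1, rounds + 1):
        if t <= n:
            i = t - 1      # post each price once to initialize estimates
        else:
            # UCB1 index: empirical mean reward plus an exploration bonus
            # that shrinks as a price accumulates observations.
            i = max(range(n), key=lambda j: totals[j] / counts[j]
                    + math.sqrt(2.0 * math.log(t) / counts[j]))
        accepted = rng.random() < accept_prob[i]
        reward = (1.0 - prices[i]) if accepted else 0.0
        counts[i] += 1
        totals[i] += reward
    return counts, totals
```

With enough rounds, the policy concentrates its postings on the price with the highest expected profit. The paper's point is precisely that this clean formulation omits much of what matters in real markets: budget constraints (the "bandits with knapsacks" setting), heterogeneous and strategic workers, and acceptance probabilities that shift in response to past prices.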




Published In

ACM SIGecom Exchanges, Volume 12, Issue 2 (December 2013), 53 pages
EISSN: 1551-9031
DOI: 10.1145/2692359
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. crowdsourcing
  2. dynamic pricing
  3. multi-armed bandits
  4. reputation systems

Qualifiers

  • Research-article

