Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Abstract

Experimental (design) optimization is a key driver in designing and discovering new products and processes. Bayesian Optimization (BO) is an effective tool for optimizing expensive, black-box experimental design processes. While BO is a principled data-driven approach to experimental optimization, it learns everything from scratch and could greatly benefit from the expertise of human domain experts, who often reason about systems at different abstraction levels using physical properties that are not necessarily directly measured (or measurable). In this paper, we propose a human-AI collaborative Bayesian framework that incorporates expert preferences about such unmeasured abstract properties into the surrogate model to further boost the performance of BO. We provide an efficient strategy that can also handle incorrect or misleading expert bias in the preferential judgments. We discuss the convergence behavior of our proposed framework. Experimental results on synthetic functions and real-world datasets show the superiority of our method over the baselines.
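
To make the idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' BOAP implementation) of how expert pairwise preferences about an unmeasured abstract property could be folded into a BO surrogate: the preferences are converted into a latent property score via a Bradley-Terry style likelihood, that score is appended as an extra input feature of the Gaussian process surrogate, and a standard expected-improvement acquisition proposes the next experiment. The toy objective, the `duels` list, and all helper names are assumptions made for illustration only.

```python
# Minimal sketch (NOT the authors' BOAP method): fold expert pairwise
# preferences about an abstract property into a BO surrogate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def objective(x):                      # toy stand-in for the expensive black-box experiment
    return -(x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

# Observed design points and their expensive evaluations
X = rng.uniform(0, 1, size=(8, 1))
y = objective(X).ravel()

# Expert pairwise preferences on an unmeasured abstract property:
# duels[k] = (i, j) means "point i exhibits more of the property than point j"
duels = [(0, 1), (2, 1), (0, 3), (4, 5), (6, 7), (2, 7)]

def neg_log_likelihood(s, duels, lam=1.0):
    """Bradley-Terry style negative log-likelihood of the preferences given latent scores s."""
    nll = lam * np.sum(s ** 2)                       # Gaussian prior / regularizer on the scores
    for i, j in duels:
        nll -= np.log(1.0 / (1.0 + np.exp(-(s[i] - s[j]))) + 1e-12)
    return nll

# Fit a latent property score for each observed design point
scores = minimize(neg_log_likelihood, np.zeros(len(X)), args=(duels,)).x

# Regress the latent score on x so the property can be predicted at unseen points
prop_gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-3)).fit(X, scores)

def augment(Xq):
    """Append the predicted abstract-property score as an extra input feature."""
    return np.hstack([Xq, prop_gp.predict(Xq).reshape(-1, 1)])

# Main surrogate over the augmented inputs, plus an expected-improvement acquisition
surrogate = GaussianProcessRegressor(RBF([0.2, 1.0]) + WhiteKernel(1e-3))
surrogate.fit(augment(X), y)

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
mu, sd = surrogate.predict(augment(candidates), return_std=True)
best = y.max()
z = (mu - best) / (sd + 1e-9)
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement (maximization)

x_next = candidates[np.argmax(ei)]
print("next suggested experiment:", x_next)
```

This sketch omits the bias-handling strategy mentioned in the abstract; a robust version would additionally down-weight the preference-derived feature whenever the expert's judgments conflict with the observed measurements.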


Notes

  1. The supplementary material of BOAP is accessible online at https://doi.org/10.1007/978-3-031-70365-2_14.

Acknowledgements

This research was partially supported by the Australian Government through the Australian Research Council’s Discovery Project funding scheme (project DP210102798). The views expressed herein are those of the authors and are not necessarily those of the Australian Government or Australian Research Council.

Author information

Corresponding author

Correspondence to A. V. Arun Kumar.

Ethics declarations

Disclosure of Interests

Prof. Svetha Venkatesh has received research grants from the Australian Research Council's Discovery Project funding scheme.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 915 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Arun Kumar, A.V., Shilton, A., Gupta, S., Rana, S., Greenhill, S., Venkatesh, S. (2024). Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14946. Springer, Cham. https://doi.org/10.1007/978-3-031-70365-2_14

  • DOI: https://doi.org/10.1007/978-3-031-70365-2_14

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70364-5

  • Online ISBN: 978-3-031-70365-2

  • eBook Packages: Computer Science, Computer Science (R0)
