Abstract
Experimental design optimization is a key driver in designing and discovering new products and processes. Bayesian Optimization (BO) is an effective tool for optimizing expensive, black-box experimental design processes. While BO is a principled, data-driven approach to experimental optimization, it learns everything from scratch and could greatly benefit from the expertise of human domain experts, who often reason about systems at higher abstraction levels using physical properties that are not necessarily directly measured (or measurable). In this paper, we propose a human-AI collaborative Bayesian framework that incorporates expert preferences about such unmeasured abstract properties into the surrogate modeling to further boost the performance of BO. We provide an efficient strategy that can also handle incorrect or misleading expert bias in preferential judgments. We discuss the convergence behavior of our proposed framework. Our experimental results on synthetic functions and real-world datasets show the superiority of our method over the baselines.
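The abstract's starting point, standard BO with a Gaussian-process surrogate that "learns everything from scratch", can be sketched as a minimal loop. This is a generic GP-UCB sketch, not the paper's BOAP framework: the quadratic objective `f`, the RBF length-scale, the fixed candidate grid, the initial design, and the UCB coefficient are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Squared-exponential (RBF) kernel between two 1-D point sets.
def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# GP posterior mean and variance at a grid of candidate points,
# given observations (X, y). A small jitter keeps K well-conditioned.
def gp_posterior(X, y, grid, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, grid)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 0.0, None)
    return mu, var

# Hypothetical expensive black-box objective (maximum at x = 0.6).
def f(x):
    return -((x - 0.6) ** 2)

X = np.array([0.05, 0.5, 0.95])    # initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 201)  # candidate pool

# BO loop: pick the candidate maximizing a GP-UCB acquisition,
# evaluate the objective there, and update the surrogate's data.
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best = X[np.argmax(y)]
print(best)
```

The paper's contribution would enter at the surrogate-modeling step: instead of a plain GP fit to measured outputs only, the surrogate additionally absorbs expert pairwise preferences over abstract properties.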
Notes
1. The supplementary material of BOAP is accessible online at the following link:
Acknowledgements
This research was partially supported by the Australian Government through the Australian Research Council’s Discovery Project funding scheme (project DP210102798). The views expressed herein are those of the authors and are not necessarily those of the Australian Government or Australian Research Council.
Ethics declarations
Disclosure of Interests
Prof. Svetha Venkatesh has received research grants from Australian Research Council’s Discovery Project funding scheme.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Arun Kumar, A.V., Shilton, A., Gupta, S., Rana, S., Greenhill, S., Venkatesh, S. (2024). Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14946. Springer, Cham. https://doi.org/10.1007/978-3-031-70365-2_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70364-5
Online ISBN: 978-3-031-70365-2