Abstract
The NeuroSAT neural network architecture was introduced in [37] for predicting properties of propositional formulae. When trained to predict the satisfiability of toy problems, it was shown to find solutions and unsatisfiable cores on its own. However, the authors saw “no obvious path” to using the architecture to improve the state-of-the-art. In this work, we train a simplified NeuroSAT architecture to directly predict the unsatisfiable cores of real problems. We modify several state-of-the-art SAT solvers to periodically replace their variable activity scores with NeuroSAT’s prediction of how likely the variables are to appear in an unsatisfiable core. The modified MiniSat solves 10% more problems on SATCOMP-2018 within the standard 5,000 second timeout than the original does. The modified Glucose solves 11% more problems than the original, while the modified Z3 solves 6% more. The gains are even greater when the training is specialized for a specific distribution of problems; on a benchmark of hard problems from a scheduling domain, the modified Glucose solves 20% more problems than the original does within a one-hour timeout. Our results demonstrate that NeuroSAT can provide effective guidance to high-performance SAT solvers on real problems.
D. Selsam—This paper describes work performed while the first author was at Microsoft Research.
Notes
- 1.
In MiniSat, this involves setting the activity vector to these values, resetting the variable increment to 1.0, and rebuilding the order-heap.
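The refocusing step described in Note 1 can be sketched as follows. This is an illustrative simulation in Python, not MiniSat's actual C++ code; the names `refocus` and `core_probs` are hypothetical, and a real integration would operate on MiniSat's internal `activity` vector and order-heap:

```python
import heapq

def refocus(activities, order_heap, core_probs):
    """Sketch of the periodic refocusing step: overwrite each variable's
    activity score with the network's predicted probability that the
    variable appears in an unsat core, reset the bump increment to 1.0,
    and rebuild the branching order-heap over the new scores."""
    for v, p in enumerate(core_probs):
        activities[v] = p              # replace learned activity with prediction
    var_inc = 1.0                      # reset the variable bump increment
    # Rebuild a max-heap keyed on the new activities (negated for heapq's min-heap).
    order_heap[:] = [(-activities[v], v) for v in range(len(activities))]
    heapq.heapify(order_heap)
    return var_inc

# Toy usage: variable 0 gets the highest predicted core membership,
# so it surfaces at the top of the rebuilt order-heap.
activities = [3.2, 0.1, 7.5, 0.4]
heap = []
var_inc = refocus(activities, heap, [0.9, 0.1, 0.2, 0.8])
best = heap[0][1]
```

The key design point mirrored here is that the prediction does not permanently replace the solver's heuristic: after the reset, normal activity bumping resumes from the new scores, so the solver's own dynamics gradually reassert themselves between refocusing calls.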
References
Abadi, M., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283 (2016)
Balcan, M., Dick, T., Sandholm, T., Vitercik, E.: Learning to branch. In: Dy, J.G., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018. JMLR Workshop and Conference Proceedings, vol. 80, pp. 353–362. JMLR.org (2018). http://proceedings.mlr.press/v80/balcan18a.html
Balunovic, M., Bielik, P., Vechev, M.T.: Learning to solve SMT formulas. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada. pp. 10338–10349 (2018). http://papers.nips.cc/paper/8233-learning-to-solve-smt-formulas
Biere, A., Fröhlich, A.: Evaluating CDCL variable scoring schemes. In: Heule, M., Weaver, S. (eds.) SAT 2015. LNCS, vol. 9340, pp. 405–422. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24318-4_29
Biere, A., Heule, M., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, vol. 185. IOS Press, Amsterdam (2009)
de Moura, L., Passmore, G.O.: The Strategy Challenge in SMT Solving. In: Bonacina, M.P., Stickel, M.E. (eds.) Automated Reasoning and Mathematics. LNCS (LNAI), vol. 7788, pp. 15–44. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36675-8_2
Devlin, J., Uesato, J., Bhupatiraju, S., Singh, R., Mohamed, A.R., Kohli, P.: RobustFill: neural program learning under noisy I/O. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 990–998. JMLR.org (2017)
Heule, M., van Maaren, H.: Look-ahead based SAT solvers. In: Biere et al. [5], pp. 155–184. https://doi.org/10.3233/978-1-58603-929-5-155
Heule, M.J.: Schur number five. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
Heule, M.J.H., Kullmann, O., Biere, A.: Cube-and-conquer for satisfiability. Handbook of Parallel Constraint Reasoning, pp. 31–59. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-63516-3_2
Heule, M.J.H., Kullmann, O., Marek, V.W.: Solving and verifying the boolean pythagorean triples problem via cube-and-conquer. In: Creignou, N., Le Berre, D. (eds.) SAT 2016. LNCS, vol. 9710, pp. 228–245. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40970-2_15
Proceedings of SAT Competition 2018: Solver and Benchmark Descriptions (2018). http://hdl.handle.net/10138/237063
Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition. IEEE Sig. Process. Mag. 29 (2012)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Hoder, K., Reger, G., Suda, M., Voronkov, A.: Selecting the selection. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 313–329. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_22
Huang, D., Dhariwal, P., Song, D., Sutskever, I.: GamePad: A learning environment for theorem proving. arXiv preprint arXiv:1806.00608 (2018)
Irving, G., Szegedy, C., Alemi, A.A., Een, N., Chollet, F., Urban, J.: DeepMath – deep sequence models for premise selection. In: Advances in Neural Information Processing Systems, pp. 2235–2243 (2016)
Kaliszyk, C., Chollet, F., Szegedy, C.: HolStep: A machine learning dataset for higher-order logic theorem proving. arXiv preprint arXiv:1703.00426 (2017)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Knuth, D.E.: Estimating the efficiency of backtrack programs. Math. Comput. 29(129), 122–136 (1975)
Knuth, D.E.: The Art of Computer Programming, vol. 4, Fascicle 6: Satisfiability (2015)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951)
Kullmann, O.: Fundaments of branching heuristics. In: Biere et al. [5], pp. 205–244. https://doi.org/10.3233/978-1-58603-929-5-205
Lawler, E.L., Wood, D.E.: Branch-and-bound methods: a survey. Oper. Res. 14(4), 699–719 (1966)
Liang, J.H., Govind V K, H., Poupart, P., Czarnecki, K., Ganesh, V.: An empirical study of branching heuristics through the lens of global learning rate. In: Lang, J. (ed.) Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, 13–19 July 2018, Stockholm, Sweden, pp. 5319–5323. ijcai.org (2018). https://doi.org/10.24963/ijcai.2018/745
Loos, S.M., Irving, G., Szegedy, C., Kaliszyk, C.: Deep network guided proof search. In: Eiter, T., Sands, D. (eds.) LPAR-21, 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 7–12, 2017. EPiC Series in Computing, vol. 46, pp. 85–105. EasyChair (2017). http://www.easychair.org/publications/paper/340345
Marques-Silva, J.P., Sakallah, K.A.: GRASP: a search algorithm for propositional satisfiability. IEEE Trans. Comput. 48(5), 506–521 (1999)
Mijnders, S., de Wilde, B., Heule, M.: Symbiosis of search and heuristics for random 3-SAT. CoRR abs/1402.4455 (2014)
Moritz, P., et al.: Ray: a distributed framework for emerging AI applications. In: 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pp. 561–577 (2018)
Moskewicz, M.W., Madigan, C.F., Zhao, Y., Zhang, L., Malik, S.: Chaff: engineering an efficient SAT solver. In: Proceedings of the 38th Annual Design Automation Conference, pp. 530–535. ACM (2001)
Oh, C.: Between SAT and UNSAT: The fundamental difference in CDCL SAT. In: Heule, M., Weaver, S. (eds.) SAT 2015. LNCS, vol. 9340, pp. 307–323. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24318-4_23
Parisotto, E., Mohamed, A.R., Singh, R., Li, L., Zhou, D., Kohli, P.: Neuro-symbolic program synthesis. arXiv preprint arXiv:1611.01855 (2016)
Ryan, L.: Efficient algorithms for clause-learning SAT solvers. Master's thesis, Simon Fraser University (2004)
Schulz, S.: E – a brainiac theorem prover. AI Commun. 15(2–3), 111–126 (2002)
Schulz, S.: We know (nearly) nothing! but can we learn? In: Reger, G., Traytel, D. (eds.) ARCADE 2017, 1st International Workshop on Automated Reasoning: Challenges, Applications, Directions, Exemplary Achievements, Gothenburg, Sweden, 6th August 2017. EPiC Series in Computing, vol. 51, pp. 29–32. EasyChair (2017). http://www.easychair.org/publications/paper/6kgF
Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L., Dill, D.L.: Learning a SAT solver from single-bit supervision. In: International Conference on Learning Representations (2019). https://openreview.net/forum?id=HJMC_iA5tm
Silver, D., et al.: Mastering the game of go without human knowledge. Nature 550(7676), 354–359 (2017)
Sutcliffe, G.: The CADE ATP system competition - CASC. AI Mag. 37(2), 99–101 (2016)
Tammet, T.: Gandalf. J. Autom. Reason. 18(2), 199–204 (1997). https://doi.org/10.1023/A:1005887414560
Urban, J., Sutcliffe, G., Pudlák, P., Vyskočil, J.: MaLARea SG1 - Machine learner for automated reasoning with semantic guidance. In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 441–456. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-71070-7_37
Wang, M., Tang, Y., Wang, J., Deng, J.: Premise selection for theorem proving by deep graph embedding. In: Advances in Neural Information Processing Systems, pp. 2786–2796 (2017)
Wetzler, N., Heule, M.J.H., Hunt, W.A.: DRAT-trim: Efficient checking and trimming using expressive clausal proofs. In: Sinz, C., Egly, U. (eds.) SAT 2014. LNCS, vol. 8561, pp. 422–429. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-09284-3_31
Whalen, D.: Holophrasm: a neural automated theorem prover for higher-order logic. arXiv preprint arXiv:1608.02644 (2016)
Wu, Y., et al.: Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)
Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: portfolio-based algorithm selection for SAT. J. Artif. Intell. Res. 32, 565–606 (2008). https://doi.org/10.1613/jair.2490
Acknowledgments
We thank Percy Liang, David L. Dill, and Marijn J. H. Heule for helpful discussions.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Selsam, D., Bjørner, N. (2019). Guiding High-Performance SAT Solvers with Unsat-Core Predictions. In: Janota, M., Lynce, I. (eds) Theory and Applications of Satisfiability Testing – SAT 2019. SAT 2019. Lecture Notes in Computer Science(), vol 11628. Springer, Cham. https://doi.org/10.1007/978-3-030-24258-9_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-24257-2
Online ISBN: 978-3-030-24258-9