On a possibility of gradual model-learning
Proceedings of the 10th International Conference on Probabilistic Graphical Models, PMLR 138:245-256, 2020.
Abstract
In this paper, the term gradual learning describes a process in which an $n$-dimensional model is constructed in $n$ steps, each step increasing the dimensionality of the constructed model by one. The approach is explained using the apparatus of compositional models, since their algebraic properties seem to serve the purpose best. The paper also shows the equivalence of compositional models and Bayesian networks, which suggests that the approach applies to graphical models as well.
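The step-by-step construction described above can be illustrated with a minimal numpy sketch. It builds a joint distribution one variable at a time, attaching each new binary variable through a conditional distribution over the variables already present, in the spirit of the chain-rule factorisation that also underlies Bayesian networks. This is an illustrative assumption on our part, not the paper's composition operator, which is more general; the function `random_conditional` is a hypothetical helper introduced here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conditional(n_parent_states, n_states):
    """Random conditional table P(new | parents): each row sums to one."""
    t = rng.random((n_parent_states, n_states))
    return t / t.sum(axis=1, keepdims=True)

# Step 1: a one-dimensional model P(X1) over two states.
joint = np.array([0.6, 0.4])

# Steps 2..n: each step multiplies the current joint by a conditional
# of the new binary variable given all previously added variables,
# increasing the model's dimensionality by one.
for step in range(2, 5):                      # grow the model to 4 dimensions
    cond = random_conditional(joint.size, 2)  # P(X_step | X_1, ..., X_{step-1})
    joint = (joint.reshape(-1, 1) * cond).reshape(joint.shape + (2,))

print(joint.shape)  # a 4-dimensional model
# joint still sums to 1, since every conditional row is normalised
```

Each iteration leaves the distribution properly normalised, so the result after $n$ steps is a valid $n$-dimensional probability distribution.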