MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
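As a generic illustration of the "solving" half of such an interface (a hedged sketch, not POMDPs.jl code; the explicit transition-list representation is an assumption), tabular value iteration for a discrete MDP:

```python
def value_iteration(n_states, n_actions, T, R, gamma=0.95, tol=1e-6):
    """Tabular value iteration.

    T[s][a] is a list of (next_state, prob) pairs and R[s][a] is the
    immediate reward -- a hypothetical explicit MDP representation."""
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            # Bellman optimality backup: best one-step lookahead value
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                for a in range(n_actions)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:  # stop when the value function has converged
            return V
```

A solver library exposes essentially this loop behind a `solve(mdp)` call, with the transition and reward models supplied through the problem-definition interface.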
Code accompanying the NeurIPS 2025 paper "Sequential Monte Carlo for Policy Optimization in Continuous POMDPs".
A Julia package for solving POMDPs with belief compression. Part of the POMDPs.jl community.
Online solver based on Monte Carlo tree search for POMDPs with continuous state, action, and observation spaces.
The PO-UCT algorithm (aka POMCP) implemented in Julia
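POMCP maintains the belief at each search node as a collection of state particles. POMCP itself updates unweighted particles by rejection; the weighted propagate-weight-resample update sketched below is a common alternative, with `transition` and `obs_likelihood` as hypothetical generative-model callbacks:

```python
import random

def update_belief(particles, action, observation, transition, obs_likelihood):
    """One particle-filter belief update: propagate, weight, resample.

    `transition(s, a)` samples a successor state and `obs_likelihood(o, s2, a)`
    returns p(o | s2, a); both are assumed model callbacks."""
    # propagate each particle through the generative model
    next_states = [transition(s, action) for s in particles]
    # weight each propagated state by how well it explains the observation
    weights = [obs_likelihood(observation, s2, action) for s2 in next_states]
    if sum(weights) == 0:
        raise ValueError("particle depletion: observation has zero likelihood")
    # resample with replacement, proportionally to the weights
    return random.choices(next_states, weights=weights, k=len(particles))
```

After the update, the resampled particle set approximates the posterior belief over states given the action taken and the observation received.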
Source code for the papers: Real-Time Risky Fault-Chain Search using Time-Varying Graph RNNs (ICLR) / GRNN-Based Real-Time Fault Chain Prediction (IEEE TPS). Built with PyTorch.
A framework to build and solve POMDP problems. Documentation: https://h2r.github.io/pomdp-py/
A C++ framework for MDPs and POMDPs with Python bindings
Implementation of the Deep Q-learning algorithm to solve MDPs
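Deep Q-learning replaces a lookup table with a neural network; the tabular update it generalizes can be sketched as follows (a minimal illustration, with `step(s, a)` as a hypothetical environment callback returning `(next_state, reward, done)` and state 0 assumed as the start state):

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, horizon=100):
    """Tabular Q-learning; DQN approximates this table with a network."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0  # assumed start state
        for _ in range(horizon):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            # bootstrap toward r + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s = s2
    return Q
```

DQN adds experience replay and a target network on top of this update to stabilize learning with function approximation.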
Concise and friendly interfaces for defining MDP and POMDP models for use with POMDPs.jl solvers
An empirical analysis of multiple algorithms from the POMDPs.jl package, examining the performance of each.
A gallery of POMDPs.jl problems
A collection of POMDP domains in robotics.
Adaptive stress testing of black-box systems within POMDPs.jl
Interface for defining discrete and continuous-space MDPs and POMDPs in python. Compatible with the POMDPs.jl ecosystem.
Pytorch code for "Learning Belief Representations for Imitation Learning in POMDPs" (UAI 2019)
A Julia implementation of the POMCP algorithm for solving POMDPs.
A POMDP solver using Littman-Cassandra's Witness algorithm.