Running Simulations & Tools
This chapter assumes you already know how to assemble graphs and engines (see
:doc:`../user_guide`). Here we focus on operationalising the flow:
`FGBuilder` → engine → `Simulator` → snapshots.
1. Quick Smoke Test (main.py)
The repository root includes main.py, which generates random factor graphs and runs multiple engine variants.
```shell
uv run python main.py
```
What it does:
- Builds 10 random factor graphs (`FGBuilder.build_random_graph`) with 50 variables, domain size 10, and density 0.25.
- Configures `BPEngine`, `DampingSCFGEngine`, `SplitEngine`, and TRW variants using defaults from `EngineDefaults`, `PolicyDefaults`, and `SimulatorDefaults`.
- Runs the `Simulator` across each engine/graph combination, timing total runtimes.
- Plots aggregated cost trajectories if results are available.
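The timing loop that `main.py` performs can be sketched generically as follows; `run_engine` here is a stand-in placeholder, not a PropFlow function:

```python
import time

# Stand-in for constructing an engine on a graph and calling its run() method.
def run_engine(engine_name, graph_index):
    time.sleep(0.001)  # placeholder for actual message passing

# Time each engine variant over a small batch of graphs.
totals = {}
for engine_name in ("baseline", "damping"):
    start = time.perf_counter()
    for graph_index in range(3):
        run_engine(engine_name, graph_index)
    totals[engine_name] = time.perf_counter() - start
```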
Troubleshooting: If you see `ModuleNotFoundError: No module named 'propflow'`, ensure the `src/` directory is on `PYTHONPATH`, or run via `uv run` / `pip install -e .`.
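A quick way to check whether the package resolves on the current interpreter's path, using only the standard library:

```python
import importlib.util

def check_import(module_name: str) -> bool:
    """Return True if module_name resolves on the current sys.path."""
    return importlib.util.find_spec(module_name) is not None

if not check_import("propflow"):
    print("propflow not importable; run `pip install -e .` or add src/ to PYTHONPATH")
```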
2. Using the Simulator API
`Simulator` (`src/propflow/simulator.py`) accepts a dict of engine configurations and a list of factor graphs.
```python
from propflow import BPEngine, FGBuilder, Simulator
from propflow.configs import CTFactories

engines = {"baseline": {"class": BPEngine}}
fg = FGBuilder.build_random_graph(
    num_vars=20,
    domain_size=5,
    ct_factory=CTFactories.RANDOM_INT,
    ct_params={"low": 0, "high": 25},
    density=0.3,
)

sim = Simulator(engines)
results = sim.run_simulations([fg], max_iter=1000)
sim.plot_results()
```
Key kwargs:
- `max_iter`: defaults to `SimulatorDefaults().default_max_iter` (5000).
- `log_level`: accepts symbolic levels (`"INFORMATIVE"`, `"HIGH"`, etc.).

Internally, `Simulator` parallelises runs via `multiprocessing.Pool`, with a graceful fallback to sequential execution when necessary.
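The pool-with-fallback pattern looks roughly like the sketch below. A `ThreadPool` stands in here so the snippet runs anywhere without pickling concerns; the real `Simulator` uses `multiprocessing.Pool`, which exposes the same `map()` interface. `run_job` is a placeholder, not PropFlow code:

```python
from multiprocessing.pool import ThreadPool

# Stand-in for running one engine/graph combination and returning its result.
def run_job(job):
    engine_name, graph_index = job
    return (engine_name, graph_index, graph_index * 10.0)

jobs = [("baseline", i) for i in range(4)]
try:
    # Parallel path: distribute jobs across pool workers.
    with ThreadPool(processes=2) as pool:
        results = pool.map(run_job, jobs)
except (OSError, ValueError):
    # Graceful fallback: run the same jobs sequentially.
    results = [run_job(job) for job in jobs]
```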
3. Command-Line Interface (bp-sim)
The CLI is defined in src/propflow/cli.py. Currently it exposes a version check and placeholder messaging. Extend this module if you wish to support command-line configuration of simulations.
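One way such an extension could start is an `argparse` layer over the simulator defaults. This is a hypothetical sketch: `--num-vars` and `--max-iter` are illustrative flags, not part of the current `bp-sim` CLI:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="bp-sim")
    # Hypothetical flags for command-line simulation configuration.
    parser.add_argument("--num-vars", type=int, default=50,
                        help="variables in the generated graph (hypothetical)")
    parser.add_argument("--max-iter", type=int, default=5000,
                        help="iteration budget for the simulator (hypothetical)")
    return parser

args = build_parser().parse_args(["--num-vars", "20"])
```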
```shell
uv run bp-sim --version
```
4. Snapshot Capture & Analysis
Snapshots are always available via :attr:`engine.snapshots`. Persist them by
serialising to JSON, and leverage the visualiser/analyzer utilities inside
`propflow.snapshots`.
```python
import json
from pathlib import Path

from propflow import BPEngine
from propflow.snapshots import AnalysisReport, SnapshotAnalyzer, SnapshotVisualizer

engine = BPEngine(factor_graph=fg)  # fg built as in the previous section
engine.run(max_iter=100)
snapshots = list(engine.snapshots)

Path("results").mkdir(exist_ok=True)
with open("results/run_001_snapshots.json", "w", encoding="utf-8") as handle:
    json.dump(
        [
            {
                "step": snap.step,
                "assignments": snap.assignments,
                "global_cost": snap.global_cost,
            }
            for snap in snapshots
        ],
        handle,
        indent=2,
    )

viz = SnapshotVisualizer(snapshots)
viz.plot_argmin_per_variable(show=True)

analyzer = SnapshotAnalyzer(snapshots)
report = AnalysisReport(analyzer)
summary = report.to_json(step_idx=len(snapshots) - 1)
```
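Once a trace is serialised with that `step`/`assignments`/`global_cost` schema, post-processing needs nothing beyond the standard library. A minimal sketch, using inline sample records rather than a real run:

```python
import json

# Sample records in the serialised schema (step / assignments / global_cost).
payload = json.dumps([
    {"step": 0, "assignments": {"x1": 0, "x2": 1}, "global_cost": 42.0},
    {"step": 1, "assignments": {"x1": 2, "x2": 1}, "global_cost": 37.5},
])

# Re-load the trace and pick the step with the lowest global cost.
records = json.loads(payload)
best = min(records, key=lambda r: r["global_cost"])
```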
5. Examples Directory
- `examples/quick_start.py`: minimal two-variable graph and damped BP run.
- `examples/run_simulator.py`: random-graph batch comparison with `Simulator`.
- `examples/test_diffusion_engine.py`: demonstrates spatial diffusion behaviour.

Use these scripts as templates, but keep problem sizes small when validating an installation.
Run any example with:
```shell
uv run python examples/quick_start.py
```
6. Testing & Validation
Unit Tests
```shell
uv run python -m pytest -q
```
Focus areas: BP engine behaviour, policies, utilities, snapshots, and simulator orchestration.
Coverage
```shell
uv run python -m pytest --cov=src --cov-report=term-missing
```
Static Analysis
```shell
uv run flake8
uv run mypy src
uv run black --check .
```
7. Logging & Artefacts
- Logs default to `configs/logs/`. Ensure this directory is writable when deploying.
- Engine history and simulator outputs can be persisted via `Simulator` or custom scripts.
- Use `results/` or dedicated experiment directories to store plots, JSON traces, or CSV exports.
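A simple convention for keeping artefacts from separate runs apart is a timestamped directory per run; the `results/` layout here is a suggestion, not PropFlow API:

```python
from datetime import datetime, timezone
from pathlib import Path

# One directory per run, named by UTC timestamp, for plots and JSON traces.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
run_dir = Path("results") / f"run-{stamp}"
run_dir.mkdir(parents=True, exist_ok=True)
```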
With these tools you can execute and monitor simulations. Next, read Deployment Playbooks to see how to package and run PropFlow in different environments.