We use Bayesian inference to estimate your model parameters from a dataset. The Baysig language gives you considerable flexibility in the model to be estimated.
The output of this calibration is the full joint probability distribution over the parameters given the data. This contains much more information than a simple point estimate: it conveys the confidence in each parameter and the correlations between parameters that are necessary for hedging against uncertainty.
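As an illustration of why the joint distribution matters, here is a minimal sketch (in Python, not Baysig; the model, grid sizes, and noise level are all assumptions for the example) of a brute-force posterior for a linear model, showing the intercept–slope correlation that a point estimate would discard:

```python
import numpy as np

# Hypothetical example: joint posterior over (intercept a, slope b) of a
# linear model y = a + b*x + noise, via a brute-force grid approximation.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, size=x.size)

a_grid = np.linspace(0, 2, 101)
b_grid = np.linspace(1, 3, 101)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Gaussian likelihood with known noise sd = 0.3, flat prior on the grid
resid = y[None, None, :] - (A[..., None] + B[..., None] * x[None, None, :])
log_post = -0.5 * np.sum((resid / 0.3) ** 2, axis=-1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior means and the a-b correlation; a point estimate carries
# neither the spread nor this dependence between parameters.
a_mean = np.sum(post * A)
b_mean = np.sum(post * B)
cov = np.sum(post * (A - a_mean) * (B - b_mean))
sd_a = np.sqrt(np.sum(post * (A - a_mean) ** 2))
sd_b = np.sqrt(np.sum(post * (B - b_mean) ** 2))
corr = cov / (sd_a * sd_b)
```

Here `corr` comes out strongly negative: a larger slope can be traded against a smaller intercept and still fit the data, which is exactly the kind of dependence needed when hedging.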
We use posterior predictive checks to validate models against real datasets. This technique generates synthetic datasets by repeatedly simulating from the model with parameters estimated from the data, and compares these synthetic datasets to the real one.
The advantage of the posterior predictive method is that it can probe very specific weaknesses of the model. If such a weakness is found, the test can directly inform an improvement to the model that repairs the underlying cause of the failure.
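The procedure above can be sketched as follows (in Python rather than Baysig; the Poisson model, the conjugate Gamma posterior, and the choice of the sample variance as test statistic are assumptions made for this example):

```python
import numpy as np

# Hypothetical posterior predictive check for a Poisson count model.
rng = np.random.default_rng(1)
data = rng.poisson(4.0, size=50)  # stand-in for the real dataset

# Conjugate Gamma posterior for the rate (Gamma(1, 1) prior assumed)
post_shape = 1 + data.sum()
post_rate = 1 + data.size

# Repeatedly: draw a rate from the posterior, simulate a synthetic
# dataset, and record a test statistic (here the sample variance).
obs_stat = data.var()
reps = []
for _ in range(2000):
    lam = rng.gamma(post_shape, 1 / post_rate)
    synthetic = rng.poisson(lam, size=data.size)
    reps.append(synthetic.var())

# Posterior predictive p-value: values near 0 or 1 flag a specific
# weakness (e.g. overdispersion the Poisson model cannot capture).
p_value = np.mean(np.array(reps) >= obs_stat)
```

Because the test statistic is chosen by the modeller, a failure points at exactly what the model gets wrong, which suggests how to repair it.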
Based on a probabilistic model calibration, we can easily calculate almost any quantity of interest.
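A minimal sketch of why this is easy (in Python; the lognormal stand-in for posterior samples, the threshold, and the half-life transformation are invented for illustration): once calibration yields posterior samples, derived quantities are just transformations of those samples.

```python
import numpy as np

# Hypothetical stand-in for posterior samples of a rate parameter theta
rng = np.random.default_rng(2)
theta = rng.lognormal(mean=0.2, sigma=0.3, size=10_000)

# Probability the parameter exceeds a decision threshold
p_exceeds = np.mean(theta > 1.0)

# Full posterior of any derived quantity, e.g. a half-life log(2)/theta,
# with a 95% credible interval read off directly from the samples
half_life = np.log(2) / theta
lo, hi = np.percentile(half_life, [2.5, 97.5])
```

No new derivation is needed per quantity: any function of the parameters inherits a full posterior distribution for free.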
Calibrated models can be used to simulate a large number of synthetic datasets for further analysis.
The full uncertainty from Bayesian inference is propagated into any calculation made. For instance, the range of forecast values includes both the uncertainty in estimating the current state and the uncertainty coming from the stochastic evolution of the model in the future.
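This propagation can be sketched as follows (in Python; the random walk with drift, the posterior over the drift, and the horizon are all assumptions for the example). Each forecast path uses a fresh posterior draw, so the forecast spread combines parameter uncertainty with the process's own randomness:

```python
import numpy as np

# Hypothetical forecast for a random walk with drift mu and known step
# noise sigma, where mu itself is uncertain (a stand-in posterior).
rng = np.random.default_rng(3)
current_state = 10.0
sigma = 0.5                   # step noise, assumed known
mu_mean, mu_sd = 0.1, 0.05    # stand-in posterior over the drift

horizon, n_paths = 20, 5000
finals = []
for _ in range(n_paths):
    mu = rng.normal(mu_mean, mu_sd)         # parameter uncertainty
    steps = rng.normal(mu, sigma, horizon)  # stochastic evolution
    finals.append(current_state + steps.sum())
finals = np.array(finals)

# The forecast spread exceeds what process noise alone would give
var_total = finals.var()
var_process_only = horizon * sigma**2
```

Dropping the posterior draw and fixing `mu = mu_mean` would understate the forecast range, which is exactly the mistake full propagation avoids.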
You really should be doing this, too.
Baysig uses stochastic calculus to work on time series with irregular spacing, for instance financial datasets with no prices on weekends.
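To illustrate why a stochastic-calculus formulation handles uneven gaps naturally, here is a sketch (in Python; geometric Brownian motion, the parameter values, and the weekend-gapped time grid are assumptions for the example). The exact transition only depends on the elapsed time dt, so weekend gaps need no special handling:

```python
import numpy as np

# Hypothetical price path from geometric Brownian motion, sampled at
# irregular times: trading days with weekend gaps in the time grid.
rng = np.random.default_rng(4)
times = np.array([0, 1, 2, 3, 4, 7, 8, 9, 10, 11, 14])  # days; 4->7 is a weekend
mu, sigma, s0 = 0.05 / 252, 0.2 / np.sqrt(252), 100.0

prices = [s0]
for dt in np.diff(times):
    z = rng.normal()
    # Exact GBM transition over a gap of length dt:
    # S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * z)
    prices.append(prices[-1] * np.exp((mu - 0.5 * sigma**2) * dt
                                      + sigma * np.sqrt(dt) * z))
prices = np.array(prices)
```

A discrete-time model would need ad hoc rules for the missing weekend steps; the continuous-time transition density absorbs any gap length directly.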