• #115 Using Time Series to Estimate Uncertainty, with Nate Haines

  • Sep 17 2024
  • Length: 1 hr and 40 mins
  • Podcast

  • Summary

  • Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

    • My Intuitive Bayes Online Courses
    • 1:1 Mentorship with me

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

    Visit our Patreon page to unlock exclusive Bayesian swag ;)

    Takeaways:

    • State space models and traditional time series models are well suited to forecasting loss ratios in the insurance industry, although actuaries have been slow to adopt modern statistical methods.
    • Working with limited data is a challenge, but informed priors and hierarchical models help stabilize estimates when individual series are short (see the hierarchical-prior sketch after these takeaways).
    • Bayesian model stacking blends the predictions of several candidate models, taking the best of each (a from-scratch stacking sketch follows these takeaways).
    • Model comparison is done using out-of-sample performance metrics such as the expected log pointwise predictive density (ELPD). Because of the time-series nature of the data, brute-force leave-future-out cross-validation is often used (also sketched below).
    • Stacking and averaging methods use these out-of-sample metrics to determine the weights for blending the predictions, which makes model stacking a powerful way to combine candidate models. Hierarchical stacking in particular is useful when the weights are assumed to vary with covariates.
    • BayesBlend is a Python package developed by Ledger Investing that simplifies the implementation of stacking models, including pseudo Bayesian model averaging, stacking, and hierarchical stacking.
    • Evaluating the performance of Bayesian time series models requires considering multiple metrics: log-likelihood-based metrics such as ELPD, as well as more absolute metrics such as RMSE and mean absolute error.
    • Robust variants of metrics like ELPD help with extreme outliers, for example estimating the ELPD with a Student-t location estimator rather than the usual sample sum/mean (sketched below).
    • It is important to evaluate model performance from several perspectives and to weigh the trade-offs between metrics. Judging models on traditional metrics alone can limit understanding of, and trust in, the model; additional factors such as interpretability, maintainability, and productionization also matter.
    • Simulation-based calibration (SBC) is a valuable tool for assessing parameter recovery and model correctness: it supports the interpretation of model parameters and helps catch coding errors (a minimal SBC loop is sketched below).
    • In industries like insurance, where regulations may restrict model choices, classical statistical approaches still play a significant role. However, there is potential for Bayesian methods and generative AI in certain areas.
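
    To make the hierarchical-priors point concrete, here is a minimal PyMC sketch (not the models discussed in the episode): a few short, simulated loss-ratio series are partially pooled toward an industry-level mean, with an informed prior standing in for domain knowledge. All names and numbers are invented for illustration.

        import numpy as np
        import pymc as pm

        # Toy data: 4 insurance programs with only 5 observed loss ratios each.
        rng = np.random.default_rng(42)
        program_idx = np.repeat(np.arange(4), 5)
        loss_ratio = rng.normal([0.62, 0.70, 0.58, 0.75], 0.05, size=(5, 4)).T.ravel()

        with pm.Model() as hierarchical_model:
            # Informed industry-level prior: loss ratios tend to sit around 60-70%.
            mu_industry = pm.Normal("mu_industry", mu=0.65, sigma=0.10)
            sigma_program = pm.HalfNormal("sigma_program", sigma=0.10)

            # Partial pooling: each program's mean is shrunk toward the industry
            # mean, which stabilizes estimates when a program has little data.
            mu_program = pm.Normal("mu_program", mu=mu_industry, sigma=sigma_program, shape=4)
            sigma_obs = pm.HalfNormal("sigma_obs", sigma=0.05)
            pm.Normal("obs", mu=mu_program[program_idx], sigma=sigma_obs, observed=loss_ratio)

            idata = pm.sample(1000, tune=1000, random_seed=42)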
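
    The stacking idea itself fits in a few lines. This sketch does not use BayesBlend's API; it optimizes the stacking objective directly on simulated pointwise log predictive densities for three hypothetical candidate models, with a softmax parameterization keeping the weights on the simplex.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import logsumexp, softmax

        # Simulated pointwise out-of-sample log predictive densities:
        # rows are held-out observations, columns are candidate models.
        rng = np.random.default_rng(0)
        lpd = np.column_stack([
            rng.normal(-1.0, 0.3, size=200),  # model A (best on average)
            rng.normal(-1.2, 0.3, size=200),  # model B
            rng.normal(-1.1, 0.3, size=200),  # model C
        ])

        def neg_stacking_objective(z):
            w = softmax(z)  # unconstrained vector -> weights on the simplex
            # Stacking maximizes the log score of the weighted predictive mixture.
            return -np.sum(logsumexp(lpd + np.log(w), axis=1))

        result = minimize(neg_stacking_objective, x0=np.zeros(lpd.shape[1]))
        weights = softmax(result.x)
        print("stacking weights:", np.round(weights, 3))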
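
    Brute-force leave-future-out cross-validation refits the model on the data up to each cutpoint and scores the next, genuinely held-out observation. This sketch uses a toy conjugate normal model with known observation noise so each "refit" is an analytic update; in practice every cutpoint would require a full model fit.

        import numpy as np
        from scipy.stats import norm

        # Toy yearly loss-ratio series.
        rng = np.random.default_rng(1)
        y = 0.65 + rng.normal(0, 0.05, size=15)

        # Conjugate normal model with known observation sd, so "refitting" at each
        # cutpoint is an analytic posterior update rather than a full MCMC run.
        prior_mu, prior_sd, obs_sd = 0.65, 0.10, 0.05
        min_history = 5  # earliest cutpoint with enough data to fit

        pointwise_elpd = []
        for t in range(min_history, len(y)):
            past = y[:t]  # fit only on data observed before time t
            post_var = 1 / (1 / prior_sd**2 + len(past) / obs_sd**2)
            post_mu = post_var * (prior_mu / prior_sd**2 + past.sum() / obs_sd**2)
            pred_sd = np.sqrt(post_var + obs_sd**2)  # posterior predictive sd
            # Log predictive density of the next, truly held-out observation.
            pointwise_elpd.append(norm.logpdf(y[t], loc=post_mu, scale=pred_sd))

        print("LFO-CV ELPD estimate:", np.sum(pointwise_elpd))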
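
    The robust ELPD variant can be illustrated by replacing the sample mean of the pointwise ELPD contributions with a Student-t location estimate. The exact estimator discussed in the episode may differ, so treat this as a sketch of the idea.

        import numpy as np
        from scipy.stats import t as student_t

        # Pointwise ELPD contributions with a few extreme outliers, as happens when
        # a handful of held-out observations are badly mispredicted.
        rng = np.random.default_rng(2)
        pointwise_elpd = rng.normal(-1.0, 0.3, size=300)
        pointwise_elpd[:3] = [-25.0, -18.0, -30.0]

        n = len(pointwise_elpd)

        # Standard estimate: n times the sample mean (equivalently, the sum).
        elpd_sum = n * pointwise_elpd.mean()

        # Robust variant: estimate the location with a Student-t fit, which
        # down-weights the outliers, then scale back up by n.
        df, loc, scale = student_t.fit(pointwise_elpd)
        elpd_robust = n * loc

        print("sum/mean estimate: ", round(elpd_sum, 1))
        print("t-location estimate:", round(elpd_robust, 1))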
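
    A minimal simulation-based calibration loop: draw a parameter from the prior, simulate data from it, fit the same model, and record the rank of the true parameter among the posterior draws. A conjugate normal model keeps the "fit" analytic here; with MCMC the draws would come from the sampler. If the model and inference are correct, the ranks are uniform.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sims, n_obs, n_draws = 1000, 20, 100
        prior_mu, prior_sd, obs_sd = 0.0, 1.0, 0.5

        ranks = []
        for _ in range(n_sims):
            # 1. Draw a "true" parameter from the prior and simulate data from it.
            theta_true = rng.normal(prior_mu, prior_sd)
            y = rng.normal(theta_true, obs_sd, size=n_obs)

            # 2. Fit the same model (analytic conjugate posterior here; MCMC in practice).
            post_var = 1 / (1 / prior_sd**2 + n_obs / obs_sd**2)
            post_mu = post_var * (prior_mu / prior_sd**2 + y.sum() / obs_sd**2)
            draws = rng.normal(post_mu, np.sqrt(post_var), size=n_draws)

            # 3. Rank of the true parameter among the posterior draws.
            ranks.append(np.sum(draws < theta_true))

        # With a correct model and sampler, ranks are uniform on 0..n_draws.
        hist, _ = np.histogram(ranks, bins=np.arange(0, n_draws + 2, 10))
        print("rank histogram (should be roughly flat):", hist)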
