Bayesian and Monte Carlo Methods

Backward Importance Sampling for Online Estimation of State Space Models

Pages 1447-1460 | Received 06 May 2021, Accepted 11 Jan 2023, Published online: 17 Mar 2023

References

  • Aït-Sahalia, Y. (2008), “Closed-Form Likelihood Expansions for Multivariate Diffusions,” Annals of Statistics, 36, 906–937.
  • Andersson, P., and Kohatsu-Higa, A. (2017), “Unbiased Simulation of Stochastic Differential Equations using Parametrix Expansions,” Bernoulli, 23, 2028–2057. DOI: 10.3150/16-BEJ803.
  • Andrieu, C., and Roberts, G. O. (2009), “The Pseudo-Marginal Approach for Efficient Monte Carlo Computations,” The Annals of Statistics, 37, 697–725. DOI: 10.1214/07-AOS574.
  • Beskos, A., Papaspiliopoulos, O., and Roberts, G. O. (2006), “Retrospective Exact Simulation of Diffusion Sample Paths with Applications,” Bernoulli, 12, 1077–1098. DOI: 10.3150/bj/1165269151.
  • Beskos, A., Papaspiliopoulos, O., Roberts, G. O., and Fearnhead, P. (2006), “Exact and Computationally Efficient Likelihood-based Estimation for Discretely Observed Diffusion Processes,” (with discussion), Journal of the Royal Statistical Society, Series B, 68, 333–382. DOI: 10.1111/j.1467-9868.2006.00552.x.
  • Briers, M., Doucet, A., and Maskell, S. (2010), “Smoothing Algorithms for State–Space Models,” Annals of the Institute of Statistical Mathematics, 62, 61–89. DOI: 10.1007/s10463-009-0236-2.
  • Candanedo, L. M., Feldheim, V., and Deramaix, D. (2017), “A Methodology based on Hidden Markov Models for Occupancy Detection and a Case Study in a Low Energy Residential Building,” Energy and Buildings, 148, 327–341. DOI: 10.1016/j.enbuild.2017.05.031.
  • Cappé, O., Moulines, E., and Rydén, T. (2005), Inference in Hidden Markov Models, New York: Springer.
  • Cho, K., van Merriënboer, B., Gulcehre, C., Bougares, F., Schwenk, H., and Bengio, Y. (2014), “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar: Association for Computational Linguistics, pp. 1724–1734. DOI: 10.3115/v1/D14-1179.
  • Del Moral, P., Doucet, A., and Singh, S. S. (2010), “A Backward Particle Interpretation of Feynman-Kac Formulae,” ESAIM: Mathematical Modelling and Numerical Analysis, 44, 947–975. DOI: 10.1051/m2an/2010048.
  • Del Moral, P., Doucet, A., and Singh, S. S. (2015), “Uniform Stability of a Particle Approximation of the Optimal Filter Derivative,” SIAM Journal on Control and Optimization, 53, 1278–1304. DOI: 10.1137/140993703.
  • Dempster, A., Laird, N., and Rubin, D. (1977), “Maximum Likelihood from Incomplete Data via the EM Algorithm,” (with discussion), Journal of the Royal Statistical Society, Series B, 39, 1–38. DOI: 10.1111/j.2517-6161.1977.tb01600.x.
  • Douc, R., Garivier, A., Moulines, E., and Olsson, J. (2011), “Sequential Monte Carlo Smoothing for General State Space Hidden Markov Models,” The Annals of Applied Probability, 21, 2109–2145. DOI: 10.1214/10-AAP735.
  • Douc, R., Moulines, E., and Stoffer, D. (2014), Nonlinear Time Series: Theory, Methods and Applications with R Examples, Boca Raton, FL: CRC Press.
  • Dubarry, C., and Le Corff, S. (2013), “Non-asymptotic Deviation Inequalities for Smoothed Additive Functionals in Nonlinear State-Space Models,” Bernoulli, 19, 2222–2249. DOI: 10.3150/12-BEJ450.
  • Fearnhead, P., Latuszynski, K., Roberts, G. O., and Sermaidis, G. (2017), “Continuous-Time Importance Sampling: Monte Carlo Methods which Avoid Time-Discretisation Error,” arXiv preprint arXiv:1712.06201.
  • Fearnhead, P., Papaspiliopoulos, O., Roberts, G., and Stuart, A. (2010), “Random Weight Particle Filtering of Continuous Time Stochastic Processes,” Journal of the Royal Statistical Society, Series B, 72, 497–512. DOI: 10.1111/j.1467-9868.2010.00744.x.
  • Fearnhead, P., Papaspiliopoulos, O., and Roberts, G. O. (2008), “Particle Filters for Partially Observed Diffusions,” Journal of the Royal Statistical Society, Series B, 70, 755–777. DOI: 10.1111/j.1467-9868.2008.00661.x.
  • Fearnhead, P., Wyncoll, D., and Tawn, J. (2010), “A Sequential Smoothing Algorithm with Linear Computational Cost,” Biometrika, 97, 447–464. DOI: 10.1093/biomet/asq013.
  • Gassiat, É., Cleynen, A., and Robin, S. (2016), “Inference in Finite State Space Nonparametric Hidden Markov Models and Applications,” Statistics and Computing, 26, 61–71. DOI: 10.1007/s11222-014-9523-8.
  • Gerber, M., and Chopin, N. (2017), “Convergence of Sequential Quasi-Monte Carlo Smoothing Algorithms,” Bernoulli, 23, 2951–2987. DOI: 10.3150/16-BEJ834.
  • Gloaguen, P., Etienne, M.-P., and Le Corff, S. (2018), “Online Sequential Monte Carlo Smoother for Partially Observed Diffusion Processes,” EURASIP Journal on Advances in Signal Processing, 2018, 9. DOI: 10.1186/s13634-018-0530-3.
  • Gloaguen, P., Le Corff, S., and Olsson, J. (2022), “A Pseudo-Marginal Sequential Monte Carlo Online Smoothing Algorithm,” Bernoulli, 28, 2606–2633. DOI: 10.3150/21-BEJ1431.
  • Gordon, N. J., Salmond, D. J., and Smith, A. F. (1993), “Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation,” IEE Proceedings F (Radar and Signal Processing), 140, 107–113. DOI: 10.1049/ip-f-2.1993.0015.
  • Hansen, N. (2006), “The CMA Evolution Strategy: A Comparing Review,” in Towards a New Evolutionary Computation, eds. J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, pp. 75–102, Berlin: Springer.
  • Hening, A., and Nguyen, D. H. (2018), “Persistence in Stochastic Lotka–Volterra Food Chains with Intraspecific Competition,” Bulletin of Mathematical Biology, 80, 2527–2560. DOI: 10.1007/s11538-018-0468-5.
  • Hochreiter, S., and Schmidhuber, J. (1997), “Long Short-Term Memory,” Neural Computation, 9, 1735–1780. DOI: 10.1162/neco.1997.9.8.1735.
  • Kushner, H. J., and Yin, G. G. (1997), Stochastic Approximation Algorithms and Applications, New York: Springer.
  • Le Gland, F., and Mevel, L. (1997), “Recursive Estimation in HMMs,” in Proceedings of the IEEE Conference on Decision and Control, pp. 3468–3473.
  • Martin, J. S., Jasra, A., Singh, S. S., Whiteley, N., Del Moral, P., and McCoy, E. (2014), “Approximate Bayesian Computation for Smoothing,” Stochastic Analysis and Applications, 32, 397–420.
  • Michelot, T., Langrock, R., and Patterson, T. A. (2016), “moveHMM: An R Package for the Statistical Modelling of Animal Movement Data using Hidden Markov Models,” Methods in Ecology and Evolution, 7, 1308–1315.
  • Mikolov, T., Karafiát, M., Burget, L., Černockỳ, J., and Khudanpur, S. (2010), “Recurrent Neural Network based Language Model,” in Eleventh Annual Conference of the International Speech Communication Association (INTERSPEECH).
  • Mozer, M. C. (1989), “A Focused Backpropagation Algorithm for Temporal Pattern Recognition,” Complex Systems, 3, 349–381.
  • Nguyen, T., Le Corff, S., and Moulines, É. (2017), “On the Two-Filter Approximations of Marginal Smoothing Distributions in General State-Space Models,” Advances in Applied Probability, 50, 154–177.
  • Odum, E. P., and Barrett, G. W. (1971), Fundamentals of Ecology (Vol. 3), Philadelphia: Saunders.
  • Olsson, J., and Alenlöv, J. W. (2020), “Particle-based Online Estimation of Tangent Filters with Application to Parameter Estimation in Nonlinear State-Space Models,” Annals of the Institute of Statistical Mathematics, 72, 545–576.
  • Olsson, J., and Ströjby, J. (2011), “Particle-based Likelihood Inference in Partially Observed Diffusion Processes using Generalised Poisson Estimators,” Electronic Journal of Statistics, 5, 1090–1122.
  • Olsson, J., and Westerborn, J. (2017), “Efficient Particle-based Online Smoothing in General Hidden Markov Models: The Paris Algorithm,” Bernoulli, 23, 1951–1996.
  • Polyak, B. T., and Juditsky, A. B. (1992), “Acceleration of Stochastic Approximation by Averaging,” SIAM Journal on Control and Optimization, 30, 838–855.
  • Rabiner, L. (1989), “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proceedings of the IEEE, 77, 257–286.
  • Särkkä, S. (2013), Bayesian Filtering and Smoothing, New York: Cambridge University Press.
  • Särkkä, S., Vehtari, A., and Lampinen, J. (2007), “Rao-Blackwellized Particle Filter for Multiple Target Tracking,” Information Fusion, 8, 2–15.
  • Sutskever, I., Martens, J., and Hinton, G. E. (2011), “Generating Text with Recurrent Neural Networks,” in Proceedings of the 28th International Conference on Machine Learning (ICML).
  • Sutskever, I., Vinyals, O., and Le, Q. V. (2014), “Sequence to Sequence Learning with Neural Networks,” arXiv preprint arXiv:1409.3215.
  • Tadić, V. (2010), “Analyticity, Convergence, and Convergence Rate of Recursive Maximum-Likelihood Estimation in Hidden Markov Models,” IEEE Transactions on Information Theory, 56, 6406–6432.
  • Taylor, S. J. (1982), “Financial Returns Modelled by the Product of Two Stochastic Processes–A Study of the Daily Sugar Prices 1961–79,” in Time Series Analysis: Theory and Practice (Vol. 1), ed. O. D. Anderson, pp. 203–226, Amsterdam: North-Holland.
  • Wahl, J. C. (2021), stochvolTMB: Likelihood Estimation of Stochastic Volatility Models, R package version 0.2.0.
  • Wang, X., Lebarbier, E., Aubert, J., and Robin, S. (2017), “Variational Inference for Coupled Hidden Markov Models Applied to the Joint Detection of Copy Number Variations,” The International Journal of Biostatistics, 15.
  • Yau, C., Papaspiliopoulos, O., Roberts, G. O., and Holmes, C. (2011), “Bayesian Non-parametric Hidden Markov Models with Applications in Genomics,” Journal of the Royal Statistical Society, Series B, 73, 1–21.
  • Yonekura, S., and Beskos, A. (2020), “Online Smoothing for Diffusion Processes Observed with Noise,” arXiv preprint arXiv:2003.12247.
  • Zucchini, W., MacDonald, I., and Langrock, R. (2017), Hidden Markov Models for Time Series: An Introduction Using R, Boca Raton, FL: CRC Press.
