References
- Achituve, I., Shamsian, A., Navon, A., Chechik, G., and Fetaya, E. (2021), “Personalized Federated Learning with Gaussian Processes,” in Advances in Neural Information Processing Systems (Vol. 34), pp. 8392–8406, Curran Associates, Inc.
- Álvarez, M. A., and Lawrence, N. D. (2011), “Computationally Efficient Convolved Multiple Output Gaussian Processes,” Journal of Machine Learning Research, 12, 1459–1500.
- Arivazhagan, M. G., Aggarwal, V., Singh, A. K., and Choudhary, S. (2019), “Federated Learning with Personalization Layers,” arXiv preprint arXiv:1912.00818.
- Bae, B., Kim, H., Lim, H., Liu, Y., Han, L. D., and Freeze, P. B. (2018), “Missing Data Imputation for Traffic Flow Speed Using Spatio-Temporal Cokriging,” Transportation Research Part C: Emerging Technologies, 88, 124–139. DOI: 10.1016/j.trc.2018.01.015.
- Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., and Shmatikov, V. (2020), “How to Backdoor Federated Learning,” in International Conference on Artificial Intelligence and Statistics (Vol. 108), pp. 2938–2948, PMLR.
- Barry, R. P., and Ver Hoef, J. M. (1996), “Blackbox Kriging: Spatial Prediction Without Specifying Variogram Models,” Journal of Agricultural, Biological, and Environmental Statistics, 1, 297–322. DOI: 10.2307/1400521.
- Birolini, A. (2013), Reliability Engineering: Theory and Practice, Berlin: Springer.
- Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017), “Variational Inference: A Review for Statisticians,” Journal of the American Statistical Association, 112, 859–877. DOI: 10.1080/01621459.2017.1285773.
- Boyle, P., and Frean, M. (2004), “Dependent Gaussian Processes,” in Advances in Neural Information Processing Systems (Vol. 17), MIT Press.
- Cao, Y., and Fleet, D. J. (2014), “Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions,” arXiv preprint arXiv:1410.7827.
- Chen, H., Zheng, L., AL Kontar, R., and Raskutti, G. (2020), “Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes,” in Advances in Neural Information Processing Systems (Vol. 33), pp. 2722–2733, Curran Associates, Inc.
- Chen, J., Mak, S., Joseph, V. R., and Zhang, C. (2021), “Function-on-Function Kriging, with Applications to Three-Dimensional Printing of Aortic Tissues,” Technometrics, 63, 384–395. DOI: 10.1080/00401706.2020.1801255.
- Cheng, L.-F., Dumitrascu, B., Darnell, G., Chivers, C., Draugelis, M., Li, K., and Engelhardt, B. E. (2020), “Sparse Multi-Output Gaussian Processes for Online Medical Time Series Prediction,” BMC Medical Informatics and Decision Making, 20, 1–23. DOI: 10.1186/s12911-020-1069-4.
- Chung, S., Al Kontar, R., and Wu, Z. (2022), “Weakly Supervised Multi-Output Regression via Correlated Gaussian Processes,” INFORMS Journal on Data Science, 1, 115–137. DOI: 10.1287/ijds.2022.0018.
- Damianou, A., and Lawrence, N. D. (2013), “Deep Gaussian Processes,” in Artificial Intelligence and Statistics (Vol. 31), pp. 207–215, PMLR.
- Deisenroth, M., and Ng, J. W. (2015), “Distributed Gaussian Processes,” in International Conference on Machine Learning (Vol. 37), pp. 1481–1490, PMLR.
- Deng, X., Lin, C. D., Liu, K.-W., and Rowe, R. (2017), “Additive Gaussian Process for Computer Models with Qualitative and Quantitative Factors,” Technometrics, 59, 283–292. DOI: 10.1080/00401706.2016.1211554.
- Fricker, T. E., Oakley, J. E., and Urban, N. M. (2013), “Multivariate Gaussian Process Emulators with Nonseparable Covariance Structures,” Technometrics, 55, 47–56. DOI: 10.1080/00401706.2012.715835.
- Gotway, C. A., and Young, L. J. (2002), “Combining Incompatible Spatial Data,” Journal of the American Statistical Association, 97, 632–648. DOI: 10.1198/016214502760047140.
- Guhaniyogi, R., and Banerjee, S. (2018), “Meta-Kriging: Scalable Bayesian Modeling and Inference for Massive Spatial Datasets,” Technometrics, 60, 430–444. DOI: 10.1080/00401706.2018.1437474.
- Handcock, M. S., and Stein, M. L. (1993), “A Bayesian Analysis of Kriging,” Technometrics, 35, 403–410. DOI: 10.1080/00401706.1993.10485354.
- Hanzely, F., and Richtárik, P. (2020), “Federated Learning of a Mixture of Global and Local Models,” arXiv preprint arXiv:2002.05516.
- Hensman, J., Fusi, N., and Lawrence, N. D. (2013), “Gaussian Processes for Big Data,” in Uncertainty in Artificial Intelligence, pp. 282–290, Arlington, VA: AUAI Press.
- Higdon, D. (2002), “Space and Space-Time Modeling Using Process Convolutions,” in Quantitative Methods for Current Environmental Issues, eds. C. W. Anderson, V. Barnett, P. C. Chatwin, and A. H. El-Shaarawi, pp. 37–56, London: Springer.
- Huang, J., and Gramacy, R. B. (2021), “Multi-Output Calibration of a Honeycomb Seal via On-site Surrogates,” arXiv preprint arXiv:2102.00391.
- Journel, A. G., and Huijbregts, C. J. (1976), Mining Geostatistics, Caldwell, NJ: The Blackburn Press.
- Kingma, D. P., and Ba, J. (2015), “Adam: A Method for Stochastic Optimization,” in International Conference on Learning Representations.
- Koller, D., and Friedman, N. (2009), Probabilistic Graphical Models: Principles and Techniques, Cambridge, MA: MIT Press.
- Kontar, R., Shi, N., Yue, X., Chung, S., Byon, E., Chowdhury, M., Jin, J., Kontar, W., Masoud, N., Nouiehed, M., et al. (2021), “The Internet of Federated Things (IoFT),” IEEE Access, 9, 156071–156113. DOI: 10.1109/ACCESS.2021.3127448.
- Kontar, R., Son, J., Zhou, S., Sankavaram, C., Zhang, Y., and Du, X. (2017), “Remaining Useful Life Prediction based on the Mixed Effects Model with Mixture Prior Distribution,” IISE Transactions, 49, 682–697. DOI: 10.1080/24725854.2016.1263771.
- Kontar, R., Zhou, S., Sankavaram, C., Du, X., and Zhang, Y. (2018), “Nonparametric Modeling and Prognosis of Condition Monitoring Signals using Multivariate Gaussian Convolution Processes,” Technometrics, 60, 484–496. DOI: 10.1080/00401706.2017.1383310.
- Kontoudis, G. P., and Stilwell, D. J. (2022), “Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning,” arXiv preprint arXiv:2203.02865.
- Laínez-Aguirre, J. M., Mockus, L., Orçun, S., Blau, G., and Reklaitis, G. V. (2016), “A Decomposition Strategy for the Variational Inference of Complex Systems,” Technometrics, 58, 84–94. DOI: 10.1080/00401706.2014.995833.
- Lee, J., Kwon, D., and Pecht, M. G. (2018), “Reduction of Li-ion Battery Qualification Time based on Prognostics and Health Management,” IEEE Transactions on Industrial Electronics, 66, 7310–7315. DOI: 10.1109/TIE.2018.2880701.
- Li, J., and Zimmerman, D. L. (2015), “Model-based Sampling Design for Multivariate Geostatistics,” Technometrics, 57, 75–86. DOI: 10.1080/00401706.2013.873003.
- Li, T., Hu, S., Beirami, A., and Smith, V. (2021), “Ditto: Fair and Robust Federated Learning through Personalization,” in International Conference on Machine Learning (Vol. 139), pp. 6357–6368, PMLR.
- Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. (2020a), “Federated Optimization in Heterogeneous Networks,” in Proceedings of Machine Learning and Systems (Vol. 2), pp. 429–450.
- Li, T., Sanjabi, M., Beirami, A., and Smith, V. (2020b), “Fair Resource Allocation in Federated Learning,” in International Conference on Learning Representations.
- Li, X., Huang, K., Yang, W., Wang, S., and Zhang, Z. (2019), “On the Convergence of FedAvg on Non-IID Data,” arXiv preprint arXiv:1907.02189.
- Li, Y., Zhou, Q., Huang, X., and Zeng, L. (2018), “Pairwise Estimation of Multivariate Gaussian Process Models with Replicated Observations: Application to Multivariate Profile Monitoring,” Technometrics, 60, 70–78. DOI: 10.1080/00401706.2017.1305298.
- Liang, P. P., Liu, T., Ziyin, L., Allen, N. B., Auerbach, R. P., Brent, D., Salakhutdinov, R., and Morency, L.-P. (2020), “Think Locally, Act Globally: Federated Learning with Local and Global Representations,” arXiv preprint arXiv:2001.01523.
- Lindstrom, M. J., and Bates, D. M. (1988), “Newton–Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data,” Journal of the American Statistical Association, 83, 1014–1022. DOI: 10.2307/2290128.
- Majumdar, A., and Gelfand, A. E. (2007), “Multivariate Spatial Modeling for Geostatistical Data Using Convolved Covariance Functions,” Mathematical Geology, 39, 225–245. DOI: 10.1007/s11004-006-9072-6.
- Mak, S., Sung, C.-L., Wang, X., Yeh, S.-T., Chang, Y.-H., Joseph, V. R., Yang, V., and Wu, C. F. J. (2018), “An Efficient Surrogate Model for Emulation and Physics Extraction of Large Eddy Simulations,” Journal of the American Statistical Association, 113, 1443–1456. DOI: 10.1080/01621459.2017.1409123.
- McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017), “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Artificial Intelligence and Statistics, pp. 1273–1282, PMLR.
- Moreno-Muñoz, P., Artés, A., and Álvarez, M. (2018), “Heterogeneous Multi-Output Gaussian Process Prediction,” in Advances in Neural Information Processing Systems (Vol. 31), Curran Associates, Inc.
- Moreno-Muñoz, P., Artés, A., and Álvarez, M. (2021), “Modular Gaussian Processes for Transfer Learning,” in Advances in Neural Information Processing Systems (Vol. 34), pp. 24730–24740, Curran Associates, Inc.
- Nguyen, T. V., and Bonilla, E. V. (2014), “Collaborative Multi-Output Gaussian Processes,” in Uncertainty in Artificial Intelligence, pp. 643–652, AUAI Press.
- Park, J., Han, D.-J., Choi, M., and Moon, J. (2021), “Sageflow: Robust Federated Learning against both Stragglers and Adversaries,” in Advances in Neural Information Processing Systems (Vol. 34), pp. 840–851, Curran Associates, Inc.
- Perdikaris, P., Raissi, M., Damianou, A., Lawrence, N. D., and Karniadakis, G. E. (2017), “Nonlinear Information Fusion Algorithms for Data-Efficient Multi-Fidelity Modelling,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473, 20160751. DOI: 10.1098/rspa.2016.0751.
- Salimans, T., Kingma, D., and Welling, M. (2015), “Markov Chain Monte Carlo and Variational Inference: Bridging the Gap,” in International Conference on Machine Learning (Vol. 37), pp. 1218–1226, PMLR.
- Sattler, F., Wiedemann, S., Müller, K.-R., and Samek, W. (2019), “Robust and Communication-Efficient Federated Learning from Non-i.i.d. Data,” IEEE Transactions on Neural Networks and Learning Systems, 31, 3400–3413. DOI: 10.1109/TNNLS.2019.2944481.
- Sauer, A., Gramacy, R. B., and Higdon, D. (2023), “Active Learning for Deep Gaussian Process Surrogates,” Technometrics, 65, 4–18. DOI: 10.1080/00401706.2021.2008505.
- Shi, N., and Kontar, R. A. (2022), “Personalized Federated Learning via Domain Adaptation with an Application to Distributed 3D Printing,” Technometrics, 1–22 (just-accepted). DOI: 10.1080/00401706.2022.2157882.
- Snelson, E., and Ghahramani, Z. (2005), “Sparse Gaussian Processes Using Pseudo-Inputs,” in Advances in Neural Information Processing Systems (Vol. 18), MIT Press.
- Son, J., Zhou, Q., Zhou, S., Mao, X., and Salman, M. (2013), “Evaluation and Comparison of Mixed Effects Model based Prognosis for Hard Failure,” IEEE Transactions on Reliability, 62, 379–394. DOI: 10.1109/TR.2013.2259205.
- Sun, Z., Kairouz, P., Suresh, A. T., and McMahan, H. B. (2019), “Can You Really Backdoor Federated Learning?” arXiv preprint arXiv:1911.07963.
- Dinh, C. T., Tran, N. H., and Nguyen, T. D. (2020), “Personalized Federated Learning with Moreau Envelopes,” in Advances in Neural Information Processing Systems (Vol. 33), pp. 21394–21405, Curran Associates, Inc.
- Titsias, M. (2009), “Variational Learning of Inducing Variables in Sparse Gaussian Processes,” in Artificial Intelligence and Statistics, pp. 567–574, PMLR.
- Tresp, V. (2000), “A Bayesian Committee Machine,” Neural Computation, 12, 2719–2741. DOI: 10.1162/089976600300014908.
- Vapnik, V. (1991), “Principles of Risk Minimization for Learning Theory,” in Advances in Neural Information Processing Systems (Vol. 4), Morgan Kaufmann.
- Ver Hoef, J. M., and Barry, R. P. (1998), “Constructing and Fitting Models for Cokriging and Multivariable Spatial Prediction,” Journal of Statistical Planning and Inference, 69, 275–294. DOI: 10.1016/S0378-3758(97)00162-6.
- Wang, K., Mathews, R., Kiddon, C., Eichner, H., Beaufays, F., and Ramage, D. (2019), “Federated Evaluation of On-device Personalization,” arXiv preprint arXiv:1910.10252.
- Xie, C., Chen, M., Chen, P.-Y., and Li, B. (2021), “CRFL: Certifiably Robust Federated Learning against Backdoor Attacks,” in International Conference on Machine Learning (Vol. 139), pp. 11372–11382, PMLR.
- Yu, H., Guo, K., Karami, M., Chen, X., Zhang, G., and Poupart, P. (2022), “Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process,” arXiv preprint arXiv:2206.06357.
- Yue, X., and Al Kontar, R. (2021), “An Alternative Gaussian Process Objective based on the Rényi Divergence,” preprint.
- Yue, X., and Kontar, R. A. (2021), “Federated Gaussian Process: Convergence, Automatic Personalization and Multi-Fidelity Modeling,” arXiv preprint arXiv:2111.14008.
- Yue, X., Nouiehed, M., and Al Kontar, R. (2022), “GIFAIR-FL: A Framework for Group and Individual Fairness in Federated Learning,” INFORMS Journal on Data Science, early access. DOI: 10.1287/ijds.2022.0022.
- Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., and Chandra, V. (2018), “Federated Learning with Non-IID Data,” arXiv preprint arXiv:1806.00582.
- Zhu, H., Xu, J., Liu, S., and Jin, Y. (2021), “Federated Learning on Non-IID Data: A Survey,” Neurocomputing, 465, 371–390. DOI: 10.1016/j.neucom.2021.07.098.