
Statistical Process Monitoring of Artificial Neural Networks

Pages 104-117 | Received 06 Sep 2022, Accepted 10 Jul 2023, Published online: 22 Sep 2023

References

  • Abraham, Y., Gerrits, B., Ludwig, M.-G., Rebhan, M., and Gubser Keller, C. (2017), “Exploring Glucocorticoid Receptor Agonists Mechanism of Action Through Mass Cytometry and Radial Visualizations,” Cytometry Part B: Clinical Cytometry, 92, 42–56. DOI: 10.1002/cyto.b.21499.
  • Aldridge, I., and Avellaneda, M. (2019), “Neural Networks in Finance: Design and Performance,” The Journal of Financial Data Science, 1, 39–62. DOI: 10.3905/jfds.2019.1.4.039.
  • Ali, S., Pievatolo, A., and Göb, R. (2016), “An Overview of Control Charts for High-Quality Processes,” Quality and Reliability Engineering International, 32, 2171–2189. DOI: 10.1002/qre.1957.
  • Angermueller, C., Pärnamaa, T., Parts, L., and Stegle, O. (2016), “Deep Learning for Computational Biology,” Molecular Systems Biology, 12, 878. DOI: 10.15252/msb.20156651.
  • Apsemidis, A., Psarakis, S., and Moguerza, J. M. (2020), “A Review of Machine Learning Kernel Methods in Statistical Process Monitoring,” Computers & Industrial Engineering, 142, 106376. DOI: 10.1016/j.cie.2020.106376.
  • Baena-García, M., del Campo-Ávila, J., Fidalgo, R., Bifet, A., Gavaldà, R., and Morales-Bueno, R. (2006), “Early Drift Detection Method,” in Fourth International Workshop on Knowledge Discovery from Data Streams (Vol. 6), pp. 77–86.
  • Ball, N. M., Brunner, R. J., Myers, A. D., and Tcheng, D. (2006), “Robust Machine Learning Applied to Astronomical Data Sets. I. Star-galaxy Classification of the Sloan Digital Sky Survey DR3 using Decision Trees,” The Astrophysical Journal, 650, 497. DOI: 10.1086/507440.
  • Barale, M., and Shirke, D. (2019), “Nonparametric Control Charts based on Data Depth for Location Parameter,” Journal of Statistical Theory and Practice, 13, 1–19. DOI: 10.1007/s42519-019-0041-z.
  • Baranowski, J., Dudek, A., and Mularczyk, R. (2021), “Transient Anomaly Detection Using Gaussian Process Depth Analysis,” in 2021 25th International Conference on Methods and Models in Automation and Robotics (MMAR), pp. 221–226, IEEE. DOI: 10.1109/MMAR49549.2021.9528470.
  • Bell, R. C., Jones-Farmer, L. A., and Billor, N. (2014), “A Distribution-Free Multivariate Phase I Location Control Chart for Subgrouped Data from Elliptical Distributions,” Technometrics, 56, 528–538. DOI: 10.1080/00401706.2013.879264.
  • Bifet, A., and Gavalda, R. (2007), “Learning from Time-Changing Data with Adaptive Windowing,” in Proceedings of the 2007 SIAM International Conference on Data Mining, pp. 443–448.
  • Bifet, A., Gavalda, R., Holmes, G., and Pfahringer, B. (2018), Machine Learning for Data Streams: With Practical Examples in MOA, Cambridge, MA: MIT Press.
  • Boone, J., and Chakraborti, S. (2012), “Two Simple Shewhart-Type Multivariate Nonparametric Control Charts,” Applied Stochastic Models in Business and Industry, 28, 130–140. DOI: 10.1002/asmb.900.
  • Breiman, L., Meisel, W., and Purcell, E. (1977), “Variable Kernel Estimates of Multivariate Densities,” Technometrics, 19, 135–144. DOI: 10.1080/00401706.1977.10489521.
  • Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J. (2000), “LOF: Identifying Density-based Local Outliers,” in Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, pp. 93–104.
  • Caro, L. D., Frias-Martinez, V., and Frias-Martinez, E. (2010), “Analyzing the Role of Dimension Arrangement for Data Visualization in Radviz,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 125–132, Springer.
  • Cascos, I., and López-Díaz, M. (2018), “Control Charts based on Parameter Depths,” Applied Mathematical Modelling, 53, 487–509. DOI: 10.1016/j.apm.2017.09.009.
  • Celano, G., Castagliola, P., Fichera, S., and Nenes, G. (2013), “Performance of t Control Charts in Short Runs with Unknown Shift Sizes,” Computers & Industrial Engineering, 64, 56–68. DOI: 10.1016/j.cie.2012.10.003.
  • Chiroma, H., Abdullahi, U. A., Alarood, A. A., Gabralla, L. A., Rana, N., Shuib, L., Hashem, I. A. T., Gbenga, D. E., Abubakar, A. I., Zeki, A. M., et al. (2018), “Progress on Artificial Neural Networks for Big Data Analytics: A Survey,” IEEE Access, 7, 70535–70551. DOI: 10.1109/ACCESS.2018.2880694.
  • Cook, R. D., and Ni, L. (2005), “Sufficient Dimension Reduction via Inverse Regression: A Minimum Discrepancy Approach,” Journal of the American Statistical Association, 100, 410–428. DOI: 10.1198/016214504000001501.
  • Corbière, C., Thome, N., Bar-Hen, A., Cord, M., and Pérez, P. (2019), “Addressing Failure Prediction by Learning Model Confidence,” in Advances in Neural Information Processing Systems (Vol. 32).
  • Dang, X., and Serfling, R. (2010), “Nonparametric Depth-based Multivariate Outlier Identifiers, and Masking Robustness Properties,” Journal of Statistical Planning and Inference, 140, 198–213. DOI: 10.1016/j.jspi.2009.07.004.
  • Demšar, J., and Bosnić, Z. (2018), “Detecting Concept Drift in Data Streams Using Model Explanation,” Expert Systems with Applications, 92, 546–559. DOI: 10.1016/j.eswa.2017.10.003.
  • Donoho, D. L., and Gasko, M. (1992), “Breakdown Properties of Location Estimates based on Halfspace Depth and Projected Outlyingness,” The Annals of Statistics, 20, 1803–1827. DOI: 10.1214/aos/1176348890.
  • Dyckerhoff, R., Mozharovskyi, P., and Nagy, S. (2021), “Approximate Computation of Projection Depths,” Computational Statistics & Data Analysis, 157, 107166. DOI: 10.1016/j.csda.2020.107166.
  • Emambocus, B. A. S., Jasser, M. B., and Amphawan, A. (2023), “A Survey on the Optimization of Artificial Neural Networks using Swarm Intelligence Algorithms,” IEEE Access, 11, 1280–1294. DOI: 10.1109/ACCESS.2022.3233596.
  • Fang, Z., Li, Y., Lu, J., Dong, J., Han, B., and Liu, F. (2022), “Is Out-of-Distribution Detection Learnable?” arXiv preprint arXiv:2210.14707.
  • Forrest, S., Perelson, A. S., Allen, L., and Cherukuri, R. (1994), “Self-Nonself Discrimination in a Computer,” in Proceedings of 1994 IEEE Computer Society Symposium on Research in Security and Privacy, pp. 202–212, IEEE.
  • Fort, S., Ren, J., and Lakshminarayanan, B. (2021), “Exploring the Limits of Out-of-Distribution Detection,” in Advances in Neural Information Processing Systems (Vol. 34), pp. 7068–7081.
  • Francisci, G., Nieto-Reyes, A., and Agostinelli, C. (2019), “Generalization of the Simplicial Depth: No Vanishment Outside the Convex Hull of the Distribution Support,” arXiv preprint arXiv:1909.02739.
  • Gama, J., Medas, P., Castillo, G., and Rodrigues, P. (2004), “Learning with Drift Detection,” in Brazilian Symposium on Artificial Intelligence, pp. 286–295, Springer.
  • Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., and Bouchachia, A. (2014), “A Survey on Concept Drift Adaptation,” ACM Computing Surveys (CSUR), 46, 1–37. DOI: 10.1145/2523813.
  • Gao, X., Pishdad-Bozorgi, P., Shelden, D. R., and Hu, Y. (2019), “Machine Learning Applications in Facility Life-Cycle Cost Analysis: A Review,” Computing in Civil Engineering 2019: Smart Cities, Sustainability, and Resilience, pp. 267–274.
  • Garcia, K. D., Poel, M., Kok, J. N., and de Carvalho, A. C. (2019), “Online Clustering for Novelty Detection and Concept Drift in Data Streams,” in EPIA Conference on Artificial Intelligence, pp. 448–459, Springer.
  • Gawlikowski, J., Tassi, C. R. N., Ali, M., Lee, J., Humt, M., Feng, J., Kruspe, A., Triebel, R., Jung, P., Roscher, R., et al. (2021), “A Survey of Uncertainty in Deep Neural Networks,” arXiv preprint arXiv:2107.03342.
  • Gemaque, R. N., Costa, A. F. J., Giusti, R., and Dos Santos, E. M. (2020), “An Overview of Unsupervised Drift Detection Methods,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10, e1381.
  • Ghazikhani, A., Monsefi, R., and Yazdi, H. S. (2013), “Ensemble of Online Neural Networks for Non-stationary and Imbalanced Data Streams,” Neurocomputing, 122, 535–544. DOI: 10.1016/j.neucom.2013.05.003.
  • Ghosh, A. K., Chaudhuri, P., and Sengupta, D. (2006), “Classification Using Kernel Density Estimates: Multiscale Analysis and Visualization,” Technometrics, 48, 120–132. DOI: 10.1198/004017005000000391.
  • Goldstein, M., and Uchida, S. (2016), “A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data,” PloS One, 11, e0152173. DOI: 10.1371/journal.pone.0152173.
  • Goodfellow, I., Bengio, Y., and Courville, A. (2016), Deep Learning, Cambridge, MA: MIT Press.
  • Guttormsson, S. E., Marks, R., El-Sharkawi, M., and Kerszenbaum, I. (1999), “Elliptical Novelty Grouping for On-line Short-Turn Detection of Excited Running Rotors,” IEEE Transactions on Energy Conversion, 14, 16–22. DOI: 10.1109/60.749142.
  • Haque, A., Khan, L., Baron, M., Thuraisingham, B., and Aggarwal, C. (2016), “Efficient Handling of Concept Drift and Concept Evolution over Stream Data,” in 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pp. 481–492. IEEE. DOI: 10.1109/ICDE.2016.7498264.
  • Heipke, C., and Rottensteiner, F. (2020), “Deep Learning for Geometric and Semantic Tasks in Photogrammetry and Remote Sensing,” Geo-spatial Information Science, 23, 10–19. DOI: 10.1080/10095020.2020.1718003.
  • Hendrycks, D., and Gimpel, K. (2016), “A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks,” arXiv preprint arXiv:1610.02136.
  • Hermans, M., and Schrauwen, B. (2013), “Training and Analysing Deep Recurrent Neural Networks,” in Advances in Neural Information Processing Systems (Vol. 26), pp. 190–198.
  • Hoffman, P., Grinstein, G., and Pinkney, D. (1999), “Dimensional Anchors: A Graphic Primitive for Multidimensional Multivariate Information Visualizations,” in Proceedings of the 1999 Workshop on New Paradigms in Information Visualization and Manipulation in Conjunction with the Eighth ACM International Conference on Information and Knowledge Management, pp. 9–16.
  • Hu, H., Kantardzic, M., and Sethi, T. S. (2020), “No Free Lunch Theorem for Concept Drift Detection in Streaming Data Classification: A Review,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10, e1327.
  • Huang, J., Zhu, Q., Yang, L., and Feng, J. (2016), “A Non-parameter Outlier Detection Algorithm based on Natural Neighbor,” Knowledge-Based Systems, 92, 71–77. DOI: 10.1016/j.knosys.2015.10.014.
  • Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., and Tygar, J. D. (2011), “Adversarial Machine Learning,” in Proceedings of the 4th ACM workshop on Security and Artificial Intelligence, pp. 43–58. DOI: 10.1145/2046684.2046692.
  • Ivanovs, J., and Mozharovskyi, P. (2021), “Distributionally Robust Halfspace Depth,” arXiv preprint arXiv:2101.00726.
  • Jones-Farmer, L. A., Woodall, W. H., Steiner, S. H., and Champ, C. W. (2014), “An Overview of Phase I Analysis for Process Improvement and Monitoring,” Journal of Quality Technology, 46, 265–280. DOI: 10.1080/00224065.2014.11917969.
  • Kan, S. H. (2003), Metrics and Models in Software Quality Engineering, Reading, MA: Addison-Wesley Professional.
  • Kim, Y., and Park, C. H. (2017), “An Efficient Concept Drift Detection Method for Streaming Data Under Limited Labeling,” IEICE Transactions on Information and Systems, 100, 2537–2546. DOI: 10.1587/transinf.2017EDP7091.
  • Klinkenberg, R., and Joachims, T. (2000), “Detecting Concept Drift with Support Vector Machines,” in ICML, pp. 487–494.
  • Klinkenberg, R., and Renz, I. (1998), “Adaptive Information Filtering: Learning in the Presence of Concept Drifts,” Learning for Text Categorization, pp. 33–40.
  • Krawczyk, B., and Woźniak, M. (2015), “One-Class Classifiers with Incremental Learning and Forgetting for Data Streams with Concept Drift,” Soft Computing, 19, 3387–3400. DOI: 10.1007/s00500-014-1492-5.
  • Krizhevsky, A., Hinton, G., et al. (2009), “Learning Multiple Layers of Features from Tiny Images,” Technical Report, University of Toronto, Toronto, ON, Canada.
  • Kuncheva, L. I. (2009), “Using Control Charts for Detecting Concept Change in Streaming Data,” Bangor University, p. 48.
  • Lange, T., Mosler, K., and Mozharovskyi, P. (2014), “Fast Nonparametric Classification based on Data Depth,” Statistical Papers, 55, 49–69. DOI: 10.1007/s00362-012-0488-4.
  • Lee, K., Lee, K., Lee, H., and Shin, J. (2018), “A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks,” in Advances in Neural Information Processing Systems (Vol. 31).
  • Lee, S., Kwak, M., Tsui, K.-L., and Kim, S. B. (2019), “Process Monitoring Using Variational Autoencoder for High-Dimensional Nonlinear Processes,” Engineering Applications of Artificial Intelligence, 83, 13–27. DOI: 10.1016/j.engappai.2019.04.013.
  • Li, J., Cuesta-Albertos, J. A., and Liu, R. Y. (2012), “DD-classifier: Nonparametric Classification Procedure based on DD-plot,” Journal of the American Statistical Association, 107, 737–753. DOI: 10.1080/01621459.2012.688462.
  • Li, K.-C. (1991), “Sliced Inverse Regression for Dimension Reduction,” Journal of the American Statistical Association, 86, 316–327. DOI: 10.1080/01621459.1991.10475035.
  • Li, L. (2007), “Sparse Sufficient Dimension Reduction,” Biometrika, 94, 603–613. DOI: 10.1093/biomet/asm044.
  • Li, L., and Nachtsheim, C. J. (2006), “Sparse Sliced Inverse Regression,” Technometrics, 48, 503–510. DOI: 10.1198/004017006000000129.
  • Li, P., Wu, X., Hu, X., and Wang, H. (2015), “Learning Concept-Drifting Data Streams with Random Ensemble Decision Trees,” Neurocomputing, 166, 68–83. DOI: 10.1016/j.neucom.2015.04.024.
  • Lin, Q., Zhao, Z., and Liu, J. S. (2019), “Sparse Sliced Inverse Regression via Lasso,” Journal of the American Statistical Association, 114, 1726–1739. DOI: 10.1080/01621459.2018.1520115.
  • Liu, F. T., Ting, K. M., and Zhou, Z.-H. (2008), “Isolation Forest,” in 2008 Eighth IEEE International Conference on Data Mining, pp. 413–422.
  • Liu, R. Y. (1990), “On a Notion of Data Depth based on Random Simplices,” The Annals of Statistics, 18, 405–414. DOI: 10.1214/aos/1176347507.
  • Liu, R. Y. (1995), “Control Charts for Multivariate Processes,” Journal of the American Statistical Association, 90, 1380–1387.
  • Liu, R. Y., Serfling, R. J., and Souvaine, D. L. (2006), Data Depth: Robust Multivariate Analysis, Computational Geometry, and Applications (Vol. 72), Providence, RI: American Mathematical Society.
  • Liu, R. Y., and Singh, K. (1993), “A Quality Index based on Data Depth and Multivariate Rank Tests,” Journal of the American Statistical Association, 88, 252–260. DOI: 10.2307/2290720.
  • Lu, J., Liu, A., Dong, F., Gu, F., Gama, J., and Zhang, G. (2018), “Learning Under Concept Drift: A Review,” IEEE Transactions on Knowledge and Data Engineering, 31, 2346–2363. DOI: 10.1109/TKDE.2018.2876857.
  • Mahalanobis, P. C. (1936), “On the Generalised Distance in Statistics,” in Proceedings of the National Institute of Sciences of India (Vol. 2), pp. 49–55.
  • Markou, M., and Singh, S. (2003a), “Novelty Detection: A Review—Part 1: Statistical Approaches,” Signal Processing, 83, 2481–2497. DOI: 10.1016/j.sigpro.2003.07.018.
  • Markou, M., and Singh, S. (2003b), “Novelty Detection: A Review—Part 2: Neural Network based Approaches,” Signal Processing, 83, 2499–2521.
  • Masud, M. M., Gao, J., Khan, L., Han, J., and Thuraisingham, B. (2009), “Integrating Novel Class Detection with Classification for Concept-Drifting Data Streams,” in Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2009, Bled, Slovenia, September 7-11, 2009, Proceedings, Part II 20, pp. 79–94, Springer.
  • Mejri, D., Limam, M., and Weihs, C. (2017), “Combination of Several Control Charts Based on Dynamic Ensemble Methods,” Mathematics and Statistics, 5, 117–129. DOI: 10.13189/ms.2017.050302.
  • Mejri, D., Limam, M., and Weihs, C. (2021), “A New Time Adjusting Control Limits Chart for Concept Drift Detection,” IFAC Journal of Systems and Control, 17, 100170. DOI: 10.1016/j.ifacsc.2021.100170.
  • Montgomery, D. C. (2020), Introduction to Statistical Quality Control, Hoboken, NJ: Wiley.
  • Moon, J., Kim, J., Shin, Y., and Hwang, S. (2020), “Confidence-Aware Learning for Deep Neural Networks,” in Proceedings of the 37th International Conference on Machine Learning (Vol. 119), pp. 7034–7044, PMLR.
  • Mosler, K. (2013), “Depth Statistics,” in Robustness and Complex Data Structures: Festschrift in Honour of Ursula Gather, eds. C. Becker, R. Fried, S. Kuhnt, pp. 17–34, Berlin: Springer.
  • Mosler, K., and Mozharovskyi, P. (2022), “Choosing Among Notions of Multivariate Depth Statistics,” Statistical Science, 37, 348–368. DOI: 10.1214/21-STS827.
  • Mozharovskyi, P. (2022), “Anomaly Detection Using Data Depth: Multivariate Case,” arXiv preprint arXiv:2210.02851.
  • Nishida, K., and Yamauchi, K. (2007), “Detecting Concept Drift Using Statistical Testing,” in Discovery Science (Vol. 4755), eds. V. Corruble, M. Takeda, and E. Suzuki, pp. 264–269, Berlin: Springer.
  • O’Shea, K., and Nash, R. (2015), “An Introduction to Convolutional Neural Networks,” arXiv preprint arXiv:1511.08458.
  • Otter, D. W., Medina, J. R., and Kalita, J. K. (2020), “A Survey of the Usages of Deep Learning for Natural Language Processing,” IEEE Transactions on Neural Networks and Learning Systems, 32, 604–624. DOI: 10.1109/TNNLS.2020.2979670.
  • Pandolfo, G., Iorio, C., Staiano, M., Aria, M., and Siciliano, R. (2021), “Multivariate Process Control Charts based on the Lp Depth,” Applied Stochastic Models in Business and Industry, 37, 229–250. DOI: 10.1002/asmb.2616.
  • Parekh, J., Mozharovskyi, P., and d’Alché-Buc, F. (2021), “A Framework to Learn with Interpretation,” in Advances in Neural Information Processing Systems (Vol. 34), eds. M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, pp. 24273–24285, Red Hook, NY: Curran Associates, Inc.
  • Parzen, E. (1962), “On Estimation of a Probability Density Function and Mode,” The Annals of Mathematical Statistics, 33, 1065–1076. DOI: 10.1214/aoms/1177704472.
  • Pearce, T., Brintrup, A., and Zhu, J. (2021), “Understanding Softmax Confidence and Uncertainty,” arXiv preprint arXiv:2106.04972.
  • Perdikis, T., and Psarakis, S. (2019), “A Survey on Multivariate Adaptive Control Charts: Recent Developments and Extensions,” Quality and Reliability Engineering International, 35, 1342–1362. DOI: 10.1002/qre.2521.
  • Piano, L., Garcea, F., Gatteschi, V., Lamberti, F., and Morra, L. (2022), “Detecting Drift in Deep Learning: A Methodology Primer,” IT Professional, 24, 53–60. DOI: 10.1109/MITP.2022.3191318.
  • Pidhorskyi, S., Almohsen, R., and Doretto, G. (2018), “Generative Probabilistic Novelty Detection with Adversarial Autoencoders,” in Advances in Neural Information Processing Systems (Vol. 31).
  • Pokotylo, O., Mozharovskyi, P., and Dyckerhoff, R. (2019), “Depth and Depth-Based Classification with R Package ddalpha,” Journal of Statistical Software, 91, 1–46. DOI: 10.18637/jss.v091.i05.
  • Psarakis, S. (2011), “The Use of Neural Networks in Statistical Process Control Charts,” Quality and Reliability Engineering International, 27, 641–650. DOI: 10.1002/qre.1227.
  • Psarakis, S. (2015), “Adaptive Control Charts: Recent Developments and Extensions,” Quality and Reliability Engineering International, 31, 1265–1280.
  • Qiu, P. (2014), Introduction to Statistical Process Control, Boca Raton, FL: CRC Press.
  • Roberts, S., and Tarassenko, L. (1994), “A Probabilistic Resource Allocating Network for Novelty Detection,” Neural Computation, 6, 270–284. DOI: 10.1162/neco.1994.6.2.270.
  • Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., and Williamson, R. (2001), “Estimating the Support of a High-Dimensional Distribution,” Neural Computation, 13, 1443–1471. DOI: 10.1162/089976601750264965.
  • Schubert, E., Zimek, A., and Kriegel, H.-P. (2014), “Generalized Outlier Detection with Flexible Kernel Density Estimates,” in Proceedings of the 2014 SIAM International Conference on Data Mining, pp. 542–550, SIAM. DOI: 10.1137/1.9781611973440.63.
  • Sergin, N. D., and Yan, H. (2021), “Toward a Better Monitoring Statistic for Profile Monitoring via Variational Autoencoders,” Journal of Quality Technology, 53, 1–46. DOI: 10.1080/00224065.2021.1903821.
  • Shrestha, A., and Mahmood, A. (2019), “Review of Deep Learning Algorithms and Architectures,” IEEE Access, 7, 53040–53065. DOI: 10.1109/ACCESS.2019.2912200.
  • Smith, L. N. (2018), “A Disciplined Approach to Neural Network Hyper-Parameters: Part 1–learning Rate, Batch Size, Momentum, and Weight Decay,” arXiv preprint arXiv:1803.09820.
  • Stoumbos, Z. G., Jones, L. A., Woodall, W. H., and Reynolds, M. R. (2001), “On Nonparametric Multivariate Control Charts based on Data Depth,” in Frontiers in Statistical Quality Control 6, eds. H.-J. Lenz, P.-T. Wilrich, pp. 207–227, Heidelberg: Springer.
  • Struyf, A. J., and Rousseeuw, P. J. (1999), “Halfspace Depth and Regression Depth Characterize the Empirical Distribution,” Journal of Multivariate Analysis, 69, 135–153. DOI: 10.1006/jmva.1998.1804.
  • Sun, Y., Ming, Y., Zhu, X., and Li, Y. (2022), “Out-of-Distribution Detection with Deep Nearest Neighbors,” in International Conference on Machine Learning, pp. 20827–20840. PMLR.
  • Tukey, J. W. (1975), “Mathematics and the Picturing of Data,” in Proceedings of the International Congress of Mathematicians, Vancouver, 1975 (Vol. 2), pp. 523–531.
  • Vakayil, A., and Joseph, V. R. (2022), “Data Twinning,” Statistical Analysis and Data Mining: The ASA Data Science Journal, 15, 598–610. DOI: 10.1002/sam.11574.
  • Vencálek, O. (2017), “Depth-based Classification for Multivariate Data,” Austrian Journal of Statistics, 46, 117–128. DOI: 10.17713/ajs.v46i3-4.677.
  • Villa-Pérez, M. E., Alvarez-Carmona, M. A., Loyola-Gonzalez, O., Medina-Pérez, M. A., Velazco-Rossell, J. C., and Choo, K.-K. R. (2021), “Semi-Supervised Anomaly Detection Algorithms: A Comparative Summary and Future Research Directions,” Knowledge-Based Systems, 218, 106878. DOI: 10.1016/j.knosys.2021.106878.
  • Wang, H., and Xia, Y. (2008), “Sliced Regression for Dimension Reduction,” Journal of the American Statistical Association, 103, 811–821. DOI: 10.1198/016214508000000418.
  • Wang, X., Wang, Z., Shao, W., Jia, C., and Li, X. (2019), “Explaining Concept Drift of Deep Learning Models,” in International Symposium on Cyberspace Safety and Security, pp. 524–534, Springer.
  • Weese, M., Martinez, W., Megahed, F. M., and Jones-Farmer, L. A. (2016), “Statistical Learning Methods Applied to Process Monitoring: An Overview and Perspective,” Journal of Quality Technology, 48, 4–24. DOI: 10.1080/00224065.2016.11918148.
  • Wu, H.-M. (2008), “Kernel Sliced Inverse Regression with Applications to Classification,” Journal of Computational and Graphical Statistics, 17, 590–610. DOI: 10.1198/106186008X345161.
  • Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., et al. (2022a), “OpenOOD: Benchmarking Generalized Out-of-Distribution Detection,” arXiv preprint arXiv:2210.07242.
  • Yang, J., Zhou, K., Li, Y., and Liu, Z. (2022b), “Generalized Out-of-Distribution Detection: A Survey,” arXiv preprint arXiv:2110.11334v2.
  • Yeganeh, A., Abbasi, S. A., Pourpanah, F., Shadman, A., Johannssen, A., and Chukhrova, N. (2022), “An Ensemble Neural Network Framework for Improving the Detection Ability of a Base Control Chart in Non-parametric Profile Monitoring,” Expert Systems with Applications, 204, 117572. DOI: 10.1016/j.eswa.2022.117572.
  • Yeung, D.-Y., and Chow, C. (2002), “Parzen-Window Network Intrusion Detectors,” in 2002 International Conference on Pattern Recognition (Vol. 4), pp. 385–388, IEEE.
  • Yeung, D.-Y., and Ding, Y. (2003), “Host-based Intrusion Detection Using Dynamic and Static Behavioral Models,” Pattern Recognition, 36, 229–243. DOI: 10.1016/S0031-3203(02)00026-2.
  • Yu, F., Wang, D., Shelhamer, E., and Darrell, T. (2018), “Deep Layer Aggregation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2403–2412.
  • Zan, T., Liu, Z., Su, Z., Wang, M., Gao, X., and Chen, D. (2020), “Statistical Process Control with Intelligence based on the Deep Learning Model,” Applied Sciences, 10, 308. DOI: 10.3390/app10010308.
  • Zhang, K., Bui, A. T., and Apley, D. W. (2023), “Concept Drift Monitoring and Diagnostics of Supervised Learning Models via Score Vectors,” Technometrics, 65, 137–149. DOI: 10.1080/00401706.2022.2124310.
  • Žliobaitė, I., Pechenizkiy, M., and Gama, J. (2016), “An Overview of Concept Drift Applications,” in Big Data Analysis: New Algorithms for a New Society, eds. N. Japkowicz, and J. Stefanowski, 91–114, Cham: Springer.
  • Zou, C., Wang, Z., and Tsung, F. (2012), “A Spatial Rank-based Multivariate EWMA Control Chart,” Naval Research Logistics (NRL), 59, 91–110. DOI: 10.1002/nav.21475.
  • Zuo, Y., and Serfling, R. (2000), “General Notions of Statistical Depth Function,” The Annals of Statistics, 28, 461–482.
