Methods, Models, & Theories

An Image-Based Human-Robot Collision Avoidance Scheme: A Proof of Concept

Pages 112-122 | Received 05 Dec 2022, Accepted 02 Jun 2023, Published online: 22 Jun 2023

References

  • Alhwarin, F., Ferrein, A., & Scholl, I. (2014). IR stereo kinect: Improving depth images by combining structured light with IR stereo. Pacific Rim International Conference on Artificial Intelligence, 409–421.
  • Bohannon, R. W., & Andrews, A. W. (2011). Normal walking speed: A descriptive meta-analysis. Physiotherapy, 97(3), 182–189. https://doi.org/10.1016/j.physio.2010.12.004
  • Cao, Z., Hidalgo, G., Simon, T., Wei, S.-E., & Sheikh, Y. (2021). OpenPose: Realtime multi-person 2D pose estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(1), 172–186. https://doi.org/10.1109/TPAMI.2019.2929257
  • Cao, Z., Simon, T., Wei, S.-E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity fields. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7291–7299.
  • Chen, J.-H., & Song, K.-T. (2018). Collision-free motion planning for human-robot collaborative safety under cartesian constraint [Paper presentation]. 2018 IEEE International Conference on Robotics and Automation (ICRA), 4348–4354. https://doi.org/10.1109/ICRA.2018.8460185
  • Du, G., Long, S., Li, F., & Huang, X. (2018). Active collision avoidance for human-robot interaction with ukf, expert system, and artificial potential field method. Frontiers in Robotics and AI, 5, 125. https://doi.org/10.3389/frobt.2018.00125
  • Fang, H.-S., Li, J., Tang, H., Xu, C., Zhu, H., Xiu, Y., Li, Y.-L., & Lu, C. (2023). Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6), 7157–7173. https://doi.org/10.1109/TPAMI.2022.3222784
  • Frid, E., Bresin, R., & Alexanderson, S. (2018). Perception of mechanical sounds inherent to expressive gestures of a NAO robot-implications for movement sonification of humanoids. Proceedings of the 15th Sound and Music Computing Conference. Limassol, Cyprus, 2018.
  • Furtado, J. S., Liu, H. H. T., Lai, G., Lacheray, H., & Desouza-Coelho, J. (2019). Comparative analysis of optitrack motion capture systems. In Advances in Motion Sensing and Control for Robotic Applications (pp. 15–31). Springer.
  • Wu, Y., Kirillov, A., Massa, F., Lo, W.-Y., & Girshick, R. (2019). Detectron2. https://github.com/facebookresearch/detectron2
  • Halme, R.-J., Lanz, M., Kämäräinen, J., Pieters, R., Latokartano, J., & Hietanen, A. (2018). Review of vision-based safety systems for human-robot collaboration. Procedia CIRP, 72, 111–116. https://doi.org/10.1016/j.procir.2018.03.043
  • Kendall, A., Grimes, M., & Cipolla, R. (2015). Posenet: A convolutional network for real-time 6-dof camera relocalization. Proceedings of the IEEE International Conference on Computer Vision, 2938–2946.
  • Matthias, B., Kock, S., Jerregard, H., Kallman, M., Lundberg, I., & Mellander, R. (2011). Safety of collaborative industrial robots: Certification possibilities for a collaborative assembly robot concept. 2011 IEEE International Symposium on Assembly and Manufacturing (ISAM), 1–6.
  • Mendat, C. C., & Wogalter, M. S. (2006). Warning channel: Modality and media. Handbook of Warnings, 123.
  • Michalos, G., Makris, S., Spiliotopoulos, J., Misios, I., Tsarouchi, P., & Chryssolouris, G. (2014). ROBO-PARTNER: Seamless human-robot cooperation for intelligent, flexible and safe operations in the assembly factories of the future. Procedia CIRP, 23, 71–76. https://doi.org/10.1016/j.procir.2014.10.079
  • Mohammed, A., Schmidt, B., & Wang, L. (2017). Active collision avoidance for human–robot collaboration driven by vision sensors. International Journal of Computer Integrated Manufacturing, 30(9), 970–980. https://doi.org/10.1080/0951192X.2016.1268269
  • Mohammed, M., Jeong, H., & Lee, J. Y. (2021). Human-robot collision avoidance scheme for industrial settings based on injury classification. Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction [Paper presentation], 549–551. https://doi.org/10.1145/3434074.3447232
  • Murashov, V., Hearl, F., & Howard, J. (2016). Working safely with robot workers: Recommendations for the new workplace. Journal of Occupational and Environmental Hygiene, 13(3), D61–D71. https://doi.org/10.1080/15459624.2015.1116700
  • NIOSH. (2017). NIOSH Presents: An Occupational Safety and Health Perspective on Robotics Applications in the Workplace. National Institute for Occupational Safety and Health.
  • Pavllo, D., Feichtenhofer, C., Grangier, D., & Auli, M. (2019). 3d human pose estimation in video with temporal convolutions and semi-supervised training. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7753–7762.
  • Plantard, P., Auvinet, E., Le Pierres, A. S., & Multon, F. (2015). Pose estimation with a kinect for ergonomic studies: Evaluation of the accuracy using a virtual mannequin. Sensors, 15(1), 1785–1803.
  • Polverini, M. P., Zanchettin, A. M., & Rocco, P. (2014). Real-time collision avoidance in human-robot interaction based on kinetostatic safety field. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 4136–4141.
  • Schäffer, A. A., Eiberger, O., Grebenstein, M., Haddadin, S., Ott, C., Wimböck, T., Wolf, S., & Hirzinger, G. (2008). Soft robotics, from torque feedback-controlled lightweight robots to intrinsically compliant systems. IEEE Robotics & Automation Magazine, 15(3), 20–30. https://doi.org/10.1109/MRA.2008.927979
  • Schlegl, T., Kröger, T., Gaschler, A., Khatib, O., & Zangl, H. (2013). Virtual whiskers—Highly responsive robot collision avoidance. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 5373–5379.
  • Schmidt, B., & Wang, L. (2013). Contact-less and programming-less human-robot collaboration. Procedia CIRP, 7, 545–550. https://doi.org/10.1016/j.procir.2013.06.030
  • Sherwani, F., Asad, M. M., & Ibrahim, B. (2020). Collaborative robots and industrial revolution 4.0 (ir 4.0) [Paper presentation]. 2020 International Conference on Emerging Trends in Smart Technologies (ICETST), 1–5. https://doi.org/10.1109/ICETST49965.2020.9080724
  • Stanton, N. A. (1994). Human factors in alarm design. CRC Press.
  • Wang, L., Schmidt, B., & Nee, A. Y. C. (2013). Vision-guided active collision avoidance for human-robot collaborations. Manufacturing Letters, 1(1), 5–8. https://doi.org/10.1016/j.mfglet.2013.08.001
  • Xie, Z., Lu, L., Wang, H., Li, L., Fitts, E. P., & Xu, X. (2022). A human-robot collision avoidance method using a single camera. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 66(1), 2244–2248. https://doi.org/10.1177/1071181322661540
  • Zahray, L., Savery, R., Syrkett, L., & Weinberg, G. (2020). Robot Gesture Sonification to Enhance Awareness of Robot Status and Enjoyment of Interaction [Paper presentation]. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 978–985. https://doi.org/10.1109/RO-MAN47096.2020.9223452
  • Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334. https://doi.org/10.1109/34.888718
  • Zhang, Z. (2012). Microsoft kinect sensor and its effect. IEEE Multimedia, 19(2), 4–10. https://doi.org/10.1109/MMUL.2012.24
