Research Article

Deep reinforcement learning for adaptive flexible job shop scheduling: coping with variability and uncertainty

Pages 387-405 | Received 10 Nov 2023, Accepted 15 Apr 2024, Published online: 03 May 2024

