


Research on assembly shop scheduling for mass customization based on DRL

Journal Information

Journal of Hefei University of Technology (Natural Science), July 2025, Vol. 48, No. 7, pp. 878-883

DOI: 10.3969/j.issn.1003-5060.2025.07.003



Authors

QU Xinhuai, ZHANG Huihui, DING Birong, MENG Guanjun

(School of Mechanical Engineering, Hefei University of Technology, Hefei 230009, China)

Abstract and Keywords

Abstract: To address the randomness and unpredictability of orders in mass-customization assembly shops, this paper proposes an assembly job shop scheduling optimization method based on deep reinforcement learning (DRL). First, an assembly job shop scheduling optimization model is established with the objectives of minimizing the number of product component replacements and the earliness/tardiness penalties of orders. The scheduling problem is then formulated as a Markov decision process, with the state, action, and reward functions suitably defined, and the resulting model is solved with an improved D3QN algorithm. Finally, simulation experiments show that the proposed method effectively reduces both the number of product component replacements and the earliness/tardiness penalties of orders.
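The two objectives named in the abstract can be sketched as follows. This is an illustrative sketch only: the weighted-sum scalarization, the penalty weights, and all function names are assumptions, not the authors' actual formulation.

```python
# Sketch of the bi-objective scheduling cost described in the abstract.
# Weights, names, and the weighted-sum combination are illustrative assumptions.

def changeover_count(sequence):
    """Count component replacements: one changeover occurs each time two
    consecutive orders require a different product component."""
    return sum(1 for a, b in zip(sequence, sequence[1:]) if a != b)

def earliness_tardiness_penalty(completion_times, due_dates, alpha=1.0, beta=2.0):
    """Sum of weighted earliness (alpha) and tardiness (beta) over all orders."""
    total = 0.0
    for c, d in zip(completion_times, due_dates):
        total += alpha * max(d - c, 0) + beta * max(c - d, 0)
    return total

def scheduling_cost(sequence, completion_times, due_dates, w1=1.0, w2=1.0):
    """Weighted-sum scalarization of the two objectives into one cost."""
    return (w1 * changeover_count(sequence)
            + w2 * earliness_tardiness_penalty(completion_times, due_dates))
```

In a DRL setting, the negative of such a cost (or its per-step increment) would typically serve as the reward signal.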

Keywords: mass customization; assembly shop; deep reinforcement learning (DRL); job shop scheduling; scheduling optimization model
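Since the solver is an improved D3QN (dueling double deep Q-network), a minimal NumPy sketch of the two standard ingredients behind that name may help. This illustrates generic D3QN building blocks under stated assumptions, not the authors' improved variant.

```python
import numpy as np

def dueling_q(value, advantage):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    return value + advantage - advantage.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN target: the next action is *selected* by the online network
    but *evaluated* by the target network, reducing overestimation bias."""
    best_action = np.argmax(q_online_next, axis=-1)
    q_eval = np.take_along_axis(
        q_target_next, best_action[..., None], axis=-1
    ).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_eval
```

In a scheduling context, each action would correspond to dispatching a candidate order (or applying a dispatching rule), and the Q-values would score those choices in the current shop state.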

Funding

Supported by the National Key Research and Development Program of China (2019YFB1705303)
