Article

Multi-Strategy Discrete Teaching–Learning-Based Optimization Algorithm to Solve No-Wait Flow-Shop-Scheduling Problem

1 Lanzhou Modern Vocational College, Lanzhou 730300, China
2 School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(7), 1430; https://doi.org/10.3390/sym15071430
Submission received: 8 June 2023 / Revised: 3 July 2023 / Accepted: 10 July 2023 / Published: 17 July 2023
(This article belongs to the Section Computer)

Abstract

To address the problems of the single evolutionary approach, decreasing diversity, and the inhomogeneity and meaninglessness of the destruction process when the teaching–learning-based optimization (TLBO) algorithm is used to solve the no-wait flow-shop-scheduling problem, the multi-strategy discrete teaching–learning-based optimization algorithm (MSDTLBO) is introduced. Considering the differences between individuals, the algorithm is redefined from the student’s point of view, using a basic integer-sequence encoding. To prevent the algorithm from falling into local optima and losing search accuracy, the population was divided into three groups according to the learning ability of the individuals, and different teaching strategies were adopted to achieve the effect of teaching according to their needs. To improve the destruction-and-reconstruction process with symmetry, an iterative greedy algorithm of destruction–reconstruction was used as the main body, and a knowledge base was used to control the number of meaningless workpiece destructions and to dynamically change the workpiece-selection method in the destruction process. Finally, the algorithm was applied to the no-wait flow-shop-scheduling problem (NWFSP) to test its practical application value. After comparison with six algorithms on twenty-one benchmark test instances, the experimental results showed that the algorithm is effective in solving the NWFSP.

1. Introduction

As the demand for production from manufacturing workshops increases, the problem of job scheduling is gradually developing towards practicality and diversification. Flow-shop scheduling is a classic scheduling model simplified from actual production lines, and it is at the core of operational research for advanced manufacturing systems, as well as management and optimization techniques. The problems it aims to solve can be divided into permutation flow-shop scheduling, no-wait flow-shop scheduling (NWFSP), and flow-shop scheduling with distributed constraints. The NWFSP is based on the basic flow-shop-scheduling problem, with the added constraints that the workpieces are processed in the same order on all the machines and that no waiting time is allowed between any two adjacent operations. Therefore, the corresponding NWFSP techniques are often used for the optimization of pharmaceutical, chemical, and steelmaking production and certain service systems’ processes, which are not allowed to be interrupted until all operations are completed due to the limitations of the process, the equipment, and other factors.
The use of NWFSP models for optimization can effectively improve the allocation of manufacturing resources and the production efficiency of enterprises, and research on this topic has important theoretical value and practical significance. Therefore, this problem has attracted many experts’ and scholars’ attention, and many corresponding meta-heuristic algorithms have been proposed. For example, Chen et al. [1] proposed a particle-swarm optimizer with crossover operation. By performing crossover on the personal historical best position of each particle, the population maintains diversity, and the technique is applicable to the NWFSP. Zhu et al. [2] proposed a new quantum-inspired cuckoo co-search algorithm, which generates initial solutions and searches for solutions through quantum solution formulation, quantum population evolution, and local neighborhood search to improve the quality of solutions. Zhao et al. [3] proposed a jigsaw-puzzle-inspired heuristic and constructed a waiting-time matrix to solve the NWFSP. Marichelvam et al. [4] proposed a hybrid-monkey-search algorithm and a multi-population strategy and used them for the NWFSP. Zhang et al. [5] proposed a dominant-population-based discrete Drosophila algorithm, with redefined olfactory and visual search components. Liu et al. [6] proposed a discrete krill-swarm algorithm, which adopts different evolutionary strategies for individuals with different fitness levels by comparing individuals with food. In their research, Zuo et al. [7] focused on green scheduling and sustainable manufacturing. They looked into the energy-efficient hybrid flow-shop-scheduling problem (EHFSP) with a variable speed constraint and proposed a new multi-population artificial bee-colony algorithm (MPABC). The algorithm was designed to minimize the makespan, total tardiness, and total energy consumption (TEC) all at once. Wang et al. [8] introduced an innovative algorithm called the multi-objective cellular memetic optimization algorithm (MOCMOA). This algorithm combined the benefits of a cellular structure for exploring globally and variable neighborhood search (VNS) for exploiting locally. The MOCMOA was then put to the test through experiments and compared to other multi-objective optimization methods. Zhang et al. [9] presented a dual-population genetic algorithm combined with Q-learning. The aim was to address the distributed hybrid-flow-shop-scheduling problems that exhibit machine symmetries. The proposed approach effectively minimized the maximum completion time and reduced the number of tardy jobs. Others employed particle swarm optimization (PSO) [10], hybrid ant-colony optimization [11], migratory bird optimization [12], whale optimization [13], the iterated local search algorithm [14], and other algorithms, and achieved certain results.
The teaching–learning-based optimization (TLBO) algorithm is a population-based optimization algorithm proposed by Rao et al. [15], which simulates the teaching process. Because of its simple operations, small number of parameters, and easy implementation, it is favored by many scholars and is commonly used in various optimization problems. For example, Zhang et al. [16] presented a more effective approach to detecting faults. They implemented an enhanced teaching-learning-based optimization algorithm to refine the classification outcomes at the decision level. To promote diversity within the population and improve exploration, they introduced two new phases: the worst-recombination phase and the cuckoo-search phase. The worst-recombination phase involved updating the poorest solution through a crossover recombination operation to avoid premature convergence. The cuckoo-search phase was leveraged to evade local optima. Shao et al. [17] presented a hybrid discrete optimization algorithm for the NWFSP that utilized a teaching-probabilistic learning mechanism. To imitate the teaching process, forward- and backward-insert methods were employed, and a powerful probabilistic model was established. Subsequently, each learner interacted with the probabilistic model via the crossover operator to gain knowledge. Shao et al. [18] used a Gaussian distribution to improve the teaching phase of the TLBO algorithm and search for a more effective space; they also proposed a learning phase of the communication with crossover and a local search based on simulated annealing to solve the NWFSP. Du et al. [19] proposed a TLBO algorithm, which improved the overall quality of the population by performing insert operations on the students with the worst grades, and introduced adaptive teaching factors to improve the global search ability of the algorithm.
Generally, for the NWFSP, the swarm-intelligence-optimization algorithm is used for global searching and the iterative greedy (IG) algorithm and other trajectory-based algorithms are used for local searching. For example, Zhao et al. [20] proposed a factorial-based PSO with a population-adaptation mechanism, which improved the PSO algorithm using the actual scheduling problem, and proposed a variable-neighbor-search strategy based on the insert-and-swap neighborhood structure to perform a local search around the current best solution. Qu et al. [21] proposed the flower-pollination algorithm based on the hormone-modulation mechanism (HMM-FPA). In this approach, the hormone-modulation factor was introduced to strengthen information sharing among the flowers and improve FPA cross-pollination to enhance the algorithm’s global search performance. A variable neighborhood-search strategy based on dynamic self-adaptive variable workpiece blocks was formulated to improve the local search quality. Ding et al. [22] designed the taboo-mechanism-improved iterated greedy (TMIIG) algorithm, in which the IG algorithm was modified using a taboo-based reconstruction strategy to enhance its exploration ability. Next, a powerful neighborhood-search method, which involved insert, swap, and double-insert moves, was applied to obtain better solutions from the previous step. Nagano et al. [23] proposed a new IG algorithm, which used destruction and reconstruction to evolve the population and introduced a local search to determine the optimal solution. Zhao et al. [24] implemented hybrid biogeography-based optimization with a variable neighborhood-search mechanism. Here, a hybrid migration operator, which combines the path-relink technique and the block-based self-improvement strategy, was designed to accelerate the algorithm’s convergence speed. Deng et al. [25] proposed a population-based IG algorithm to solve the NWFSP with the total flow-time criterion as the optimization goal, and perturbed inferior solutions by destroying the reconstruction to improve the accuracy of the algorithm.
Although the algorithms described above have produced good optimization results, they still suffer from some shortcomings when applied to discrete problems. On the one hand, in swarm-intelligence algorithms, a single evolutionary process is adopted; that is, the differences between individuals are not considered and the same evolutionary strategy is adopted for all the individuals. On the other hand, even though the IG can be combined with many algorithms and produce good results, previous research did not study specific destruction processes. For example, in [25,26,27], only the most basic destruction processes were adopted. Subsequently, in the discrete water wave optimization (DWWO) algorithm [28], destruction and reconstruction were used as the propagation operators; although changes were made to the destruction parameters, the destruction process did not change. Similarly, in [5], the IG algorithm IG_F was proposed for segmentation destruction and was introduced in the olfactory search phase of Drosophila; however, the destruction process remained essentially the same.
Aiming to compensate for the above-mentioned adoption of unified evolutionary methods and basic destruction and reconstruction methods for local search, in this paper, a discrete TLBO algorithm is proposed for solving the NWFSP. First, the coding-and-decoding scheme is analyzed and the differences between the individuals are considered. The TLBO algorithm is redefined from the perspective of the students and a multi-stage teaching-and-learning strategy is suggested. In the proposed approach, the students are taught in accordance with their aptitude and can learn from each other through the three strategies of mutual benefit, assistance, and supplementary lessons, depending on their different aptitude levels. Second, taking the destruction and reconstruction of the IG algorithm as a local search operation, the destruction process is analyzed from the perspectives of inhomogeneity and meaningfulness. A knowledge base is established according to the search experience of the dominant groups, and a knowledge-validity period is set to eliminate early experiences. Through these two operations, the meaningfulness of the destruction process is increased, inhomogeneity is controlled, and the solution accuracy of the algorithm is improved.
This paper is organized as follows. Section 2 introduces the basics of the NWFSP. Section 3 reviews and analyzes the destruction process. Section 4 introduces the basics of the TLBO algorithm. Section 5 describes our proposed multi-strategy discrete TLBO algorithm in detail. Section 6 reports the simulation experiments and analyzes the experimental results. Finally, Section 7 summarizes the paper and identifies future research directions.

2. No-Wait Flow-Shop-Scheduling Problem

The NWFSP is a very important class of scheduling problems that are widely found in steelmaking [29], time-sensitive networks [22], industrial applications [30], and manufacturing systems [31]. NWFSPs with more than two machines have been shown to be strongly NP-hard.
The NWFSP can be described as follows. It is assumed that there are n workpieces to be processed on m machines, and that all the workpieces are machined in the same order on each machine. At any time, a workpiece can only be processed by one machine, and each machine can process only a single workpiece. The processing of each workpiece must take place without any waiting time between consecutive operations, which constitutes the “no-wait” constraint, i.e., the start time of a workpiece on the first machine must be delayed, if necessary, so that there is no waiting time between the completion of one operation and the start of the next operation of the same workpiece on the following machine. Assuming that the preparation time is included, the processing time required for workpiece j (j = 1, 2, …, n) on machine k (k = 1, 2, …, m) is known, and the objective is to determine the optimal scheduling scheme based on a given performance index. The purpose of this paper is to determine the scheduling sequence π of the n workpieces that minimizes the total completion time.
The ternary-form expression of the problem is $F_m \mid no\text{-}wait \mid C_{\max}$, where $F_m$ indicates the flow-shop scheduling of the $m$ machines, $no\text{-}wait$ indicates that there are no interruptions in the work flow, and $C_{\max}$ indicates that the scheduling-optimization target is the total completion time. In the following, $\pi = [\pi(1), \pi(2), \ldots, \pi(n)]$ represents a scheduling sequence, $O_{j,k}$ corresponds to the processing operation of workpiece $j$ on machine $k$, $p_{j,k}$ represents the processing time required for $O_{j,k}$, and $C(j,k)$ represents the completion time of operation $O_{j,k}$. Next, $C(j,k)$ can be calculated using the following equations:
C(j_1, k) = \sum_{r=1}^{k} p_{j_1, r}, \quad k = 1, \ldots, m \quad (1)
C(j_i, k) = C(j_{i-1}, 1) + \sum_{r=1}^{k} p_{j_i, r} + \Delta(j_i), \quad i = 2, \ldots, n; \; k = 1, 2, \ldots, m \quad (2)
\Delta(j_i) = \max\Big\{ \max_{2 \le k \le m} \Big[ C(j_{i-1}, k) - C(j_{i-1}, 1) - \sum_{r=1}^{k-1} p_{j_i, r} \Big],\; 0 \Big\}, \quad i = 2, \ldots, n \quad (3)
C_{\max}(\pi) = C(j_n, m) \quad (4)
The completion times of the first workpiece in the sequence are given by (1), and those of the subsequent workpieces by (2). Due to the no-wait constraint, the necessary start-time delay between two adjacent workpieces is calculated using (3). The total completion time (makespan) is calculated using (4).
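As an illustration, Equations (1)–(4) can be evaluated directly for a given sequence. The following Python sketch computes the no-wait makespan; the function and variable names (nowait_makespan, seq, p) are illustrative and are not taken from the authors’ implementation.

import numpy as np

def nowait_makespan(seq, p):
    """Makespan C_max of a no-wait flow shop, following Equations (1)-(4).
    seq : job sequence (list of job indices)
    p   : processing-time matrix, p[j][k] = time of job j on machine k"""
    p = np.asarray(p, dtype=float)
    m = p.shape[1]
    c = np.cumsum(p[seq[0]])                    # Eq. (1): completion times of the first job
    for job in seq[1:]:
        delay = 0.0                             # Eq. (3): start-time delay of the current job
        for k in range(1, m):
            delay = max(delay, c[k] - c[0] - p[job][:k].sum())
        c = c[0] + delay + np.cumsum(p[job])    # Eq. (2): completion times of the current job
    return c[-1]                                # Eq. (4): C_max on the last machine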

3. Analysis of the Destroying Process

The IG algorithm is simple in structure and easy to implement, and the time required for its development is short. It can provide fast and high-quality solutions for minimum- and maximum-completion-time problems in NWFSP. Therefore, in this paper, the destruction and reconstruction operations of the IG algorithm are adopted as the local search method. However, through the in-depth study of this algorithm, it was determined that the existing workpiece-destruction process was completely random, which creates the problems of non-uniformity and meaninglessness.
Destruction non-uniformity: Due to the limitations of the iterations and the running time of the experiment, the destruction times of each workpiece always differed under the condition of a certain total destruction time. That is, random destruction results in non-uniformity in the sample space.
An indicative experiment was conducted to demonstrate the existence of non-uniformity. Considering that the destruction result is not affected by the number of machines, instances with various numbers of workpieces were each tested 20 times. The destruction-and-reconstruction process was performed 1000, 2000, and 5000 times, respectively. The destruction times of each workpiece were counted when d = 5, and the mean values of the standard deviations (MS) and the mean range (MR) were calculated. The specific results are shown in Table 1.
It can be seen from Table 1 that, for the same number of destructions, the non-uniformity decreased as the instance scale increased; however, for the same scale, the non-uniformity increased as the number of destructions increased.
Meaningless destruction: The destruction process can be divided into meaningful and meaningless destructions. A meaningless destruction occurs when removing the current workpiece from the sequence does not improve the current solution: after such a destruction and reconstruction, the maximum completion time is no shorter than (and may even be longer than) that of the previous iteration. That is, the more often a workpiece undergoes meaningless destruction, the smaller its contribution to improving the total completion time.
Figure 1 shows the destruction-and-reconstruction process of the Car7 test sample. In the left half of the figure, the initial sequence is {5, 2, 6, 0, 4, 1, 3}, the initial completion time is 9563, and it is assumed that the two randomly destroyed workpieces are {2, 1}. Next, the reconstruction rule is as follows: take the workpieces out of the removed-workpiece sequence in turn, traverse all the possible positions, and place each workpiece in the position that yields the minimum maximum completion time. After the left half of the sequence was destroyed and reconstructed, a new sequence {1, 5, 6, 0, 2, 4, 3} was generated, and the completion time changed from 9563 to 8801, which indicates that the destruction process was meaningful. On the other hand, in the right half of the figure, the two pieces randomly destroyed were {6, 5}. After the destruction and reconstruction, the sequence {3, 4, 5, 6, 2, 0, 1}, with a completion time of 7869, was replaced by sequence {3, 4, 5, 2, 6, 0, 1}, with a completion time of 7892. Instead of decreasing, the completion time actually increased. It can be concluded that if a workpiece is destroyed many times, as in the case of {6, 5} in the right half of the figure, and the completion time is not improved, meaningless destruction can be said to have occurred.
To summarize, the destruction process of the IG algorithm has two properties: non-uniformity and meaninglessness.
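For reference, the basic (unimproved) destruction–reconstruction step analyzed above can be sketched in Python as follows; it reuses nowait_makespan from the previous sketch, and the names are illustrative.

import random

def destruct_reconstruct(seq, p, d):
    """One basic IG destruction-reconstruction step: remove d random jobs (pi_R)
    and greedily re-insert each at the position giving the smallest makespan."""
    removed = random.sample(seq, d)                  # pi_R: destroyed jobs
    partial = [j for j in seq if j not in removed]   # pi_D: remaining jobs
    for job in removed:
        candidates = (partial[:k] + [job] + partial[k:] for k in range(len(partial) + 1))
        partial = min(candidates, key=lambda s: nowait_makespan(s, p))
    return partial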

4. Teaching–Learning-Based Optimization

The TLBO algorithm is a population-based optimization algorithm proposed by Rao et al. in 2011 [32], which improves the ability of each individual through the simulation of a teaching-and-learning process. Unlike other evolutionary algorithms, the student is taken as the individual, and the optimal individual is selected as the teacher. The algorithm mainly includes two phases, namely the teaching phase, in which the optimal individual leads the rest of the individuals, and the learning phase, in which the students learn from each other. The teaching phase guides the population to gradually converge towards the space where the optimal individual is located, while the learning phase gives the population a certain search ability that helps avoid premature convergence.
Classroom instruction is an integral part of the educational process and the primary means through which students learn knowledge. During classroom teaching, teachers impart knowledge to students in the class, and each student has different levels of acceptance of this knowledge. In general, schools examine students’ learning through examinations, the level of which reflects the students’ level, and the average grade of a class reflects its teacher’s teaching level. In the teaching process, teachers make different teaching plans according to the level of the student group, and students interact with other students in order to improve themselves. By simulating this process, the complete teaching process is compared to the evolution of an intelligent optimization algorithm. The individual students are compared to the individuals in the population, and the student performance is compared to the fitness value of the individuals to build a complete optimization system, which is the teaching-optimization algorithm.

4.1. Teaching Phase

The teaching phase involves the updating of the solution based on the difference between the teacher’s level and the average grade of the student population. That is, during each iteration, the teacher attempts to move the average grade to their own level. This is expressed using the following equation:
X_{i,\mathrm{new}} = X_i + rand \times (X_{teacher} - T_F \times X_{mean}) \quad (5)
X_{mean} = \frac{1}{N_P} \sum_{i=1}^{N_P} X_i \quad (6)
where rand is a random number between 0 and 1, X_mean is the average grade of the students, and T_F is the teaching factor, whose value is either 1 or 2, as shown in (7):
T_F = \mathrm{round}\,[1 + rand(0, 1)] \quad (7)
After each iteration, if f(X_{i,new}) < f(X_i), a solution update takes place.
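A minimal Python sketch of the continuous teacher-phase update of Equations (5)–(7) is shown below (NumPy-based; names such as teacher_phase are illustrative, and a smaller fitness value is assumed to be better).

import numpy as np

def teacher_phase(X, fitness):
    """One teacher-phase sweep of the basic TLBO, Eqs. (5)-(7).
    X : (NP, D) array of learners; fitness : callable, smaller is better."""
    teacher = X[np.argmin([fitness(x) for x in X])]
    x_mean = X.mean(axis=0)                                   # Eq. (6)
    for i in range(len(X)):
        tf = round(1 + np.random.rand())                      # Eq. (7): TF is 1 or 2
        x_new = X[i] + np.random.rand(X.shape[1]) * (teacher - tf * x_mean)  # Eq. (5)
        if fitness(x_new) < fitness(X[i]):                    # greedy acceptance
            X[i] = x_new
    return X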

4.2. Learning Phase

The learning phase represents students’ learning through interaction; that is, each student’s state is updated according to the difference between a randomly selected student and themselves. If a student knows less than another student, they can learn new knowledge from them. For each student X_i, another student X_j (i ≠ j) is randomly selected, and the update process is as follows:
X_{i,\mathrm{new}} = \begin{cases} X_i + rand \times (X_i - X_j), & f(X_i) < f(X_j) \\ X_i + rand \times (X_j - X_i), & f(X_j) < f(X_i) \end{cases} \quad (8)
After each iteration, if f(X_{i,new}) < f(X_i), the student’s state is updated.
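The learner-phase update of Equation (8) can be sketched analogously (again assuming minimization; learner_phase is an illustrative name):

import numpy as np

def learner_phase(X, fitness):
    """One learner-phase sweep of the basic TLBO, Eq. (8)."""
    NP, D = X.shape
    for i in range(NP):
        j = np.random.choice([k for k in range(NP) if k != i])   # a different learner
        if fitness(X[i]) < fitness(X[j]):
            x_new = X[i] + np.random.rand(D) * (X[i] - X[j])
        else:
            x_new = X[i] + np.random.rand(D) * (X[j] - X[i])
        if fitness(x_new) < fitness(X[i]):                        # greedy acceptance
            X[i] = x_new
    return X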
The pseudo-code of the basic TLBO algorithm is shown in Algorithm 1.
Algorithm 1: Teaching–Learning-Based Optimization
Input: population size NP, maximum number of iterations Tmax, dimension D.
Output: the global best Xgbest
Initialize NP (number of learners) and D (dimension)
Initialize learners and evaluate them
While (iter < Tmax) do
   Choose the best learner as teacher
   Calculate the mean Xmean of all learners
   For each learner Xi
      //Teacher phase//
      TF = round(1 + rand(0, 1))
      Update the learner according to (5)
      Evaluate the new learner Xinew
      Accept Xinew if it is better than the old one Xi
      //Learner phase//
      Randomly select another learner Xj which is different from Xi
      Update the learner according to (8)
      Evaluate the new learner Xinew
      Accept Xinew if it is better than Xi
   End for
End while

4.3. Algorithm Implementation

The description of TLBO above shows that the main steps of TLBO are initialization, the calculation of the class average, the selection of the teacher, the teaching phase, the learning phase, and the acceptance of solutions. The pseudo-code of the basic TLBO algorithm is shown in Algorithm 1.

5. Multi-Strategy Discrete Teaching–Learning-Based Optimization Algorithm

5.1. Coding and Initialization

Although the smallest-position-value (SPV) rule can be used to map a point in an N-dimensional continuous space to a sequence of N integers, some problems remain in the conversion of continuous positions into discrete sequences. Let us assume that there are two completely different points, A and B, in the space, where A = {2.3, −0.6, 0.3, 1.9} and B = {1.9, −1.7, 0.7, 1.2}. If the SPV rule is adopted here, both A and B map to the same workpiece sequence {2, 3, 4, 1}. This inefficient mapping leads to a decrease in the convergence speed of the TLBO algorithm. In order to solve this problem, the basic integer-sequence-coding method is adopted, which has the advantages of simplicity, high efficiency, and ease of understanding.
In the discrete TLBO algorithm, each individual student (i.e., a candidate scheduling solution) is encoded as a sequence of n unique workpieces, whose order represents the processing sequence. For example, the processing sequence corresponding to the individual {1, 3, 4, 2, 5} is workpiece 1, workpiece 3, workpiece 4, workpiece 2, workpiece 5. This coding method makes it convenient to calculate the fitness of the scheduling solution corresponding to the individual (that is, the completion time). In order to take into account both the quality and the diversity of the solutions, in this paper, the initialization method proposed by Zhang et al. [33] is adopted, in which the first solution is generated by the NEH rule and the remaining solutions are generated by mutating the NEH solution through a permutation-variation method. In this method, the set of mutation bits is {5, 10, 15}, and the number of mutation bits is selected randomly from the set. The permutation mutation (PM) is shown in Figure 2.
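The following Python sketch illustrates the integer-sequence encoding and the NEH + PM initialization described above. The NEH construction is standard; the exact permutation-mutation operator is defined in Figure 2, so the shuffle-based variant below is only an assumption, and nowait_makespan is reused from the earlier sketch.

import random

def permutation_mutation(seq, n_bits=5):
    """PM (assumed form): shuffle the jobs at n_bits randomly chosen positions."""
    seq = list(seq)
    pos = random.sample(range(len(seq)), min(n_bits, len(seq)))
    vals = [seq[i] for i in pos]
    random.shuffle(vals)
    for i, v in zip(pos, vals):
        seq[i] = v
    return seq

def neh_sequence(p):
    """NEH rule: insert jobs, ordered by decreasing total processing time,
    at the position that minimizes the partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for job in order[1:]:
        candidates = (seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1))
        seq = min(candidates, key=lambda s: nowait_makespan(s, p))
    return seq

def init_population(p, NP, mut_bits=(5, 10, 15)):
    """First individual from NEH; the rest are PM mutations of the NEH solution."""
    base = neh_sequence(p)
    return [base] + [permutation_mutation(base, random.choice(mut_bits)) for _ in range(NP - 1)]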

5.2. Individual Learning Ability

Students of different levels exhibit different behavioral characteristics in the teaching and the mutual-learning phases. Therefore, in this paper, the individuals’ learning ability is represented by the difference bits (DBs) of the workpiece ranking between the individual and the teacher, defined as follows:
DB = \sum_{i=1}^{n} \mathrm{sign}\big( \big| \pi_{teacher, i} - \pi_{student, i} \big| \big) \quad (9)
where sign(·) is the signum function. This equation counts the number of positions at which the student’s discrete sequence differs from the teacher’s. A smaller DB value corresponds to a smaller difference between the student and the teacher, that is, a higher learning ability.
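In code, the DB value of Equation (9) is simply a count of the differing positions (an illustrative helper):

def learning_ability(student, teacher):
    """DB of Eq. (9): number of positions where the student's sequence
    differs from the teacher's; a smaller DB means higher learning ability."""
    return sum(1 for a, b in zip(student, teacher) if a != b)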

5.3. Teaching Phase

In the teaching phase, the traditional TLBO algorithm considers the difference between the teacher and the average grade of the class from the perspective of the whole class and the teacher’s ability, but ignores the differences between individual students; that is, the fact that students with different levels of learning ability should be taught differently. Based on these points, in this paper, the population is divided into three groups, depending on the individuals’ learning ability. The first group (denoted as F) represents the students with the highest learning ability and constitutes 20% of the total number of students. The group of students with medium learning ability comprises the middle 60% and is denoted as M, while the last 20%, denoted as L, form the group of students with the lowest learning ability. The teacher adopts a different teaching strategy for each group of students in accordance with their aptitude. The specific process is as follows:
x_i^{t+1} = \begin{cases} PM(x_i^t), & x_i \in F \ \text{and} \ DB = 0 \\ TP_1 \ \text{or} \ TP_2(x_i^t, x_{teacher}), & x_i \in M \\ PM(x_{teacher}), & x_i \in L \end{cases} \quad (10)
(1) For the optimization group (group F), individuals that differ from the teacher are retained. When DB = 0, the workpiece sequences of the individual and the teacher are identical; in order to avoid a meaningless teaching process, based on the idea of elite deduplication, the PM method is used to generate a new individual, and inferior solutions are allowed.
(2) For student group M, two-point inner crossing or two-point outer crossing is adopted to convey the teacher’s knowledge. The two-point inner- and outer-crossing techniques are shown in Figure 3 and Figure 4, respectively.
(3) For the weakest group of student individuals, L, re-initialization is performed, the purpose and effect of which are similar to those of a population-reconstruction strategy.
It was verified experimentally that the PM operations in the teaching phase achieved the best effect when the number of variation bits was set to 5.
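A Python sketch of this multi-group teaching phase is given below. The exact two-point inner and outer crossings are defined in Figures 3 and 4; the order-crossover-style operator used here is only an assumed approximation for illustration, and permutation_mutation is reused from the initialization sketch.

import random

def two_point_cross(student, teacher, inner=True):
    """Assumed two-point crossover: keep the teacher's segment inside (inner)
    or outside (outer) two random cut points and fill the rest from the student."""
    n = len(student)
    a, b = sorted(random.sample(range(n), 2))
    keep = teacher[a:b + 1] if inner else teacher[:a] + teacher[b + 1:]
    rest = [j for j in student if j not in keep]
    if inner:
        return rest[:a] + keep + rest[a:]
    return keep[:a] + rest + keep[a:]

def teaching_phase(F, M, L, teacher):
    """Multi-group teaching step of Eq. (10): F keeps its members (PM only when
    identical to the teacher, i.e., DB = 0), M crosses with the teacher,
    L is re-initialized from mutated copies of the teacher."""
    new_F = [permutation_mutation(x) if x == teacher else x for x in F]
    new_M = [two_point_cross(x, teacher, inner=random.random() < 0.5) for x in M]
    new_L = [permutation_mutation(teacher) for _ in L]
    return new_F, new_M, new_L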

5.4. Mutual-Learning Phase

In order to improve the efficiency of mutual learning, it is necessary to introduce some changes compared to the traditional approaches. Considering that the evolution of individuals always proceeds in the direction of improvement, that is, students always learn from better students, in this paper, different strategies are adopted so that students learn from each other according to their different learning abilities. The improved methods are as follows:
x_i^{t+1} = \begin{cases} x_i^t \otimes x_j^t \ \text{and} \ x_j^t \otimes x_i^t, & x_i \in F \ \text{and} \ i \ne j \\ x_i^t \otimes x_k^t, & x_i \in M \ \text{and} \ x_k \in F \\ x_i^t \otimes x_{teacher}, & x_i \in L \end{cases} \quad (11)
Similar to the improved teaching phase, the students are divided into the groups F, M, and L. Here, ⊗ denotes the random selection of either the two-point inner crossing or the two-point outer crossing.
(1) For each F_i, a non-repeating individual F_j is chosen for mutually beneficial learning; that is, F_i ⊗ F_j and F_j ⊗ F_i;
(2) For each M_i, an individual F_k is drawn from F for assistance learning, which is one-way learning M_i ⊗ F_k;
(3) For the weakest group of individuals, L, the teacher’s after-class remedial behavior for the weakest students is simulated, and remedial learning L_i ⊗ X_teacher is carried out to accelerate the convergence of the weakest individuals.
After each mutual-learning event, if f(X_new) < f(X_old), the individual is updated.
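A corresponding sketch of the mutual-learning phase of Equation (11), with greedy acceptance, is shown below; it reuses two_point_cross and nowait_makespan from the earlier sketches, and the ⊗ operator is again approximated by the assumed crossover.

import random

def mutual_learning_phase(F, M, L, teacher, p):
    """Multi-strategy mutual learning of Eq. (11) with greedy acceptance."""
    def cross(x, y):
        return two_point_cross(x, y, inner=random.random() < 0.5)   # random inner/outer
    def accept(old, new):
        return new if nowait_makespan(new, p) < nowait_makespan(old, p) else old
    # (1) mutually beneficial learning inside F
    new_F = []
    for i, x in enumerate(F):
        j = random.choice([k for k in range(len(F)) if k != i])
        new_F.append(accept(x, cross(x, F[j])))
    # (2) assistance learning: each M_i learns from a member of F
    new_M = [accept(x, cross(x, random.choice(F))) for x in M]
    # (3) remedial learning: each L_i learns directly from the teacher
    new_L = [accept(x, cross(x, teacher)) for x in L]
    return new_F, new_M, new_L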

5.5. Local Search

The local-search method directly determines the optimization direction and the degree of solution optimization. Aiming at the NWFSP, in this paper, the destruction–reconstruction method of the IG algorithm is used as a local-search algorithm. At the same time, in order to avoid non-uniformity and meaningless solutions, the history of the destruction-and-reconstruction process is collected in a knowledge base, and relevant information is filtered to further improve the destruction process. Finally, considering the continuous iteration of the sequence, relevant parameters are established to ensure the validity of the knowledge. The specific improvements are as follows:
(1) In the iterative process, there are some workpieces whose position changes have little effect on improving the completion time. By reducing the number of destructions of such workpieces, the inhomogeneity of the destruction process is reduced and more meaningful destructions take place.
(2) Performing a local search for all the individuals in the population consumes substantial computational time. Since the evolutionary process uses the students in group F as the dominant group in the population, a local search is performed only on the students in group F. During the search process, a knowledge base is established by integrating the destruction-and-reconstruction experience of group F’s students. The students obtain information from the knowledge base, adjust their own destruction process, and determine whether each workpiece destruction is meaningful after the local search. The evaluation results are then fed back as new knowledge to the knowledge base.
(3) Knowledge in different periods is generated from different sequences. In order to avoid the impact of useless early-phase knowledge on the later phases’ destruction process, the historical knowledge from the early phases is gradually eliminated during the iterative process, and only the knowledge generated by the T most recent iterations is retained. The parameter T is called the validity period of the knowledge.
Knowledge-base establishment and updates: Knowledge refers to a workpiece and its corresponding number of meaningless destructions. It can be expressed using a key–value structure, in which the key corresponds to the workpiece and the value to the number of meaningless destructions. The update rule is as follows: after each excellent individual in group F is destroyed and reconstructed, if the result satisfies C_max(π′) ≥ C_max(π), then the number of meaningless destructions corresponding to each destroyed workpiece increases by one; otherwise, no update is performed. The elimination rule is as follows: in each iteration, when the knowledge exceeds the validity period, the earliest knowledge is eliminated and the number of meaningless destructions corresponding to each affected workpiece in the knowledge base is reduced by one.
The complete destruction-and-reconstruction process is presented below. Before being destroyed, each excellent individual in group F first obtains information from the knowledge base and sorts the workpieces according to their number of meaningless destructions. Next, the 25% of workpieces with the fewest meaningless destructions are retained as candidates, and d workpieces are selected from these screened workpieces for destruction. At this point, the complete workpiece sequence π is divided into two parts, where π_R comprises the d removed workpieces and π_D comprises the remaining n − d workpieces. The reconstruction process is performed until π_R is empty and involves extracting the workpieces from π_R in order, testing all the positions in the partial solution π_D, and inserting each workpiece into the position that produces the minimum completion time. After the reconstruction, if the result satisfies C_max(π′) < C_max(π), then the new solution is retained; otherwise, it is discarded and the original solution is kept. The new information is fed back to the knowledge base according to the update rules.
The complete local-search process is represented using the pseudo-code in Algorithm 2.
Algorithm 2: Procedure for Local Search
Establish a knowledge base (KB) in the form of key–value pairs
Create a List to record the meaninglessly destroyed jobs of the last T generations
While (iter < Tmax) do
   Create a subList to record the current generation
   For each student in group F
      Sort the n jobs by their KB value in increasing order
      Select the first 1/4 of the jobs with the minimum values
      (πD, πR) := Destruction(selected jobs)
      π′ := Construction(πD, πR)
      If Cmax(π′) < Cmax(π) then
         π = π′
      Else
         Increase the value of all jobs in πR by one in the KB
         Add πR to the subList
      End if
   End for
   Add the subList to the List
   If (iter > T) then
      Decrease the value of all jobs in the first subList by one in the KB
      Erase the first subList from the List
   End if
End while
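To make Algorithm 2 concrete, the following Python sketch implements the knowledge-base-guided destruction–reconstruction for the students in group F, including the validity period T. It reuses nowait_makespan; the 25% retention, the key–value knowledge base, and the elimination rule follow Section 5.5, while the remaining details (data structures and names) are illustrative assumptions.

import random
from collections import defaultdict, deque

def make_kb_local_search(d=5, T=30):
    """Returns a local-search step that keeps its knowledge base between calls."""
    kb = defaultdict(int)       # job -> number of meaningless destructions (key-value KB)
    history = deque()           # one subList of meaningless jobs per generation

    def step(F_group, p):
        sub_list = []
        for idx, seq in enumerate(F_group):
            # keep the 25% of jobs with the fewest meaningless destructions as candidates
            candidates = sorted(seq, key=lambda j: kb[j])[:max(d, len(seq) // 4)]
            removed = random.sample(candidates, d)                    # pi_R
            new_seq = [j for j in seq if j not in removed]            # pi_D
            for job in removed:                                       # greedy re-insertion
                options = (new_seq[:k] + [job] + new_seq[k:] for k in range(len(new_seq) + 1))
                new_seq = min(options, key=lambda s: nowait_makespan(s, p))
            if nowait_makespan(new_seq, p) < nowait_makespan(seq, p):
                F_group[idx] = new_seq                                # meaningful: accept
            else:                                                     # meaningless: update KB
                for job in removed:
                    kb[job] += 1
                sub_list.extend(removed)
        history.append(sub_list)
        if len(history) > T:                                          # knowledge-validity period
            for job in history.popleft():
                kb[job] -= 1
    return step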

5.6. Algorithm Flow

The process of the multi-strategy discrete TLBO (MSDTLBO) algorithm is as follows:
Step 1: Parameter initialization. The values of the control parameters NP (the population size), PUM (the number of variation bits), iter (the current iteration), Tmax (the maximum number of iterations), and T (the knowledge-validity period) are defined.
Step 2: Population initialization. Individuals are mapped into scheduling sequences through workpiece encoding, and a feasible initial workpiece-sequence set π is generated using the NEH + PM initialization method proposed in Section 5.1. The minimum makespan of the scheduling sequence corresponding to each individual is calculated.
Step 3: Population grouping. The population is divided into the F, M, and L groups based on the learning ability DB of the individuals, calculated using (9).
Step 4: Teaching-and-learning phase. Using the discrete operations corresponding to (10) and (11), the teaching and learning phases are implemented, the population is updated, and the completion time of each new workpiece sequence is evaluated.
Step 5: Local-search phase. Using the improved IG local-search algorithm (see Algorithm 2), the individuals in group F are destroyed and reconstructed, and excellent individuals are obtained to replace those in F .
Step 6: The termination condition is evaluated. If it is satisfied, the algorithm terminates and the optimal scheduling sequence is returned as the final solution. Otherwise, iter = iter + 1 and the algorithm returns to Step 3.
The pseudo-code for the MSDTLBO’s specific procedures is represented in Algorithm 3.
Algorithm 3: Multi-Strategy Discrete Teaching–Learning-Based Optimization
Input: the set of all jobs π = {J1, J2, …, Jn}
Output: Cmax(πbest) and the corresponding job sequence πbest
Initialize NP, PUM, iter, Tmax, T
Initialize the population: use the improved NEH method based on permutation variation to generate the initial job-sequence collection
Evaluate each candidate solution in the population and choose the best job sequence as xteacher
While (iter < Tmax) do
   Calculate the learning capacity (DB) of each individual
   Divide the population into the three groups F, M, and L
   For each individual xi in the population
      //Teacher phase//
      Update xi according to (10)
      //Learner phase//
      Update xi according to (11)
   End for
   LocalSearch (Algorithm 2)
   Evaluate each candidate solution in the population and choose the best job sequence as xteacher (πbest)
   iter = iter + 1
End while

6. Simulation Experiment and Result Analysis

6.1. Complexity Analysis

The population size is denoted as P, the maximum number of iterations as Tmax, the number of workpieces as n, the number of machines as m, the validity period of the knowledge base as T, and the number of destroyed workpieces as d.
(1) Population initialization: this phase is performed only once; NEH is used to generate one solution, and the permutation-based mutation method is used to generate the remaining P − 1 solutions. The NEH method has a complexity of O(n²m), so the overall complexity of this phase is O(n²m + P − 1).
(2) Population grouping: the population is divided into the F, M, and L groups according to the DB value. The overall complexity is O(Tmax·n·P²).
(3) Teaching phase: the strategy of teaching students according to their aptitude is adopted. Group F only performs permutation variation on the teacher, so its complexity is O(1), while the complexity for groups M and L is O(4P/5). The overall complexity is O(Tmax·(4P/5 + 1)).
(4) Mutual-learning phase: the multi-strategy mutual-learning method is adopted. The complexity of A learning from B is O(1), so the overall complexity is O(Tmax·P).
(5) Local-search phase: the complexity of the destruction process is O(1), while that of the reconstruction process is O(nd). Since the feedback and elimination of knowledge do not involve the calculation of the maximum completion time, their complexity is negligible. Since only members of group F perform the local search, the overall complexity is O((P/5)·Tmax·(dn + 1)).
In conclusion, the overall complexity of the MSDTLBO algorithm is O(Tmax·n·P²).

6.2. Experimental Design and Evaluation Indicators

In order to verify the performance of the MSDTLBO algorithm, the NWFSP was used as the experimental case. The test dataset was chosen from the Rec benchmark-problem set [34] and the Ta benchmark-problem set, both of which are available in OR-LIBRARY. The simulation environment was a CentOS 8.2, 64-bit system, with a one-core CPU and 2 GB of RAM. The algorithm was implemented in Java. The parameter values were selected as follows: the population size was set to 40, the number of destroyed workpieces d was set to 5, and the knowledge-validity period T was determined experimentally, as explained in the next sub-section. The average relative error (ARE), average relative deviation (ARD), and best relative deviation (BRD) were used as evaluation indicators to measure the quality of the experimental results. The calculation equations for the evaluation indices above were as follows:
ARE = \frac{\bar{C} - C^*}{C^*} \times 100\%
BRD = \frac{C_{\min} - C^*}{C^*} \times 100
ARD = \frac{1}{R} \sum_{r=1}^{R} \frac{C_r - C^*}{C^*} \times 100
Specifically, C* is the best-known solution, provided by Lin et al. [35]. In this study, each test case was executed independently 30 times for comparison; therefore, the number of independent runs R was set to 30. C_r represents the solution of the r-th run, C̄ is the average of the solutions obtained over the R runs, and C_min is the best (minimum) solution found in the R runs.
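The three indicators can be computed directly from the R run results; a small illustrative helper (evaluation_metrics) is sketched below.

import numpy as np

def evaluation_metrics(results, c_star):
    """ARE (%), BRD, and ARD of R independent runs against the best-known makespan c_star."""
    c = np.asarray(results, dtype=float)
    are = (c.mean() - c_star) / c_star * 100      # average relative error (%)
    brd = (c.min() - c_star) / c_star * 100       # best relative deviation
    ard = np.mean((c - c_star) / c_star) * 100    # average relative deviation
    return are, brd, ard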

6.3. Parameter Analysis

The local-search method directly determines the optimization direction and degree of optimization of the solution. In this paper, the destruction–reconstruction operation is used as a local-search method, and the knowledge base and knowledge-validity period are used to reduce the meaninglessness of the destruction process. The knowledge-validity period directly determines the impact of historical knowledge on the search results. If the validity period is overly long, the relatively meaningless knowledge generated in the early phases of the search is retained and may even hinder the evolution of the current sequence. If the validity period of knowledge is excessively short, when the sequence does not change, the effective knowledge generated by the sequence in the search process is discarded; that is, insufficient accumulation of experience may occur. To determine the optimal knowledge-validity period’s value, T was set to {20, 30, 40, 50}, and Rec41 (with 8437 as the optimal solution to calculate the ARE) was selected for the experiment. The maximum number of iterations was set to 1000, and the operation was performed 20 times. The experimental results are displayed in a horizontal value-trend graph, as shown in Figure 5.
As shown in Figure 5, when T = 30, the average relative error generated during 1000 iterations was the smallest, and better results were obtained compared to the other values. Therefore, T = 30 was selected as the optimal value to balance the elimination of previous knowledge and the accumulation of existing knowledge.

6.4. Local-Search Performance Comparison

To verify the impact of the local search on the overall performance of the algorithm, in this study, the performance of the MSDTLBO algorithm without local search (denoted as NDTLBO) was compared with that of the MSDTLBO itself. Here, Rec41, which is the largest-scale problem in the Rec problem set, was selected for the comparison experiments. In order to ensure the effectiveness of the comparison, each group of experiments was conducted 20 times, and the maximum number of iterations was set to 3000. The experimental results are shown in Figure 6. The mean value of the MSDTLBO algorithm was much lower than that of the NDTLBO, and the intervals of the two algorithms did not overlap, indicating that the two algorithms were significantly different. In addition, the overall interval of the MSDTLBO is also narrower than that of the NDTLBO, which shows that the MSDTLBO algorithm was more stable when the local search was employed. Therefore, Figure 6 fully illustrates the effectiveness of the improved local search.

6.5. Algorithm Comparison

In the experiments, the Rec-benchmark problem and the Ta-benchmark problem were selected for testing. The Rec-benchmark problem set had a total of seven workpiece/machine (N × M) combinations, and each combination had three test examples. The Ta-benchmark-function-problem set had 12 different workpiece/machine (N × M) combinations, each with 10 test examples. The algorithm in this paper (MSDTLBO) was compared with advanced algorithms, such as the population adaptation PSO algorithm [20] (PAPSO), the flower-pollination algorithm based on HMM-FPA [21], the DWWO algorithm [25], the taboo-mechanism-improved iterated greedy algorithm [22] (TMIIG), the discrete PSO algorithm [10] (DPSOVND), and the improved iterated greedy algorithm [30] (IIGA). For the Rec instance, the experimental results are shown in Table 2, and the best results for each instance are given in bold. The BRD-and-ARD-comparison results are shown in Figure 7 and Figure 8 (since the BRD and ARD results for Rec1–17 were both 0, Figure 7 and Figure 8 only show the results after Rec19).
In Table 2, Figure 7 and Figure 8, it can be seen that most of the BRD values obtained using the MSDTLBO were better than or at least equal to those obtained using the other comparison algorithms. Although the BRD values obtained on the Rec37, Rec39, and Rec41 instances were not optimal, the algorithm still showed good performance, and the average BRD of the MSDTLBO was significantly better than that of the other comparison algorithms, except for the HMM-FPA. With respect to the ARD, the MSDTLBO algorithm did not obtain optimal values on the Rec37 and Rec39 instances, while for the rest of the instances its average ARD value was better than that of the other algorithms. These experimental results show the excellent performance of the MSDTLBO in the NWFSP-application use case. For example, when dealing with Rec41, although its BRD value was not optimal, its ARD value was optimal. These results fully demonstrate that the MSDTLBO algorithm is stable and effective in solving large problem instances.
The Gantt charts of Rec01 and Rec17 are shown in Figure 9 and Figure 10. It can be seen that the sequence corresponding to Rec01 was {5, 15, 13, 11, 1, 14, 12, 10, 6, 19, 3, 16, 0, 4, 9, 8, 7, 17, 2, 18}, and that the corresponding completion time was 1526. The corresponding sequence of Rec17 was {19, 11, 17, 1, 16, 12, 18, 3, 13, 6, 2, 9, 10, 7, 0, 5, 8, 15, 14, 4}, and the corresponding completion time was 2587. In addition, the MSDTLBO algorithm can also obtain the optimal solution for the large-scale Rec problem set. Considering the publication-space limitations for this paper, only some examples of scheduling sequences are provided. For example, the sequence of Rec29 was {14, 28, 24, 25, 1, 6, 10, 3, 22, 5, 9, 11, 29, 0, 21, 7, 8, 19, 15, 16, 13, 23, 12, 4, 26, 27, 2, 20, 17, 18}, with a completion time of 3291. The sequence of Rec33 was {46, 30, 2, 36, 24, 42, 31, 27, 40, 41, 4, 35, 6, 10, 25, 20, 7, 12, 47, 19, 48, 15, 14, 44, 13, 18, 23, 39, 26, 17, 43, 1, 21, 33, 38, 37, 8, 28, 45, 32, 49, 9, 0, 5, 29, 11, 34, 3, 16, 22}, with a completion time of 4424.
To further demonstrate the performance of the MSDTLBO in solving large-scale problems, the seven comparison algorithms were tested on Taillard instances. Table 3 summarizes the computational results of the seven algorithms compared, and the best result for each instance is given in bold.
In Table 3, it can be seen that the BRD value of the MSDTLBO was better than those of the PAPSO, HMM-FPA, DWWO, TMIIG, DPSOVND, and IIGA, both for each example and on average. At the same time, the average ARD value obtained using the MSDTLBO was smaller than that of the other advanced algorithms. Figure 11 shows a line graph of the ARD values for 12 different combinations of Taillard instances, from which it can be seen more clearly that in each n × m combination, the MSDTLBO’s ARD values outperformed those of the other algorithms for all the combination instances. The results show that the MSDTLBO exhibits a better performance when solving larger NWFSP instances.
At the same time, to prove that for large instances the MSDTLBO also outperforms the other algorithms under the same running times, the average convergence curves of the seven comparison algorithms on TA60, TA80, TA100, and TA120 are shown in Figure 12, Figure 13, Figure 14 and Figure 15. It can be seen that, for the same running time, the MSDTLBO had the fastest convergence among the seven compared algorithms when dealing with Taillard instances of different sizes.
Finally, Friedman’s test was used as a nonparametric approach to test the statistical validity of the experimental results and to rank the algorithms. In the Friedman test, all the comparison algorithms were treated as distinct factors. Table 4 shows the corresponding results, in which the lower the mean rank of the algorithm, the better its performance. Figure 16 shows the results of the Friedman test with the MSDTLBO as the control algorithm. In Table 4, it can be clearly seen that the mean rank value of the MSDTLBO algorithm was 2.43, the lowest among the seven algorithms, making it the best-ranked. The algorithm thus significantly outperformed the other algorithms in the comparison.
Through the comparison and verification of the experiments above, it can be concluded that the algorithm proposed in this paper demonstrated an excellent performance in the NWFSP problem. The Gantt chart of the optimal solution of the NWFSP_Rec17 clearly shows the characteristics of the no-waiting constraint; that is, the machining process of the workpiece is continuous and uninterrupted until all the processes of the workpiece are finished.

7. Conclusions and Future Work

This study addressed the problems of the TLBO algorithm in solving the no-wait flow-shop-scheduling problem, namely the single evolutionary approach, decreasing diversity, and the inhomogeneity and meaninglessness of the destruction process. The coding and decoding methods were redefined, and the basic integer-sequence-coding method proved simple and efficient. In the teaching stage, the population was divided into three groups according to the learning abilities of the individuals, and different teaching strategies were adopted to achieve the effect of teaching according to the students’ needs, preventing the algorithm from falling into local optima and improving its search accuracy. In the local-search process, the destruction process of the IG algorithm was analyzed in terms of non-uniformity and meaninglessness; the destruction–reconstruction of the iterative greedy algorithm was used as the main body, a knowledge base with a finite knowledge-validity period was used to restrict the selection of destroyed workpieces, and the workpiece-selection method in the destruction process was dynamically changed to improve the destruction-and-reconstruction process. Finally, experiments showed that the proposed algorithm has advantages in terms of its evolutionary approach and solution diversity when applied to the NWFSP, and its effectiveness on the NWFSP was verified.
In future work, we plan to delve into more intricate shopfloor-scheduling problems, such as flexible-job-shop scheduling and distributed-shop-scheduling environments. Moreover, extensive research will be conducted on the meta-heuristic algorithm to effectively tackle the workshop-scheduling issue.

Author Contributions

Conceptualization, J.L., X.G. and Q.Z.; methodology, J.L., X.G. and Q.Z.; validation, X.G.; formal analysis, J.L. and Q.Z.; data curation, Q.Z.; writing—original draft preparation, X.G.; writing—review and editing, J.L.; visualization, X.G.; supervision, J.L. and Q.Z.; project administration, J.L. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China 62063021 (Research on HMS Scheduling Optimization and Control and Intelligence System in Manufacturing IoT Environment), National Natural Science Foundation of China 62162040 (Research on Terrain Representation and Dissemination-Adaptive-Scheduling Strategy for Large-Scale Social Network Influence Adaptability) and Nature Foundation of Gansu Province 20JR10A573 (Research on Intelligent Perception and Behavior Understanding Method Based on Audio and Video Fusion).

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Tao, L. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169.
2. Zhu, H.; Luo, N.; Li, X. A quantum-inspired cuckoo co-evolutionary algorithm for no-wait flow shop scheduling. IET Collab. Intell. Manuf. 2021, 3, 105–118.
3. Zhao, F.; He, X.; Zhang, Y.; Lei, W.; Ma, W.; Zhang, C.; Song, H. A jigsaw puzzle inspired algorithm for solving large-scale no-wait flow shop scheduling problems. Appl. Intell. 2020, 50, 87–100.
4. Marichelvam, M.; Tosun, Ö.; Geetha, M. Hybrid monkey search algorithm for flow shop scheduling problem under makespan and total flow time. Appl. Soft Comput. 2017, 55, 82–92.
5. Zhang, Q.; Yu, Z. Discrete fruit fly optimization algorithm based on dominant population for solving no-wait flow shop scheduling problem. Comput. Integr. Manuf. Syst. 2017, 23, 609–615.
6. Liu, C.; Jian, Z.; Fu, W. A Discrete Krill Herd Algorithm for the No-wait Flow Shop Scheduling Problem. J. Syst. Simul. 2020, 32, 1051.
7. Zuo, Y.; Fan, Z.; Zou, T.; Wang, P. A novel multi-population artificial bee colony algorithm for energy-efficient hybrid flow shop scheduling problem. Symmetry 2021, 13, 2421.
8. Wang, Y.; Peng, W.; Lu, C.; Xia, H. A Multi-Objective Cellular Memetic Optimization Algorithm for Green Scheduling in Flexible Job Shops. Symmetry 2022, 14, 832.
9. Zhang, J.; Cai, J. A Dual-Population Genetic Algorithm with Q-Learning for Multi-Objective Distributed Hybrid Flow Shop Scheduling Problem. Symmetry 2023, 15, 836.
10. Pan, Q.-K.; Tasgetiren, M.F.; Liang, Y.-C. A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem. Comput. Oper. Res. 2008, 35, 2807–2839.
11. Engin, O.; Güçlü, A. A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Appl. Soft Comput. 2018, 72, 166–176.
12. Zhang, S.J.; Gu, X.S.; Zhou, F. An improved discrete migrating birds optimization algorithm for the no-wait flow shop scheduling problem. IEEE Access 2020, 8, 99380–99392.
13. Abdel-Basset, M.; Manogaran, G.; El-Shahat, D.; Mirjalili, S. A hybrid whale optimization algorithm based on local search strategy for the permutation flow shop scheduling problem. Future Gener. Comput. Syst. 2018, 85, 129–145.
14. Avci, M. An effective iterated local search algorithm for the distributed no-wait flowshop scheduling problem. Eng. Appl. Artif. Intell. 2023, 120, 105921.
15. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
16. Zhang, H.; He, P.; Yang, X. Fault detection based on multi-scale local binary patterns operator and improved teaching-learning-based optimization algorithm. Symmetry 2015, 7, 1734–1750.
17. Shao, W.; Pi, D.; Shao, Z. A hybrid discrete optimization algorithm based on teaching–probabilistic learning mechanism for no-wait flow shop scheduling. Knowl.-Based Syst. 2016, 107, 219–234.
18. Shao, W.; Pi, D.; Shao, Z. An extended teaching-learning based optimization algorithm for solving no-wait flow shop scheduling problem. Appl. Soft Comput. 2017, 61, 193–210.
19. Du, A.-R.; Qian, B.; Hu, R.; Zhang, C.-S.; Wang, L. Modified Teaching-learning-based Optimization Algorithm for No-wait Flow-shop Green Scheduling Problem. Control. Eng. China 2017, 65, 2218–2224.
20. Zhao, F.; Qin, S.; Yang, G.; Ma, W.; Zhang, C.; Song, H. A factorial based particle swarm optimization with a population adaptation mechanism for the no-wait flow shop scheduling problem with the makespan objective. Expert Syst. Appl. 2019, 126, 41–53.
21. Qu, C.; Fu, Y.; Yi, Z.; Tan, J. Solutions to no-wait flow shop scheduling problem using the flower pollination algorithm based on the hormone modulation mechanism. Complexity 2018, 2018, 1973604.
22. Ding, J.-Y.; Song, S.; Gupta, J.N.; Zhang, R.; Chiong, R.; Wu, R. An improved iterated greedy algorithm with a Tabu-based reconstruction strategy for the no-wait flowshop scheduling problem. Appl. Soft Comput. 2015, 30, 604–613.
23. Nagano, M.S.; Almeida, F.S.; Miyata, H.H. An iterated greedy algorithm for the no-wait flowshop scheduling problem to minimize makespan subject to total completion time. Eng. Optim. 2021, 53, 1431–1449.
24. Zhao, F.; Qin, S.; Zhang, Y.; Ma, W.; Zhang, C.; Song, H. A hybrid biogeography-based optimization with variable neighborhood search mechanism for no-wait flow shop scheduling problem. Expert Syst. Appl. 2019, 126, 321–339.
25. Deng, G.; Su, Q.; Zhang, Z.; Liu, H.; Zhang, S.; Jiang, T. A population-based iterated greedy algorithm for no-wait job shop scheduling with total flow time criterion. Eng. Appl. Artif. Intell. 2020, 88, 103369.
26. Qian, B.; Zhang, Z.-Q.; Hu, R.; Jin, H.-P.; Yang, J.-B. A Matrix-Cube-Based Estimation of Distribution Algorithm for No-Wait Flow-Shop Scheduling with Sequence-Dependent Setup Times and Release Times. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 1492–1503.
27. Jin, X.; Xia, C.; Guan, N.; Zeng, P. Joint algorithm of message fragmentation and no-wait scheduling for time-sensitive networks. IEEE/CAA J. Autom. Sin. 2021, 8, 478–490.
28. Zhao, F.; He, X.; Wang, L. A two-stage cooperative evolutionary algorithm with problem-specific knowledge for energy-efficient scheduling of no-wait flow-shop problem. IEEE Trans. Cybern. 2020, 51, 5291–5303.
29. Zhao, F.; Xu, Z.; Wang, L.; Zhu, N.; Xu, T.; Jonrinaldi, J. A population-based iterated greedy algorithm for distributed assembly no-wait flow-shop scheduling problem. IEEE Trans. Ind. Inform. 2022, 19, 6692–6705.
30. Pan, Q.-K.; Wang, L.; Zhao, B.-H. An improved iterated greedy algorithm for the no-wait flow shop scheduling problem with makespan criterion. Int. J. Adv. Manuf. Technol. 2008, 38, 778–786.
31. Zhao, F.; Liu, H.; Zhang, Y.; Ma, W.; Zhang, C. A discrete water wave optimization algorithm for no-wait flow shop scheduling problem. Expert Syst. Appl. 2018, 91, 347–363.
32. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
33. Zhang, Q.; Zhang, B. Teaching-Learning-Based Optimization Algorithm for Permutation Flowshop Scheduling. J. Syst. Simul. 2022, 34, 1054–1063.
34. Reeves, C.R. A genetic algorithm for flowshop sequencing. Comput. Oper. Res. 1995, 22, 5–13.
35. Lin, S.-W.; Ying, K.-C. Optimization of makespan for no-wait flowshop scheduling problems using efficient matheuristics. Omega 2016, 64, 115–125.
Figure 1. Comparison chart of the destruction process.
Figure 2. Permutation-based mutation method.
Figure 3. Two-point inner crossing.
Figure 4. Two-point outer crossing.
Figure 5. Trend chart of ARE as a function of T.
Figure 6. Mean plot of 95% confidence interval with and without local search.
Figure 7. The BRD of MSDTLBO, PAPSO, HMM-FPA, DWWO, TMIIG, DPSO, and IIGA for Rec instances.
Figure 8. The ARD of MSDTLBO, PAPSO, HMM-FPA, DWWO, TMIIG, DPSO, and IIGA for Rec instances.
Figure 9. Gantt chart for the optimal solution of NWFSP_Rec01.
Figure 10. Gantt chart for the optimal solution of NWFSP_Rec17.
Figure 11. The ARD of MSDTLBO, PAPSO, HMM-FPA, DWWO, TMIIG, DPSO, and IIGA for Taillard instances.
Figure 12. TA060 convergence graph.
Figure 13. TA080 convergence graph.
Figure 14. TA100 convergence graph.
Figure 15. TA120 convergence graph.
Figure 16. Ranks obtained by the Friedman test.
Table 1. Analysis of heterogeneity.

Problem | Job | Tmax = 1000 (MS, MR) | Tmax = 2000 (MS, MR) | Tmax = 5000 (MS, MR)
Rec01   | 20  | 13.653, 52           | 20.086, 76.45        | 31.553, 116.2
Rec19   | 30  | 11.737, 46.4         | 16.320, 67.75        | 26.972, 109.6
Rec31   | 50  | 9.635, 44.75         | 13.483, 59.15        | 20.527, 93.85
Rec37   | 75  | 8.028, 39.1          | 11.582, 55.7         | 17.157, 88.45
Table 2. Comparison of results based on Reeves's benchmark set.

Instance | n × m   | C*   | MSDTLBO      | PAPSO        | HMM-FPA      | DWWO         | TMIIG        | DPSOVND      | IIGA
         |         |      | BRD    ARD   | BRD    ARD   | BRD    ARD   | BRD    ARD   | BRD    ARD   | BRD    ARD   | BRD    ARD
Rec01    | 20 × 5  | 1526 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec03    | 20 × 5  | 1361 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec05    | 20 × 5  | 1511 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec07    | 20 × 10 | 2042 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec09    | 20 × 10 | 2042 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec11    | 20 × 10 | 1881 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec13    | 20 × 15 | 2545 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec15    | 20 × 15 | 2529 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec17    | 20 × 15 | 2587 | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00
Rec19    | 30 × 10 | 2850 | 0.00   0.00  | 0.00   0.16  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.00  | 0.00   0.03
Rec21    | 30 × 10 | 2821 | 0.00   0.00  | 0.00   0.16  | 0.00   0.15  | 0.00   0.16  | 0.00   0.03  | 0.00   0.11  | 0.00   0.19
Rec23    | 30 × 10 | 2700 | 0.00   0.00  | 0.00   0.14  | 0.00   0.01  | 0.00   0.13  | 0.00   0.00  | 0.00   0.07  | 0.00   0.05
Rec25    | 30 × 15 | 3593 | 0.00   0.00  | 0.00   0.09  | 0.00   0.09  | 0.00   0.09  | 0.00   0.00  | 0.00   0.17  | 0.00   0.00
Rec27    | 30 × 15 | 3431 | 0.00   0.00  | 0.00   0.21  | 0.00   0.07  | 0.00   0.02  | 0.32   0.32  | 0.00   0.26  | 0.00   0.12
Rec29    | 30 × 15 | 3291 | 0.00   0.00  | 0.00   0.07  | 0.00   0.13  | 0.00   0.24  | 0.00   0.14  | 0.00   0.06  | 0.00   0.00
Rec31    | 50 × 10 | 4307 | 0.00   0.20  | 0.39   0.77  | 0.13   0.20  | 0.23   0.41  | 0.09   0.29  | 0.12   0.39  | 0.51   0.78
Rec33    | 50 × 10 | 4424 | 0.00   0.36  | 0.09   1.00  | 0.00   0.54  | 0.27   0.62  | 0.00   0.40  | 0.25   0.72  | 0.79   1.24
Rec35    | 50 × 10 | 4397 | 0.00   0.13  | 0.39   1.22  | 0.00   0.83  | 0.14   0.39  | 0.00   0.21  | 0.00   0.39  | 0.41   0.78
Rec37    | 75 × 20 | 8008 | 0.27   0.56  | 0.67   1.07  | 0.25   0.67  | 0.25   0.48  | 0.31   0.55  | 0.27   0.69  | 1.14   1.43
Rec39    | 75 × 20 | 8419 | 0.20   0.61  | 0.45   0.96  | 0.20   0.65  | 0.40   0.67  | 0.17   0.48  | 0.19   0.70  | 0.91   1.34
Rec41    | 75 × 20 | 8437 | 0.14   0.40  | 0.66   0.98  | 0.00   0.69  | 0.17   0.49  | 0.07   0.48  | 0.36   0.71  | 1.11   1.24
AVG      |         |      | 0.029  0.11  | 0.126  0.325 | 0.028  0.192 | 0.070  0.176 | 0.046  0.138 | 0.057  0.203 | 0.232  0.343
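
The BRD and ARD columns in Tables 2 and 3 are percentage deviations of the best and the average makespan obtained over repeated runs from the best-known makespan C* of each instance. The snippet below is a minimal sketch of that computation under the conventional definitions BRD = 100 × (C_best − C*)/C* and ARD = 100 × (C_avg − C*)/C*; the run data in the example are hypothetical and are not taken from the reported experiments.

```python
# Minimal sketch of the (assumed, conventional) BRD/ARD computation:
# percentage deviation of the best and the average makespan over repeated runs
# from the best-known makespan C* of an instance.
def relative_deviations(makespans, c_star):
    best = min(makespans)
    avg = sum(makespans) / len(makespans)
    brd = 100.0 * (best - c_star) / c_star   # Best Relative Deviation (%)
    ard = 100.0 * (avg - c_star) / c_star    # Average Relative Deviation (%)
    return round(brd, 2), round(ard, 2)

# Hypothetical runs on Rec01 (C* = 1526): reaching C* in every run gives (0.00, 0.00),
# which is how the all-zero rows in Table 2 arise.
print(relative_deviations([1526, 1526, 1526], 1526))
```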
Table 3. Comparison of results based on Taillard's benchmark set.

n × m    | MSDTLBO     | PAPSO       | HMM-FPA     | DWWO        | TMIIG       | DPSOVND     | IIGA
         | BRD   ARD   | BRD   ARD   | BRD   ARD   | BRD   ARD   | BRD   ARD   | BRD   ARD   | BRD   ARD
20 × 5   | 0.00  0.00  | 0.00  0.00  | 0.00  0.00  | 0.00  0.00  | 0.00  0.00  | 0.00  0.00  | 0.00  0.00
20 × 10  | 0.00  0.00  | 0.00  0.01  | 0.00  0.00  | 0.00  0.02  | 0.00  0.00  | 0.00  0.00  | 0.00  0.00
20 × 20  | 0.00  0.00  | 0.00  0.02  | 0.00  0.00  | 0.00  0.01  | 0.00  0.00  | 0.00  0.00  | 0.00  0.01
50 × 5   | 0.14  0.34  | 0.63  0.97  | 0.23  0.35  | 0.25  0.52  | 0.34  0.45  | 0.56  0.78  | 0.15  0.38
50 × 10  | 0.01  0.21  | 0.26  0.66  | 0.05  0.19  | 0.18  0.38  | 0.14  0.28  | 0.49  0.69  | 0.08  0.24
50 × 20  | 0.03  0.11  | 0.28  0.57  | 0.06  0.38  | 0.08  0.32  | 0.18  0.28  | 0.35  0.65  | 0.04  0.24
100 × 5  | 0.52  0.86  | 1.96  2.34  | 0.61  1.15  | 2.17  2.67  | 0.69  0.77  | 1.27  1.56  | 0.94  1.25
100 × 10 | 0.41  0.66  | 1.25  1.78  | 0.48  0.96  | 0.53  0.76  | 0.52  0.66  | 0.94  1.26  | 0.56  0.88
100 × 20 | 0.33  0.62  | 1.03  1.37  | 0.43  0.67  | 0.33  0.55  | 0.48  0.73  | 0.92  1.24  | 0.51  0.79
200 × 10 | 1.26  1.45  | 3.08  3.51  | 1.25  1.47  | 1.99  2.35  | 1.15  1.35  | 2.03  2.24  | 1.87  2.16
200 × 20 | 0.82  0.97  | 2.35  2.75  | 1.02  1.39  | 0.95  1.12  | 1.05  1.23  | 1.78  2.09  | 1.36  1.61
500 × 20 | 1.54  1.73  | 4.36  4.73  | 2.12  2.33  | 2.43  2.66  | 2.18  2.31  | 3.27  3.42  | 3.15  3.45
AVG      | 0.42  0.58  | 1.27  1.56  | 0.52  0.74  | 0.74  0.95  | 0.56  0.67  | 0.93  1.16  | 0.72  0.92
Table 4. Ranking of comparison algorithms obtained by the Friedman test.

Algorithm  | MSDTLBO | PAPSO | HMM-FPA | DWWO | TMIIG | DPSOVND | IIGA
Mean Rank  | 2.43    | 6.24  | 3.15    | 4.04 | 2.91  | 5.21    | 3.90
Chi-Square | 351.803
p-value    | 6.3191 × 10⁻⁷³
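
The mean ranks, chi-square statistic, and p-value in Table 4 come from a Friedman test over the per-instance results of the seven algorithms. The snippet below is a minimal sketch of such a test with SciPy; the three-row ARD matrix is only an illustrative subset (Rec21, Rec31, Rec37 from Table 2), so its output will not reproduce the values reported above, which were computed over the full benchmark sets.

```python
# Minimal sketch of a Friedman test over per-instance ARD values
# (illustrative subset only; not the paper's full data).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

algorithms = ["MSDTLBO", "PAPSO", "HMM-FPA", "DWWO", "TMIIG", "DPSOVND", "IIGA"]
ard = np.array([
    [0.00, 0.16, 0.15, 0.16, 0.03, 0.11, 0.19],  # Rec21 (ARD values from Table 2)
    [0.20, 0.77, 0.20, 0.41, 0.29, 0.39, 0.78],  # Rec31
    [0.56, 1.07, 0.67, 0.48, 0.55, 0.69, 1.43],  # Rec37
])

# Mean rank of each algorithm across instances (lower is better).
mean_ranks = rankdata(ard, axis=1).mean(axis=0)
print(dict(zip(algorithms, mean_ranks.round(2))))

# Friedman test: one argument per algorithm, containing its results on all instances.
stat, p = friedmanchisquare(*ard.T)
print(f"Chi-Square = {stat:.3f}, p-value = {p:.3e}")
```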
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
