Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for allocating a multi-task set to a cache way, which can allocate multi-task sets to cache ways reasonably, thereby solving the problems of wasted cache way resources and cache way congestion.
To achieve the above object, according to one aspect of an embodiment of the present invention, there is provided a method for allocating a multitasking set to a cache way, including:
applying for a cache way according to a task type of the multi-task set, and establishing a binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set;
acquiring the number of idle cache bits of the cache way in real time, and issuing picking containers to the bound cache way according to the number of idle cache bits; and
judging whether all the picking containers contained in the multi-task set have been issued to the bound cache way, and if so, releasing the binding relationship between the multi-task set and the cache way so that the cache way enters a to-be-bound state.
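The three steps above (bind, issue, unbind) can be sketched as follows. This is only an illustrative model of the claimed flow; all class and function names (`CacheWay`, `MultiTaskSet`, `bind`, `issue`, `maybe_unbind`) are hypothetical, not part of the claimed system:

```python
# Illustrative sketch of the claimed flow: bind, issue, unbind.
# All names here are hypothetical stand-ins for the claimed entities.

class CacheWay:
    def __init__(self, way_id, capacity):
        self.way_id = way_id
        self.capacity = capacity
        self.occupied = 0          # picking containers currently stored
        self.bound_set = None      # at most one multi-task set bound at a time

    @property
    def idle_bits(self):
        return self.capacity - self.occupied

class MultiTaskSet:
    def __init__(self, set_id, containers):
        self.set_id = set_id
        self.pending = list(containers)  # containers not yet issued

def bind(task_set, way):
    """Step 1: establish the binding relationship."""
    way.bound_set = task_set

def issue(task_set, way):
    """Step 2: issue containers, never exceeding the idle cache bits."""
    n = min(way.idle_bits, len(task_set.pending))
    issued, task_set.pending = task_set.pending[:n], task_set.pending[n:]
    way.occupied += len(issued)
    return issued

def maybe_unbind(task_set, way):
    """Step 3: once every container is issued, release the binding."""
    if not task_set.pending:
        way.bound_set = None   # way enters the to-be-bound state
        return True
    return False
```

Note that `issue` may need to be called repeatedly as cache bits free up; the binding is released only once the set's last container has been issued.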
Optionally, applying for the cache way according to the task type of the multi-task set, and establishing the binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set includes:
judging whether the multi-task set has a historically bound cache way;
if so, returning the historically bound cache way, and establishing a binding relationship between the multi-task set and the historically bound cache way; and
if not, determining a returned cache way among the cache ways corresponding to the task type of the multi-task set according to priority information of the cache ways, and establishing a binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set.
Optionally, determining the returned cache way according to the priority information of the cache ways includes:
determining the priority information of the cache ways according to the number of currently idle cache bits, wherein the greater the number of currently idle cache bits, the higher the priority of the cache way;
if the numbers of currently idle cache bits are the same, determining the priority information of the cache ways according to the single-day arrival amount, wherein the smaller the current single-day arrival amount, the higher the priority of the cache way; and
determining the returned cache ways according to the priority information of the cache ways, wherein the number of returned cache ways does not exceed the number of multi-task sets currently applying for cache ways.
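The two-level priority rule above amounts to a sort key: more idle cache bits rank first, and ties are broken by the smaller single-day arrival amount. A minimal sketch, assuming the ways are records with hypothetical `idle_bits` and `daily_arrivals` fields:

```python
def select_returned_ways(ways, num_applying_sets):
    """Rank candidate cache ways: more idle cache bits first; ties broken
    by smaller single-day arrival amount. Return no more ways than there
    are multi-task sets currently applying."""
    ranked = sorted(ways, key=lambda w: (-w["idle_bits"], w["daily_arrivals"]))
    return ranked[:num_applying_sets]

ways = [
    {"id": "W1", "idle_bits": 2, "daily_arrivals": 50},
    {"id": "W2", "idle_bits": 5, "daily_arrivals": 80},
    {"id": "W3", "idle_bits": 5, "daily_arrivals": 30},
]
# W2 and W3 both have 5 idle bits; W3 ranks first on the smaller arrival amount.
```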
Optionally, establishing the binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set includes:
determining the absolute value of the difference between the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set; and
establishing binding relationships between multi-task sets and cache ways in ascending order of the absolute value of the difference, wherein multi-task sets and cache ways are bound one to one.
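The matching rule above pairs each multi-task set with a cache way, taking pairs in ascending order of |idle cache bits − container count|. One plausible reading of the rule is a greedy one-to-one pass, sketched below with assumed field names:

```python
def bind_by_difference(task_sets, ways):
    """Greedily bind sets to ways one-to-one, pairing smallest
    |idle_bits - container_count| first. Inputs are dicts with
    assumed "id", "containers", and "idle_bits" fields."""
    pairs = sorted(
        ((abs(w["idle_bits"] - s["containers"]), s["id"], w["id"])
         for s in task_sets for w in ways),
        key=lambda p: p[0],
    )
    bound, used_sets, used_ways = {}, set(), set()
    for _, sid, wid in pairs:
        if sid not in used_sets and wid not in used_ways:
            bound[sid] = wid           # one set per way, one way per set
            used_sets.add(sid)
            used_ways.add(wid)
    return bound
```

Matching the set size to the way's vacancy in this manner avoids both leaving cache bits unused and binding a set that cannot fit.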
Optionally, acquiring the number of idle cache bits of the cache way in real time, and issuing the picking containers to the bound cache way according to the number of idle cache bits includes:
acquiring the number of idle cache bits of the cache way in real time according to the states of photoelectric sensors arranged on the cache bits; and
issuing, according to the number of idle cache bits of the cache way, no more picking containers than there are idle cache bits to the bound cache way,
until all the picking containers contained in the multi-task set have been issued to the bound cache way.
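This issuing loop, repeated until the set is exhausted, can be sketched as follows. `read_idle_bits` stands in for whatever reports the photoelectric sensor states and is purely hypothetical:

```python
def issue_all(containers, read_idle_bits, send_to_way):
    """Dispatch containers in batches no larger than the current number of
    idle cache bits, until all are issued. read_idle_bits() stands in for
    the photoelectric-sensor readout; send_to_way() for the conveyor."""
    pending = list(containers)
    while pending:
        idle = read_idle_bits()          # derived from photoelectric sensors
        if idle <= 0:
            continue                     # wait until a cache bit frees up
        batch, pending = pending[:idle], pending[idle:]
        send_to_way(batch)
```

Because each batch is capped by the current vacancy, containers never overflow the cache way even if earlier containers are removed slowly.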
Optionally, before applying for the cache way according to the task type of the multi-task set, the method further includes:
presetting an application time; and
after the application time is reached, determining the multi-task sets whose picking containers have all reached the staging area, and determining the task types of those multi-task sets.
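The periodic readiness check above might look like the following sketch; the record fields (`arrived`, `total`, `task_type`) are assumptions about how arrival counts are tracked:

```python
def find_ready_sets(all_sets):
    """Return the ids of multi-task sets whose picking containers have all
    reached the staging area, grouped by task type. Field names are
    illustrative assumptions."""
    ready = {}
    for s in all_sets:
        if s["arrived"] >= s["total"]:          # every container collected
            ready.setdefault(s["task_type"], []).append(s["id"])
    return ready
```

A scheduler would invoke this at each preset application time and then apply for cache ways in one batch for the ready sets.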
According to another aspect of an embodiment of the present invention, there is provided an apparatus for allocating a multi-task set to a cache way, including:
a binding module configured to: apply for a cache way according to the task type of the multi-task set, and establish a binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set;
an issuing module configured to: acquire the number of idle cache bits of the cache way in real time, and issue picking containers to the bound cache way according to the number of idle cache bits; and
an unbinding module configured to: judge whether all the picking containers contained in the multi-task set have been issued to the bound cache way, and if so, release the binding relationship between the multi-task set and the cache way so that the cache way enters a to-be-bound state.
Optionally, the binding module is further configured to:
judge whether the multi-task set has a historically bound cache way;
if so, return the historically bound cache way, and establish a binding relationship between the multi-task set and the historically bound cache way; and
if not, determine a returned cache way among the cache ways corresponding to the task type of the multi-task set according to priority information of the cache ways, and establish a binding relationship between the multi-task set and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set.
Optionally, the binding module is further configured to:
determine the priority information of the cache ways according to the number of currently idle cache bits, wherein the greater the number of currently idle cache bits, the higher the priority of the cache way;
if the numbers of currently idle cache bits are the same, determine the priority information of the cache ways according to the single-day arrival amount, wherein the smaller the current single-day arrival amount, the higher the priority of the cache way; and
determine the returned cache ways according to the priority information of the cache ways, wherein the number of returned cache ways does not exceed the number of multi-task sets currently applying for cache ways.
Optionally, the binding module is further configured to:
determine the absolute value of the difference between the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set; and
establish binding relationships between multi-task sets and cache ways in ascending order of the absolute value of the difference, wherein multi-task sets and cache ways are bound one to one.
Optionally, the issuing module is further configured to:
acquire the number of idle cache bits of the cache way in real time according to the states of photoelectric sensors arranged on the cache bits; and
issue, according to the number of idle cache bits of the cache way, no more picking containers than there are idle cache bits to the bound cache way,
until all the picking containers contained in the multi-task set have been issued to the bound cache way.
Optionally, the binding module is further configured to:
preset an application time; and
after the application time is reached, determine the multi-task sets whose picking containers have all reached the staging area, and determine the task types of those multi-task sets.
According to still another aspect of an embodiment of the present invention, there is provided an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for allocating a multi-task set to a cache way.
According to yet another aspect of an embodiment of the present invention, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the method for allocating a multi-task set to a cache way provided by the embodiments of the present invention.
One embodiment of the above invention has the following advantages or beneficial effects: by adopting the technical means of monitoring the number of idle cache bits of the cache way in real time, multi-task sets are allocated to cache ways reasonably, the technical problems of wasted cache way resources and cache way anomalies caused by congestion are solved, and the technical effects of improving conveyor line efficiency and reducing waiting time and anomaly handling are achieved.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Technical terms appearing in the embodiments of the present invention are explained as follows:
Cache way: a concept generated by the system by binding a task type to a physical crossing number; when a task is issued, the designated crossing can be selected according to the task type.
Aggregate list (i.e., multi-task set): a multi-aggregate list is an aggregate list containing a plurality of picking containers.
Cache bit: a position in a cache way for storing a picking container.
Staging area: a temporary storage area for picking containers; the picking containers of a multi-aggregate list are collected in the staging area before subsequent operations are performed.
Single-day arrival amount: the number of picking containers received by the cache way on the current day up to the current time.
Historically bound cache way: a cache way to which the aggregate list has previously been bound.
FIG. 1 is a schematic diagram of the basic flow of a method for allocating a multi-task set to a cache way according to an embodiment of the invention. As shown in FIG. 1, the method includes:
S101, applying for a cache way according to the task type of a multi-aggregate list (hereinafter simply an aggregate list), and establishing a binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers (hereinafter simply containers) contained in the aggregate list;
S102, acquiring the number of idle cache bits of the cache way in real time, and issuing picking containers to the bound cache way according to the number of idle cache bits; and
S103, judging whether all the picking containers contained in the aggregate list have been issued to the bound cache way, and if so, releasing the binding relationship between the aggregate list and the cache way so that the cache way enters a to-be-bound state.
According to the embodiment of the invention, by adopting the technical means of monitoring the number of idle cache bits of the cache way in real time, aggregate lists are allocated to cache ways reasonably, the technical problems of wasted cache way resources and cache way anomalies caused by congestion are solved, and the technical effects of improving conveyor line efficiency and reducing waiting time and anomaly handling are achieved.
In step S101 of the embodiment of the present invention, applying for the cache way according to the task type of the aggregate list, and establishing the binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list includes: judging whether the aggregate list has a historically bound cache way; if so, returning the historically bound cache way, and establishing a binding relationship between the aggregate list and the historically bound cache way; if not, determining a returned cache way among the cache ways corresponding to the task type of the aggregate list according to priority information of the cache ways, and establishing a binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list.
Based on the foregoing embodiments, in the embodiment of the present invention, determining the returned cache way according to the priority information of the cache ways includes: determining the priority information of the cache ways according to the number of currently idle cache bits, wherein the greater the number of currently idle cache bits, the higher the priority of the cache way; if the numbers of currently idle cache bits are the same, determining the priority information according to the single-day arrival amount, wherein the smaller the current single-day arrival amount, the higher the priority of the cache way; and determining the returned cache ways according to the priority information, wherein the number of returned cache ways does not exceed the number of aggregate lists currently applying for cache ways. Determining the returned cache way according to the current priority information of the cache ways makes the allocation of aggregate lists more reasonable.
In step S101 of the embodiment of the present invention, establishing the binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list includes: determining the absolute value of the difference between the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list; and establishing binding relationships between aggregate lists and cache ways in ascending order of the absolute value of the difference. Aggregate lists and cache ways are bound one to one: one aggregate list binds one cache way, and one cache way is bound to one aggregate list at a time. Establishing the binding relationship according to the absolute value of the difference makes the allocation of aggregate lists more reasonable.
In step S101 of the embodiment of the present invention, before applying for the cache way according to the task type of the aggregate list, the method further includes: presetting an application time; and after the application time is reached, determining the aggregate lists whose picking containers have all reached the staging area, and determining the task types of those aggregate lists. In the embodiment of the invention, after the staging area has collected all the picking containers under an aggregate list, cache ways are applied for in batches at the preset times, which is more accurate and efficient than the prior art.
In step S102 of the embodiment of the present invention, acquiring the number of idle cache bits of the cache way in real time, and issuing the picking containers to the bound cache way according to the number of idle cache bits includes: acquiring the number of idle cache bits of the cache way in real time according to the states of photoelectric sensors arranged on the cache bits; and issuing, according to the number of idle cache bits of the cache way, no more picking containers than there are idle cache bits to the bound cache way, until all the picking containers contained in the aggregate list have been issued to the bound cache way. Because the cache way reports its empty positions as they appear, even if a picking container is not removed in time, later picking containers will not overflow, which maximizes conveyor line efficiency. Because each cache bit is provided with a corresponding photoelectric sensor, whenever a sensor state changes, the device reports to the WCS (warehouse control system, the automation system controlling the base-layer logistics devices) in time, and the WCS immediately updates the vacancy status of the cache way. Thus, even if a picking container has not been removed, the WMS (warehouse management system) can issue container delivery tasks in the specified number according to the real-time vacancy information of the WCS. Under normal system operation, container overflow does not occur. Moreover, because vacancies are updated in real time, once an aggregate list has been fully collected, its binding relationship can be released, the next aggregate list can be bound immediately, and the positions vacated by the previous aggregate list can be utilized in time.
In step S102 of the embodiment of the present invention, the method further includes: after a picking container is moved to the bound cache way, performing a rechecking operation on the picking container. Performing the rechecking operation as each picking container reaches the cache way means that staff do not need to wait for all picking containers to be gathered, which improves rechecking speed.
FIG. 2 is a schematic diagram of a preferred flow of a method for allocating a multi-task set to a cache way according to an embodiment of the present invention. As shown in FIG. 2, the arrival of picking containers belonging to the same batch of aggregate lists in the staging area is monitored; each time a picking container arrives in the staging area, its arrival is reported to the WMS, and the WMS judges whether all picking containers of an aggregate list have been collected in the staging area. After judging that they have, the WMS applies to the WCS for a free cache way of the specified task type, and the WCS recommends a cache way and returns it to the WMS according to a preset rule (the priority information of the cache ways). The WMS holds the total number of picking containers under each aggregate list, periodically determines whether there are fully collected aggregate lists in the staging area, and periodically applies for available cache ways in batches. The WCS updates the cache way vacancies fed back by the devices in real time, so that the WMS can apply for a suitable cache way at the first opportunity. The logic by which the WMS applies for an idle cache way and the WCS returns one is specifically: the WMS passes in the task type and the aggregate list number, and the WCS preferentially returns the cache way already bound to that aggregate list. If no cache way is bound, the crossing with more vacancies and a smaller single-day arrival amount under the corresponding task type is preferentially allocated and returned. The returned crossing is locked in the system to prevent concurrent applications; thereafter, the WMS distributes the applied cache ways itself and calls the WCS interface to establish the binding relationships according to the numbers of vacancies. The WCS feeds back whether the binding was successful.
The WMS selects a cache way and applies to the WCS to bind the aggregate list to that cache way, preventing it from being occupied by other aggregate lists. The WMS then issues a corresponding number of out-of-staging-area tasks to the WCS according to the number of idle cache bits, and the WCS notifies the devices to convey the corresponding picking containers from the staging area to the designated cache crossing. After each picking container arrives at the cache crossing, a rechecking operation is performed; the devices report the number of arrived picking containers to the WMS, and the WMS judges whether all picking containers under the aggregate list have arrived at the cache crossing. Once they all have, the cache way is released and can accept a new binding relationship, so that its idle cache bits can be occupied by the picking containers of other aggregate lists.
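The WMS/WCS exchange described above can be summarized as a toy simulation. Both classes below are illustrative stand-ins; none of the method names represent a real API, and the vacancy handling is deliberately simplified:

```python
# Hypothetical sketch of the WMS <-> WCS exchange described above.
# Both classes are toy stand-ins; no method name is a real API.

class WCS:
    def __init__(self, ways):
        self.ways = ways              # way_id -> idle cache bits
        self.bindings = {}            # way_id -> bound aggregate list id

    def apply_for_way(self, list_id):
        # Prefer the way already bound to this aggregate list;
        # otherwise return the unbound way with the most vacancies.
        for way, bound in self.bindings.items():
            if bound == list_id:
                return way
        free = [w for w in self.ways if w not in self.bindings]
        return max(free, key=lambda w: self.ways[w])

    def bind(self, list_id, way):
        self.bindings[way] = list_id   # lock against concurrent applications

    def unbind(self, way):
        del self.bindings[way]         # way re-enters the to-be-bound state

class WMS:
    def __init__(self, wcs):
        self.wcs = wcs

    def dispatch(self, list_id, containers):
        way = self.wcs.apply_for_way(list_id)
        self.wcs.bind(list_id, way)
        pending, arrived = list(containers), []
        while pending:
            idle = self.wcs.ways[way]             # real-time vacancy from WCS
            batch, pending = pending[:idle], pending[idle:]
            arrived.extend(batch)                 # containers reach the way
        self.wcs.unbind(way)                      # all arrived: release binding
        return way, arrived
```

The sketch omits the single-day-arrival tie-break, the rechecking step, and sensor-driven vacancy updates, but it captures the apply/bind/issue/unbind handshake between the two systems.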
FIG. 3 is a schematic diagram of the basic modules of an apparatus for allocating a multi-task set to a cache way according to an embodiment of the invention. As shown in FIG. 3, the apparatus includes:
a binding module 301 configured to: apply for a cache way according to the task type of the aggregate list, and establish a binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list;
an issuing module 302 configured to: acquire the number of idle cache bits of the cache way in real time, and issue picking containers to the bound cache way according to the number of idle cache bits; and
an unbinding module 303 configured to: judge whether all the picking containers contained in the aggregate list have been issued to the bound cache way, and if so, release the binding relationship between the aggregate list and the cache way so that the cache way enters a to-be-bound state.
In the embodiment of the present invention, the binding module 301 is further configured to: judge whether the aggregate list has a historically bound cache way; if so, return the historically bound cache way, and establish a binding relationship between the aggregate list and the historically bound cache way; if not, determine a returned cache way among the cache ways corresponding to the task type of the aggregate list according to priority information of the cache ways, and establish a binding relationship between the aggregate list and the cache way according to the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list.
In the embodiment of the present invention, the binding module 301 is further configured to: determine the priority information of the cache ways according to the number of currently idle cache bits, wherein the greater the number of currently idle cache bits, the higher the priority of the cache way; if the numbers of currently idle cache bits are the same, determine the priority information according to the single-day arrival amount, wherein the smaller the current single-day arrival amount, the higher the priority of the cache way; and determine the returned cache ways according to the priority information, wherein the number of returned cache ways does not exceed the number of aggregate lists currently applying for cache ways.
In the embodiment of the present invention, the binding module 301 is further configured to: determine the absolute value of the difference between the number of currently idle cache bits in the returned cache way and the number of picking containers contained in the aggregate list; and establish binding relationships between aggregate lists and cache ways in ascending order of the absolute value of the difference, wherein aggregate lists and cache ways are bound one to one.
In the embodiment of the present invention, the issuing module 302 is further configured to: acquire the number of idle cache bits of the cache way in real time according to the states of photoelectric sensors arranged on the cache bits; and issue, according to the number of idle cache bits of the cache way, no more picking containers than there are idle cache bits to the bound cache way, until all the picking containers contained in the aggregate list have been issued to the bound cache way.
In the embodiment of the invention, the apparatus further comprises a rechecking module configured to: perform a rechecking operation on a picking container after the picking container is moved to the bound cache way.
In the embodiment of the present invention, the binding module 301 is further configured to: preset an application time; and after the application time is reached, determine the aggregate lists whose picking containers have all reached the staging area, and determine the task types of those aggregate lists.
FIG. 4 illustrates an exemplary system architecture 400 to which the method or the apparatus for allocating a multi-task set to a cache way according to embodiments of the present invention may be applied.
As shown in fig. 4, the system architecture 400 may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 is used as a medium to provide communication links between the terminal devices 401, 402, 403 and the server 405. The network 404 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 405 via the network 404 using the terminal devices 401, 402, 403 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 401, 402, 403.
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 405 may be a server providing various services, such as a background management server providing support for shopping websites browsed by users of the terminal devices 401, 402, 403. The background management server can analyze received data such as a product information query request and feed the processing result back to the terminal device.
It should be noted that the method for allocating a multi-task set to a cache way provided in the embodiment of the present invention is generally executed by the server 405; accordingly, the apparatus for allocating a multi-task set to a cache way is generally disposed in the server 405.
It should be understood that the number of terminal devices, networks and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
According to an embodiment of the present invention, the present invention also provides an electronic device and a computer-readable storage medium.
An electronic device according to an embodiment of the present invention includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for allocating a multi-task set to a cache way.
A computer-readable medium according to an embodiment of the present invention has stored thereon a computer program which, when executed by a processor, implements the method for allocating a multi-task set to a cache way provided by the present invention.
Referring now to FIG. 5, a schematic diagram of a computer system 500 suitable for implementing an embodiment of the present invention is illustrated. The computer system shown in FIG. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output portion 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as needed so that a computer program read therefrom is mounted into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable media 511. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 501.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, which may be described, for example, as: a processor including a binding module, an issuing module and an unbinding module. The names of these modules do not in any way limit the modules themselves; for example, the binding module may also be described as "a module for establishing a binding relationship between the multi-task set and the cache way".
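The module decomposition above can be sketched as follows. This is a minimal illustrative sketch, not the embodiment's actual implementation: all class and attribute names (`CacheWay`, `BindingModule`, `IssuingModule`, `UnbindingModule`, `idle_cache_bits`, `bound_task_set`) are hypothetical.

```python
# Hypothetical sketch of the binding / issuing / unbinding modules.
# Names are illustrative, not taken from the embodiment.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CacheWay:
    way_id: int
    idle_cache_bits: int                     # currently free cache positions
    bound_task_set: Optional[int] = None     # id of the bound multi-task set


class BindingModule:
    """Establishes the binding between a multi-task set and a cache way."""

    def bind(self, task_set_id: int, way: CacheWay) -> None:
        way.bound_task_set = task_set_id


class IssuingModule:
    """Issues picking containers to the bound cache way as bits free up."""

    def issue(self, way: CacheWay, containers: List[str]) -> List[str]:
        n = min(way.idle_cache_bits, len(containers))
        way.idle_cache_bits -= n
        return containers[:n]                # containers actually sent this round


class UnbindingModule:
    """Releases the binding once every container has been issued."""

    def unbind(self, way: CacheWay) -> None:
        way.bound_task_set = None            # way re-enters the to-be-bound state
```

Splitting the apparatus this way mirrors the three method steps: each module owns exactly one transition in the cache way's lifecycle.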
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by a device, cause the device to: apply for a cache way according to the task type of the multi-task set, and establish a binding relation between the multi-task set and the cache way according to the number of current idle cache bits in the returned cache way and the number of picking containers contained in the multi-task set; acquire the number of idle cache bits of the cache way in real time, and issue picking containers to the bound cache way according to the number of idle cache bits; and judge whether all picking containers contained in the multi-task set have been issued to the bound cache way, and if so, release the binding relation between the multi-task set and the cache way, so that the cache way enters a state to be bound.
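The three steps carried by the program can be sketched procedurally. This is an illustrative sketch only: `allocate_task_set` and its parameters are hypothetical names, and the `way_idle_bits` list stands in for the real-time polling of idle cache bits.

```python
# Illustrative sketch of the three method steps: bind, issue in rounds
# according to the observed idle cache bits, then release the binding.
# All names are hypothetical.
from typing import List


def allocate_task_set(way_idle_bits: List[int],
                      containers: List[str]) -> List[List[str]]:
    """Issue picking containers to a bound cache way, round by round.

    way_idle_bits simulates the idle-cache-bit count observed in each
    real-time polling round (step 2).  Raises if the rounds end before
    every container has been issued (the blocked-way case).
    """
    issued_rounds: List[List[str]] = []
    remaining = list(containers)
    for idle_bits in way_idle_bits:          # step 2: real-time idle-bit query
        if not remaining:
            break
        batch, remaining = remaining[:idle_bits], remaining[idle_bits:]
        issued_rounds.append(batch)          # issue up to idle_bits containers
    if remaining:
        raise RuntimeError("cache way blocked: containers still pending")
    # step 3: all containers issued -> binding released, way awaits rebinding
    return issued_rounds
```

For example, with two polling rounds reporting 2 and 3 idle cache bits, four containers are issued in batches of 2 and 2, after which the binding is released.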
According to the technical scheme provided by the embodiment of the invention, by adopting the technical means of monitoring the number of idle cache bits of the cache way in real time, the multi-task set is reasonably allocated to the cache way, thereby solving the technical problems that cache way resources are wasted and the cache way is blocked, and further achieving the technical effects of improving the efficiency of the conveying line and reducing waiting time and the occurrence of abnormalities.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.