CN115051958A - Cache allocation method, device and equipment - Google Patents
Cache allocation method, device and equipment
- Publication number
- CN115051958A (application CN202210387595.5A)
- Authority
- CN
- China
- Prior art keywords
- cache
- preset
- port
- limit value
- pools
- Prior art date
- 2022-04-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/83—Admission control; Resource allocation based on usage prediction
- H04L47/82—Miscellaneous aspects
- H04L47/827—Aggregation of resource allocation or reservation requests
- H04L47/78—Architectures of resource allocation
- H04L47/788—Autonomous allocation of resources
Abstract
The application provides a cache allocation method, apparatus and device in the field of communication technologies. The method comprises: when multiple low-rate services are transmitted in one channel, determining the target port of a received service message from the message; acquiring the preset cache upper limit value, the preset cache lower limit value and the current cache occupancy of the target port; and allocating a target number of cache pools from a shared cache to the target port according to the current cache occupancy and the two preset limit values. Because each port is configured with a preset cache upper limit value and a preset cache lower limit value, cache pools can be allocated to a port reasonably during cache sharing. This prevents one type of service from occupying a large amount of the shared cache during a burst, ensures that other services can still be allocated suitable cache resources, and improves resource utilization.
Description
Technical Field
The present application relates to the field of network communication technologies, and in particular, to a method, an apparatus, and a device for allocating a cache.
Background
With the development of network technologies, multiple low-rate services are usually transmitted in one transmission channel to improve the channel's bandwidth utilization. Each service needs cache resources during transmission, and different services may need different amounts; to avoid cache shortages, the cache resources can be shared among the services, saving cache space.
In the prior art, when cache resources are shared, the amount of shared cache space that an enqueued message will occupy is first estimated from historical cache usage, and the allocation is then determined from the message's priority and that estimate.
However, the historical cache usage of a message cannot fully predict its future usage, and computing it adds complexity and consumes resources of its own. Moreover, when a burst occurs in one type of service, the prior-art allocation lets that service occupy a large amount of the shared cache, leaving no cache for other types of services; their messages are then discarded entirely and transmission quality degrades.
Disclosure of Invention
The application provides a cache allocation method, apparatus and device to solve the problem that, when the shared cache is monopolized by one service, no cache is available for the messages of other services and their transmission quality suffers.
In a first aspect, an embodiment of the present application provides a cache allocation method, applied to a transmission device in communication that transmits two or more low-rate services in one channel. The method includes:
determining a target port of a received service message according to the received service message;
acquiring a preset cache upper limit value, a preset cache lower limit value and current cache occupation of the target port;
and allocating a target number of cache pools from a shared cache to the target port according to the current cache occupancy, the preset cache upper limit value and the preset cache lower limit value, wherein the shared cache is used for cache sharing among the ports and comprises at least one cache pool for caching service messages.
In a possible design of the first aspect, the allocating a target number of cache pools from a shared cache to the target port according to the current cache occupancy, a preset cache upper limit value, and a preset cache lower limit value includes:
determining the number of the cache pools currently occupied by the target port according to the current cache occupation;
if the number is equal to the preset cache upper limit value, stopping allocating a cache pool from the shared cache to the target port;
if the number is smaller than the preset cache upper limit value, acquiring the sum of the preset cache lower limit values of all the ports;
determining the target number according to the sum and the number of cache pools currently remaining in the shared cache;
and allocating the target number of cache pools from the shared cache to the target port.
In another possible design of the first aspect, the method further includes:
dividing the shared cache into more than two cache pools and storing the cache pools into a preset queue;
and identifying each cache pool, and determining the sequence number of each cache pool, wherein the sequence number is used for indicating the sequence of the cache pools in the preset queue.
In still another possible design of the first aspect, the method further includes:
and selecting a target number of cache pools from the preset queue to allocate to the target port according to the order of the cache pools in the preset queue.
In yet another possible design of the first aspect, the method further includes:
acquiring a sequence number of a cache pool allocated to the target port;
and reading the service message cached in the cache pool allocated to the target port according to the sequence number.
In yet another possible design of the first aspect, the method further includes:
and after the service message cached in the cache pool is read, storing the cache pool back to the preset queue.
In yet another possible design of the first aspect, the method further includes:
acquiring a current cache occupation value and a preset cache lower limit value of each port in the transmission equipment;
determining the total amount of the sharable cache pool according to the total amount of the cache pool and the sum of the preset cache lower limit values of the ports;
determining the number of sharable cache pools occupied by each port according to its current cache occupation value and its preset cache lower limit value;
when the total amount of the sharable cache pools is less than the sum of the number of the sharable cache pools occupied by each port, determining the port with the current cache occupation value greater than or equal to the preset cache lower limit value as a port to be processed;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
In yet another possible design of the first aspect, the method further includes:
when the current cache occupation of the target port is larger than the preset cache upper limit value, discarding the service message currently received by the target port;
and returning the cache pool caching the service message in the target port to the shared cache.
In a second aspect, an embodiment of the present application provides a cache allocation apparatus, including:
the determining module is used for determining a target port for receiving the service message according to the received service message;
the acquisition module is used for acquiring a preset cache upper limit value, a preset cache lower limit value and current cache occupation of the target port;
and the allocation module is used for allocating a target number of cache pools from a shared cache to the target port according to the current cache occupancy, the preset cache upper limit value and the preset cache lower limit value, wherein the shared cache is used for cache sharing among the ports and comprises at least one cache pool for caching service messages.
In a third aspect, an embodiment of the present application provides a transmission device, including: a processor, and a memory communicatively coupled to the processor;
the memory stores execution instructions;
the processor executes the execution instructions stored by the memory to implement the method as described above.
According to the cache allocation method, apparatus and device provided in the embodiments of the application, a preset cache upper limit value and a preset cache lower limit value are configured for each port. During cache sharing, cache pools can be allocated to a port reasonably according to these two limits and the port's current cache occupancy. This prevents one type of service from occupying a large amount of the shared cache in a burst, ensures that other services can still be allocated suitable cache resources, and improves the transmission of multiple services.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application;
fig. 1 is a schematic view of a first scenario of a cache allocation method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a cache allocation method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a shared cache according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a second embodiment of a cache allocation method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a first embodiment of a cache allocation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a cache allocation apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a transmission device according to an embodiment of the present application.
The above drawings illustrate specific embodiments of the present application, which are described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concept in any way, but to explain the concepts of the application to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms referred to in this application are explained first:
sharing the cache:
The shared cache is a common resource allocation strategy in the field of network communication: multiple services share one cache space, which saves storage when the services are transmitted in one channel.
Fig. 1 is a schematic view of a scenario of the cache allocation method provided in an embodiment of the present application. As shown in fig. 1, a transmission device 10 is connected to a terminal device 11; take the transmission device 10 as a switch and the terminal device 11 as a computer as an example. When a port of the switch receives the first data packet of a data stream, it reads the packet's source MAC address field and associates the source MAC address with the receiving port. If the transmission channel of a port carries only one low-rate service, bandwidth is easily wasted, so multiple low-rate services are aggregated into one transmission channel. However, services of different rates in the same channel are processed differently and need caches of different sizes, and with many service types, statically allocating a cache to each service rate requires a large amount of cache resources. The shared cache offers a way to save these resources: when cache resources are scarce, services of all rates share them, so the cache is fully utilized.
In practice, when cache resources are shared, an appropriate amount of cache can be allocated to a service from the shared cache according to the cache occupancy the service currently needs, so that the service is transmitted normally. In the prior art, the shared cache is divided among services of different priorities by estimating each service's cache occupancy in advance and assigning priorities. However, with this method, when a burst occurs in one type of service, that service easily occupies most of the shared cache, other types of services have no cache available and are discarded entirely, and the transmission quality drops. In addition, historical cache occupancy cannot accurately predict future occupancy, so the estimation adds algorithmic complexity and consumes extra resources.
In view of these problems, embodiments of the present application provide a cache allocation method, apparatus and device. The shared cache is divided into multiple cache pools, and a preset cache upper limit value and a preset cache lower limit value are configured for each port. During cache sharing, cache pools can be allocated to a port reasonably according to these two limits and the port's current cache occupancy, which prevents one type of service from occupying a large amount of the shared cache in a burst, ensures that other services can still be allocated suitable cache resources, and improves the transmission of multiple services. At the same time, neither cache estimation nor priority setting is needed, which simplifies cache control and improves resource utilization.
The technical solution of the present application will be described in detail below with reference to specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flow chart of a cache allocation method according to an embodiment of the present application. The method can be applied to a transmission device such as a switch, which includes a plurality of ports and can transmit two or more low-rate services in the same channel through those ports. As shown in fig. 2, the method may include the following steps:
s201, determining a target port of the received service message according to the received service message.
The transmission device includes two or more ports. Illustratively, each port is identified by a unique port number, and the service messages of the ports are aggregated into the same channel for transmission.
In this embodiment, multiple services may be transmitted in a transmission channel in an aggregated manner. When a service message is received, de-aggregation may be performed first to determine the port number corresponding to the received message, and the target port is then determined from the port number. The port numbers corresponding to different messages may differ.
The service message may specifically be an Ethernet packet or the like.
S202, acquiring a preset cache upper limit value, a preset cache lower limit value and current cache occupation of the target port.
In this embodiment, the preset cache upper limit value indicates the maximum number of cache pools that can be allocated to the target port, and the preset cache lower limit value indicates the minimum number of cache pools allocated to the target port. These two values may differ from port to port. By setting them, every port is guaranteed both an exclusive set of cache pools and access to sharable cache pools, which ensures normal service transmission on each port.
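For illustration, the per-port limits described above can be pictured as a small accounting record, as in the following C sketch; the structure and field names are assumptions made for this example rather than part of the described method.

```c
#include <stdint.h>

/* Hypothetical per-port cache accounting; all limits are counted in
 * cache pools, matching the description above. */
typedef struct {
    uint16_t upper_limit; /* preset cache upper limit: most pools the port may hold */
    uint16_t lower_limit; /* preset cache lower limit: pools reserved exclusively */
    uint16_t occupied;    /* current cache occupancy, in pools */
} port_cfg_t;
```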
And S203, allocating the target number of cache pools from the shared cache to the target port according to the current cache occupancy, the preset cache upper limit value and the preset cache lower limit value.
The shared cache is used for cache sharing among the ports, and comprises at least one cache pool used for caching the service messages.
For example, fig. 3 is a schematic structural diagram of a shared cache provided in an embodiment of the present application. As shown in fig. 3, the shared cache may be divided into 256 cache pools arranged from top to bottom in queue order. The shared cache is allocated first-in first-out: an available cache pool is taken from the head of the queue when allocated, and placed at the tail of the queue when returned.
For example, the size of the cache pool is configurable, and the number of the cache pools is also configurable.
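As a minimal sketch of the structure in fig. 3, the free cache pools can be tracked as a first-in first-out ring of pool sequence numbers: allocation takes from the head, and returned pools go to the tail. The pool count and all names below are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define POOL_COUNT 256            /* number of cache pools; configurable */

/* FIFO of free pool sequence numbers (0..POOL_COUNT-1). */
typedef struct {
    uint16_t ids[POOL_COUNT];
    uint16_t head, tail;
    uint16_t free;                /* pools currently available */
} pool_fifo_t;

static void fifo_init(pool_fifo_t *q) {
    for (uint16_t i = 0; i < POOL_COUNT; i++) q->ids[i] = i;
    q->head = 0; q->tail = 0; q->free = POOL_COUNT;
}

/* Take one available pool from the queue head; false if none is left. */
static bool fifo_take(pool_fifo_t *q, uint16_t *id) {
    if (q->free == 0) return false;
    *id = q->ids[q->head];
    q->head = (uint16_t)((q->head + 1) % POOL_COUNT);
    q->free--;
    return true;
}

/* Return a pool to the queue tail once its contents have been read. */
static void fifo_put(pool_fifo_t *q, uint16_t id) {
    q->ids[q->tail] = id;
    q->tail = (uint16_t)((q->tail + 1) % POOL_COUNT);
    q->free++;
}
```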
In this embodiment, when allocating cache pools from the shared cache to the target port, the number of remaining allocable cache pools in the shared cache may be detected first. This number is determined from the sum of the preset cache lower limit values of all ports of the transmission device: the sum is subtracted from the total number of cache pools of the shared cache, and the difference is the number of remaining allocable cache pools.
For example, when the number of remaining allocable cache pools is large enough to meet the target port's current cache demand (that is, the target number is smaller than the number of remaining allocable cache pools), the target number of cache pools may be allocated to the target port from the shared cache directly according to the target port's current cache occupancy, the preset cache lower limit value and the preset cache upper limit value. When the remaining allocable cache pools cannot meet the target port's current demand, the number of remaining allocable cache pools may be used as the target number.

When determining the target number, the current cache occupancy may be compared with the preset cache upper limit value. When they are equal, the target number may be 0, that is, no cache pool is allocated to the target port and the service message may be discarded for lack of a cache pool. If the current cache occupancy lies between the preset cache lower limit value and the preset cache upper limit value, a target number of cache pools can still be allocated to the target port; the target number is determined by the cache space the service message needs to occupy, and may neither exceed the number of remaining allocable cache pools of the shared cache nor push the target port's cache occupancy above the preset cache upper limit value. If the current cache occupancy is below the preset cache lower limit value, the target number of cache pools can likewise be allocated to the target port.
In the embodiment of the application, dividing the shared cache into multiple cache pools and setting a preset cache upper limit value and a preset cache lower limit value simplifies allocation and management during cache sharing: no priority needs to be set for each port and no cache occupancy needs to be estimated, which reduces resource consumption. Meanwhile, when a burst in one service occupies a large amount of the shared cache, the other services are still guaranteed cache resources by their preset cache lower limit values, so no service is left without cache.
In some embodiments, when a target number of cache pools are allocated to the target port, the sequence numbers of those cache pools may be recorded (e.g., sequence numbers 0-255). Recording the sequence numbers makes subsequent data reading convenient, since the service messages stored in the cache pools can then be read in order.
In some embodiments, the step S203 may specifically include the following steps:
determining the number of cache pools currently occupied by the target port according to the current cache occupation;
if the number is equal to the preset upper limit value of the cache, stopping distributing the cache pool from the shared cache to the target port;
if the number is smaller than the preset cache upper limit value, the sum of the preset cache lower limit values of all the ports is obtained;
determining the target number according to the sum and the number of the remaining cache pools of the shared cache;
and allocating a target number of cache pools from the shared cache to the target ports.
In this embodiment, to avoid the situation where an excessive service burst at the target port over-occupies the shared cache and leaves no cache for other ports, a preset cache upper limit value is set: when the number of cache pools currently occupied by the target port equals this value, no further cache pool is allocated to the target port and its service messages are discarded.
When the target port has no excessive burst, the number of cache pools the shared cache can currently allocate is determined as the difference between the number of cache pools the shared cache currently contains (i.e., the total number of cache pools of the shared cache) and the sum of the preset cache lower limit values of the ports. The target number is then determined from this allocable number and the number of cache pools the current service message needs to occupy (the target number cannot exceed the allocable number).
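Putting these steps together, the allocation decision might be sketched as follows, reusing port_cfg_t and pool_fifo_t from the sketches above; `need` (the pools the incoming message requires) and the treatment of outstanding reservations are assumptions of this example.

```c
/* Grant the target port up to `need` pools, honoring the preset upper
 * limit and the lower-limit reservations of the other ports (S203). */
static uint16_t allocate_pools(pool_fifo_t *q, port_cfg_t *ports,
                               int nports, int target, uint16_t need) {
    port_cfg_t *p = &ports[target];

    /* At the preset upper limit: stop allocating to this port. */
    if (p->occupied >= p->upper_limit) return 0;

    /* Pools still owed to other ports sitting below their lower limit;
     * these must stay in the shared cache. */
    uint32_t reserved = 0;
    for (int i = 0; i < nports; i++) {
        if (i != target && ports[i].occupied < ports[i].lower_limit)
            reserved += (uint32_t)(ports[i].lower_limit - ports[i].occupied);
    }
    uint32_t shareable = q->free > reserved ? q->free - reserved : 0;

    /* Clamp the grant so neither the upper limit nor the shareable
     * remainder is exceeded. */
    uint32_t grant = need;
    if (grant > (uint32_t)(p->upper_limit - p->occupied))
        grant = (uint32_t)(p->upper_limit - p->occupied);
    if (grant > shareable) grant = shareable;

    for (uint32_t k = 0; k < grant; k++) {
        uint16_t id;
        if (!fifo_take(q, &id)) { grant = k; break; }
        /* the pool's sequence number would be recorded against the
         * port here so that reads can follow allocation order */
    }
    p->occupied = (uint16_t)(p->occupied + grant);
    return (uint16_t)grant;
}
```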
By setting the preset cache upper limit value and the preset cache lower limit value, the embodiment of the application prevents an excessive burst at one port from occupying the whole shared cache and leaving other ports without available cache, achieving effective use of resources.
Further, in some embodiments, a flow control mechanism may be configured: when the target port's burst drives its cache occupancy too high, data packets may be discarded and the inflow of service messages controlled. The flow control mechanism thus prevents a large burst at the target port from occupying most of the cache.
In some embodiments, the above method further comprises the steps of:
dividing a shared cache into more than two cache pools and storing the cache pools into a preset queue;
and identifying each cache pool, and determining the sequence number of each cache pool.
The sequence number indicates the position of the cache pool in the preset queue. For example, referring to fig. 3, the shared cache is divided into 256 cache pools, and the preset queue may be a first-in first-out queue in which each cache pool has a corresponding sequence number (e.g., 0-255).
In some embodiments, a usage table for recording usage of each cache pool and usage of the cache pool by each service may be set to implement management of the cache pools.
In the embodiment of the application, the sequence numbers identify the order of the cache pools in the preset queue, so the number of cache pools currently remaining in the queue can be determined quickly and allocation is convenient. Meanwhile, when a cache pool has been read and needs to be stored back into the preset queue, the pools can be stored back in order of their sequence numbers, keeping the cache pools managed in an orderly way.
In some embodiments, the method may further include the steps of:
and selecting a target number of cache pools from the preset queue to allocate to the target port according to the order of the cache pools in the preset queue.
For example, referring to fig. 3 above and taking the target number as 3, three cache pools need to be selected from the preset queue and allocated to the target port; according to the order of the cache pools in the preset queue, the selected pools are cache pool 0, cache pool 1 and cache pool 2.
In the embodiment of the application, cache pools are allocated to the target port in first-in first-out order, so once pools have been allocated to a port, data can be read according to the pools' sequence numbers, which facilitates reading.
Further, in some embodiments, the method may further include the steps of:
acquiring a serial number of a cache pool allocated to a target port;
and reading the service message cached in the cache pool allocated to the target port according to the sequence number.
In this embodiment, the service messages cached in the cache pools may be read in the order in which the pools were allocated. Illustratively, when the target port was allocated cache pool 0, cache pool 1 and cache pool 2 in sequence for storing service messages, the messages stored in cache pool 0 may be read first, then those in cache pool 1, and finally those in cache pool 2.
In some embodiments, the method may further include the steps of:
and storing the cache pool back to a preset queue after the service message cached in the cache pool is read.
In this embodiment, after the service messages cached in a cache pool have been read, the cache pool is released and stored back into the preset queue so that it can be allocated again, realizing cache sharing.
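A hedged sketch of this read-and-release path, with the pool sequence numbers recorded at allocation time and a caller-supplied drain routine standing in for the actual message read-out:

```c
/* Read a port's cached messages in the order the pools were granted,
 * then store each drained pool back into the free queue. */
static void read_and_release(pool_fifo_t *q, port_cfg_t *p,
                             const uint16_t *pool_ids, uint16_t npools,
                             void (*read_pool)(uint16_t id)) {
    for (uint16_t i = 0; i < npools; i++) {
        read_pool(pool_ids[i]);    /* e.g. pool 0, then pool 1, then pool 2 */
        fifo_put(q, pool_ids[i]);  /* released pool rejoins the queue tail */
        p->occupied--;
    }
}
```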
In some embodiments, the method may further include the steps of:
acquiring a current cache occupation value and a preset cache lower limit value of each port in transmission equipment;
determining the total amount of the sharable cache pool according to the total amount of the cache pool and the sum of the preset cache lower limit values of all the ports;
determining the number of sharable cache pools occupied by each port according to its current cache occupation value and its preset cache lower limit value;
when the total amount of the sharable cache pools is less than the sum of the number of the sharable cache pools occupied by each port, determining the port of which the current cache occupation value is greater than or equal to a preset cache lower limit value as a port to be processed;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
In this embodiment, a large service burst at some ports may occupy so much cache that other ports have none to use. The total number of sharable cache pools may be determined first (total number of cache pools minus the sum of the preset cache lower limit values of the ports), then the number of sharable cache pools occupied by each port (its current cache occupation value minus its preset cache lower limit value), and finally whether to allocate cache to the target port is decided by comparing the total number of sharable cache pools with the sum of the sharable cache pools occupied by the ports.
When the total number of sharable cache pools is greater than the sum of the sharable cache pools occupied by the ports, shared cache pools are still available, and a port that needs cache for its services can continue to be allocated pools. When the total is less than or equal to that sum, no cache pool is currently available for sharing, which may mean some port's services are bursting and occupying a large amount of cache; the data received by any port whose current cache occupation value is greater than or equal to its preset cache lower limit value is then discarded and no cache is allocated to it, so that no port is left without cache.
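This check could be sketched as below, again reusing port_cfg_t from the earlier sketches; the function name and the pure-accounting formulation are assumptions.

```c
#include <stdbool.h>

/* True when the sharable part of the cache is exhausted and the given
 * port already sits at or above its preset lower limit, i.e. the port
 * becomes a "port to be processed" whose message should be discarded. */
static bool should_discard(const port_cfg_t *ports, int nports,
                           uint32_t total_pools, int port) {
    uint32_t reserved = 0, shared_used = 0;
    for (int i = 0; i < nports; i++) {
        reserved += ports[i].lower_limit;
        if (ports[i].occupied > ports[i].lower_limit)
            shared_used += (uint32_t)(ports[i].occupied - ports[i].lower_limit);
    }
    uint32_t shareable_total = total_pools > reserved ? total_pools - reserved : 0;
    return shared_used >= shareable_total &&
           ports[port].occupied >= ports[port].lower_limit;
}
```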
In some embodiments, the method may further include the steps of:
acquiring the priority of each port in the transmission equipment;
when the current cache occupation of the target port is larger than a preset cache upper limit value, determining a port to be processed in each port;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
Wherein the priority of the port to be processed is lower than that of the target port.
In this embodiment, different priorities may be set for each port, for example, if some ports have a large cache demand, a higher priority may be set, and if some ports have a small cache demand, a lower priority may be set.
When a port with higher priority has a cache burst and needs to occupy a large amount of shared cache, the cache pool occupied by the port with lower priority can be released and returned to the shared cache, and at the moment, the shared cache can allocate more cache pools to the port with higher priority so as to meet the cache requirement of the port with higher priority.
In the embodiment of the application, by setting a priority for each port, some low-priority services can be discarded when cache resources run low, ensuring that high-priority services obtain cache.
In some embodiments, the method may further include the steps of:
when the current cache occupation of the target port is larger than a preset cache upper limit value, discarding a service message currently received by the target port;
and returning the cache pool for caching the service message in the target port to the shared cache.
In this embodiment, when the current cache occupancy of the target port is greater than the preset cache upper limit value, the current service message may be discarded and all cache pools currently allocated to the target port released and returned to the shared cache.
By setting the preset cache upper limit value, the embodiment of the application can, when a port bursts and occupies a large amount of the shared cache, discard that port's services and release the cache pools allocated to it, avoiding the situation where one port occupies most of the shared cache and other services have none.
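A small sketch of this discard path, under the same assumed types; `pkt_pools` is taken to be the list of pools recorded for the dropped packet:

```c
/* Drop a bursting port's current packet and hand every pool caching
 * that packet back to the shared cache. */
static void drop_burst(pool_fifo_t *q, port_cfg_t *p,
                       const uint16_t *pkt_pools, uint16_t n) {
    for (uint16_t i = 0; i < n; i++) {
        fifo_put(q, pkt_pools[i]); /* pool returns to the free queue */
        p->occupied--;
    }
    /* the service message itself is discarded by the caller */
}
```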
Fig. 4 is a flowchart illustrating a second embodiment of a cache allocation method provided in the embodiment of the present application, and as shown in fig. 4, the method may include the following steps:
s401, port data input and requesting a cache pool;
s402, determining the current idle buffer pool, the current port occupied buffer pool number, the maximum port occupied buffer pool number and the minimum port occupied buffer pool number in the first-in first-out queue;
s403, distributing cache data of a cache pool;
s404, buffer overflow and read data release of the buffer pool.
Whether a cache pool needs to be allocated to the port to cache the data is determined from the currently idle cache pools, the number of cache pools the port currently occupies, and the maximum and minimum numbers of cache pools the port may occupy. When the port bursts heavily and its cache occupancy is large, the whole data packet can be discarded and all cache pools occupied by the current packet released.
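Tying the four steps together, one possible per-packet flow is sketched below from the helpers above; the drop-on-partial-grant policy and all names are assumptions of this example.

```c
/* Illustrative end-to-end handling of one incoming packet (fig. 4).
 * `need` is the number of pools the packet requires at the configured
 * pool size; returns true when the packet was cached. */
static bool on_packet(pool_fifo_t *q, port_cfg_t *ports, int nports,
                      int port, uint16_t need, uint32_t total_pools) {
    /* S402: shared space exhausted and port over its reserve -> drop */
    if (should_discard(ports, nports, total_pools, port))
        return false;

    /* S403: request pools for the packet */
    uint16_t got = allocate_pools(q, ports, nports, port, need);
    if (got < need) {
        /* burst too large: the whole packet is dropped and the pools
         * recorded during allocation would be released via drop_burst */
        return false;
    }
    return true;  /* S404 runs later: pools are released after reading */
}
```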
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of a first embodiment of a cache allocation apparatus provided in the embodiment of the present application, and as shown in fig. 5, the cache allocation apparatus includes a de-aggregation module 51, a storage module 52, a flow control processing module 53, a cache allocation module 54, a cache release module 55, and a data reading module 56.
The de-aggregation module 51 is configured to determine a port number of the data packet through de-aggregation. The storage module 52 is used for storing the data packet according to the port number. The flow control processing module 53 is configured to start when a burst of a port reaches a certain amount, control a data inflow, and coordinate a flow rate of the port. The buffer allocation module 54 is used for allocating the port buffer pool. The buffer release module 55 is configured to return the buffer pool after the data reading to the shared buffer. The data reading module 56 is configured to read data cached in the cache pool.
In this embodiment, when a service message is received, de-aggregation is performed first to determine the port number corresponding to the message. The current occupied capacity, the maximum and minimum available cache capacities and the remaining amount of cache for the corresponding port are obtained, the cache allocation mechanism decides whether a cache pool needs to be allocated to store the received message, and the sequence numbers of the allocated cache pools, the number of caches occupied by the port and the number of available caches are recorded. With a small cache capacity, a bursting service can occupy a large amount of cache, leave some ports without cache and cause all of their data to be discarded; a flow control mechanism is therefore introduced, which starts when a port's burst reaches a certain amount and dynamically coordinates the port's flow.
The cache release module 55 releases cache in two main cases. First, when an over-large burst makes a port's cache occupancy too high, the whole Ethernet data packet is discarded and the cache occupied by the current packet is released completely. Second, during data reading, each cache pool is released as soon as it has been read, forming idle cache pools.
Fig. 6 is a schematic structural diagram of a cache allocation apparatus according to an embodiment of the present application, where the apparatus may be integrated in a switch, or may be independent of the switch and cooperate with the switch to implement the technical solution. As shown in fig. 6, the buffer allocation apparatus 60 includes a determining module 61, an obtaining module 62, and an allocating module 63.
The determining module 61 is configured to determine a target port of the received service packet according to the received service packet. The obtaining module 62 is configured to obtain a preset upper buffer value, a preset lower buffer value, and a current buffer occupancy of the target port. The allocation module 63 is configured to allocate a target number of cache pools from the shared cache to the target port according to the current cache occupancy, the preset cache upper limit value, and the preset cache lower limit value.
The transmission device includes two or more ports. The shared cache is used for cache sharing among the ports and includes at least one cache pool for caching service messages.
In some embodiments, the allocating module 63 may specifically be configured to:
determining the number of cache pools currently occupied by the target port according to the current cache occupation;
if the number is equal to the preset upper limit value of the cache, stopping distributing the cache pool from the shared cache to the target port;
if the number is smaller than the preset cache upper limit value, the sum of the preset cache lower limit values of all the ports is obtained;
determining the target number according to the sum and the number of the remaining cache pools of the shared cache;
and allocating a target number of cache pools from the shared cache to the target ports.
In some embodiments, the apparatus further includes a buffer pool dividing module, configured to:
dividing a shared cache into more than two cache pools and storing the cache pools into a preset queue;
and identifying each cache pool, and determining the sequence number of each cache pool, wherein the sequence number is used for indicating the sequence of the cache pools in the preset queue.
In some embodiments, the apparatus further includes a selecting module, configured to select a target number of buffer pools from the preset queue to allocate to the target port according to an order of each buffer pool in the preset queue.
In some embodiments, the apparatus further comprises a reading module configured to:
acquiring a serial number of a cache pool allocated to a target port;
and reading the service message cached in the cache pool allocated to the target port according to the sequence number.
In some embodiments, the apparatus further includes a store-back module, configured to store the cache pool back into a preset queue after the service packet cached in the cache pool is read.
In some embodiments, the apparatus further comprises a release module configured to:
acquiring a current cache occupation value and a preset cache lower limit value of each port in transmission equipment;
determining the total amount of the sharable cache pool according to the total amount of the cache pool and the sum of the preset cache lower limit values of all the ports;
determining the number of sharable cache pools occupied by each port according to its current cache occupation value and its preset cache lower limit value;
when the total amount of the sharable cache pools is less than the sum of the number of the sharable cache pools occupied by each port, determining the port of which the current cache occupation value is greater than or equal to a preset cache lower limit value as a port to be processed;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
In some embodiments, the apparatus further comprises a priority processing module configured to:
acquiring the priority of each port in the transmission equipment;
when the current cache occupation of the target port is larger than a preset cache upper limit value, determining a port to be processed in each port;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
Wherein the priority of the port to be processed is lower than that of the target port.
In some embodiments, the apparatus further comprises a discarding module configured to:
when the current cache occupation of the target port is larger than a preset cache upper limit value, discarding the service message currently received by the target port;
and returning the cache pool for caching the service message in the target port to the shared cache.
The apparatus provided in the embodiment of the present application may be used to execute the method in the above-described embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
It should be noted that the division of the above apparatus into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity or physically separated. The modules may all be implemented as software invoked by a processing element, or entirely in hardware, or partly as software called by a processing element and partly as hardware. For example, the determining module may be a separately arranged processing element, or may be integrated in a chip of the apparatus, or may be stored in a memory of the apparatus as program code whose function is invoked and executed by a processing element of the apparatus; the other modules are implemented similarly. In addition, the modules may be integrated together in whole or in part, or implemented independently. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 7 is a schematic structural diagram of a transmission device according to an embodiment of the present application. As shown in fig. 7, the transmission device 70 includes: at least one processor 71, a memory 72, a bus 73, and a communication interface 74.
Wherein: the processor 71, the communication interface 74 and the memory 72 communicate with each other via a bus 73.
The communication interface 74 is used for communication with other devices, including data transmission.
The processor 71 is configured to execute the instructions stored in the memory, and may specifically execute the relevant steps in the method described in the above embodiments. In particular, the instructions may comprise program code.
The processor 71 may be a central processing unit, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The transmission device 70 includes one or more processors.
A memory 72 for storing instructions. The memory may comprise high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
The present embodiment also provides a readable storage medium storing instructions; when at least one processor of the transmission device executes these instructions, the transmission device performs the cache allocation method provided in the various embodiments above.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and succeeding related objects are in a relationship of "division". "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for convenience of description and distinction and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A cache allocation method, applied to a transmission device in communication, wherein the transmission device transmits two or more low-rate services in one channel, and the method comprises:
determining a target port of the received service message according to the received service message;
acquiring a preset cache upper limit value, a preset cache lower limit value and current cache occupation of the target port;
and allocating a target number of cache pools from a shared cache to the target port according to the current cache occupation, the preset cache upper limit value and the preset cache lower limit value, wherein the shared cache is used for cache sharing among the ports and comprises two or more cache pools for caching service messages.
2. The method of claim 1, wherein allocating a target number of cache pools from a shared cache to the target port according to the current cache occupancy, a preset cache upper limit value and a preset cache lower limit value comprises:
determining the number of the cache pools currently occupied by the target port according to the current cache occupation;
if the number is equal to the preset cache upper limit value, stopping allocating a cache pool from the shared cache to the target port;
if the number is smaller than the preset cache upper limit value, acquiring the sum of the preset cache lower limit values of all the ports;
determining the target number according to the sum and the number of cache pools currently remaining in the shared cache;
and allocating the target number of cache pools from the shared cache to the target port.
3. The method of claim 1, further comprising:
dividing the shared cache into more than two cache pools and storing the cache pools into a preset queue;
and identifying each cache pool, and determining the sequence number of each cache pool, wherein the sequence number is used for indicating the sequence of the cache pools in the preset queue.
4. The method of claim 3, further comprising:
and selecting a target number of cache pools from the preset queue to allocate to the target port according to the order of the cache pools in the preset queue.
5. The method according to any one of claims 1-4, further comprising:
acquiring a sequence number of a cache pool allocated to the target port;
and reading the service message cached in the cache pool allocated to the target port according to the sequence number.
6. The method of claim 5, further comprising:
and after the service message cached in the cache pool is read, storing the cache pool back to the preset queue.
7. The method according to any one of claims 1-4, further comprising:
acquiring a current cache occupation value and a preset cache lower limit value of each port in the transmission equipment;
determining the total amount of the sharable cache pool according to the total amount of the cache pool and the sum of the preset cache lower limit values of the ports;
determining the number of sharable cache pools occupied by each port according to its current cache occupation value and its preset cache lower limit value;
when the total amount of the sharable cache pools is less than the sum of the number of the sharable cache pools occupied by each port, determining the port of which the current cache occupation value is greater than or equal to the preset cache lower limit value as a port to be processed;
discarding the service message received by the port to be processed;
and storing the cache pool allocated to the port to be processed back to the shared cache.
8. The method according to any one of claims 1-4, further comprising:
when the current cache occupation of the target port is larger than the preset cache upper limit value, discarding the service message currently received by the target port;
and returning the cache pool caching the service message in the target port to the shared cache.
9. A cache allocation apparatus, comprising:
the determining module is used for determining a target port of the received service message according to the received service message;
the acquisition module is used for acquiring a preset cache upper limit value, a preset cache lower limit value and current cache occupation of the target port;
and the allocation module is used for allocating a target number of cache pools from a shared cache to the target port according to the current cache occupation, the preset cache upper limit value and the preset cache lower limit value, wherein the shared cache is used for cache sharing among the ports and comprises at least one cache pool for caching service messages.
10. A transmission apparatus, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores execution instructions;
the processor executes execution instructions stored by the memory to implement the method of any of claims 1-8.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210387595.5A | 2022-04-14 | 2022-04-14 | Cache allocation method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115051958A (en) | 2022-09-13 |
Family
ID=83157176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210387595.5A Pending CN115051958A (en) | 2022-04-14 | 2022-04-14 | Cache allocation method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115051958A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1798094A (en) * | 2004-12-23 | 2006-07-05 | 华为技术有限公司 | Method of using buffer area |
CN101364948A (en) * | 2008-09-08 | 2009-02-11 | 中兴通讯股份有限公司 | Method for dynamically allocating cache |
CN101873269A (en) * | 2010-06-24 | 2010-10-27 | 杭州华三通信技术有限公司 | Data retransmission device and method for distributing buffer to ports |
CN105610729A (en) * | 2014-11-19 | 2016-05-25 | 中兴通讯股份有限公司 | Buffer allocation method, buffer allocation device and network processor |
CN105812285A (en) * | 2016-04-29 | 2016-07-27 | 华为技术有限公司 | Port congestion management method and device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115878334A (en) * | 2023-03-08 | 2023-03-31 | 深圳云豹智能有限公司 | Data caching processing method and system, storage medium and electronic equipment |
CN115878334B (en) * | 2023-03-08 | 2023-05-12 | 深圳云豹智能有限公司 | Data caching processing method and system, storage medium and electronic equipment thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9225668B2 (en) | Priority driven channel allocation for packet transferring | |
US9571402B2 (en) | Congestion control and QoS in NoC by regulating the injection traffic | |
US20140036680A1 (en) | Method to Allocate Packet Buffers in a Packet Transferring System | |
CN111327391B (en) | Time division multiplexing method, device, system and storage medium | |
CN109660376B (en) | Virtual network mapping method, equipment and storage medium | |
CN102904835B (en) | System bandwidth distribution method and device | |
JP4408375B2 (en) | System, method and logic for short round robin scheduling in fast switching environment | |
CN107404443B (en) | Queue cache resource control method and device, server and storage medium | |
EP0666665A2 (en) | Method and apparatus for dynamically determining and allocating shared resource access quota | |
CN108984280B (en) | Method and device for managing off-chip memory and computer-readable storage medium | |
CN111836370B (en) | Resource reservation method and equipment based on competition | |
CN111857992B (en) | Method and device for allocating linear resources in Radosgw module | |
WO2023184991A1 (en) | Traffic management and control method and apparatus, and device and readable storage medium | |
CN115051958A (en) | Cache allocation method, device and equipment | |
JP2004242333A (en) | System, method, and logic for managing memory resources shared in high-speed exchange environment | |
CN101527686A (en) | Method of data exchange and equipment | |
JP4408376B2 (en) | System, method and logic for queuing packets to be written to memory for exchange | |
US20230117851A1 (en) | Method and Apparatus for Queue Scheduling | |
CN115378885B (en) | Virtual machine service network bandwidth management method and device under super fusion architecture | |
CN114338559B (en) | Message order preserving method and device | |
US9846658B2 (en) | Dynamic temporary use of packet memory as resource memory | |
CN110955522B (en) | Resource management method and system for coordination performance isolation and data recovery optimization | |
CN114090199A (en) | Multi-tenant application program isolation method and device based on SOC intelligent network card | |
CN116954874A (en) | Resource allocation method, device, equipment and storage medium | |
CN113438185A (en) | Bandwidth allocation method, device and equipment |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220913 |