
CN117608846A - Storage resource processing method and DPU - Google Patents


Info

Publication number
CN117608846A
Authority
CN
China
Prior art keywords
resource
storage
linked list
target
ddr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311629298.8A
Other languages
Chinese (zh)
Inventor
范东生 (Fan Dongsheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202311629298.8A priority Critical patent/CN117608846A/en
Publication of CN117608846A publication Critical patent/CN117608846A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5022 - Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a storage resource processing method and a DPU. The method, executed in hardware, comprises the following steps: obtaining the storage resource type corresponding to a current resource allocation request, and querying whether the local on-chip cache unit corresponding to that storage resource type currently contains the target storage resource for the resource allocation request; if the on-chip cache unit does not contain the target storage resource, determining whether the current read/write state of the external DDR is a resource-readable state and, if so, obtaining the target storage resource from the DDR, allocating it to the requesting end of the resource allocation request, and updating the read/write state of the DDR. With the method and the device, storage resources can be provided to a requester in time when hardware storage resources are requested, and the reliability, scalability, and processing efficiency of hardware-based storage resource allocation can be effectively improved.

Description

Storage resource processing method and DPU
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a storage resource processing method and a DPU.
Background
With the rapid development of technology, the speed at which data is generated has exceeded the processing capacity of conventional equipment. For a data-centered data center, CPU performance limits the speed at which data can grow. The DPU (Data Processing Unit) emerged in response: infrastructure operations can be offloaded from the CPU to the DPU, combining software definition with hardware acceleration in areas such as security, communication, storage, and virtualization, releasing CPU resources and better supporting application requirements.
At present, when storage resources in hardware such as a DPU are requested, the existence of idle storage resources cannot be guaranteed in real time, so storage resources cannot be provided to the requester in time. As a result, the reliability of storage resource allocation in hardware such as the DPU is low, its flexibility is poor, and the processing-efficiency requirements for storage resources cannot be met.
Disclosure of Invention
In view of this, embodiments of the present application provide a storage resource processing method and a DPU to obviate or ameliorate one or more of the disadvantages of the prior art.
One aspect of the present application provides a storage resource processing method, executed in hardware, including:
acquiring a storage resource type corresponding to a current resource allocation request, and inquiring whether a target storage resource corresponding to the resource allocation request is currently contained in a local on-chip cache unit corresponding to the storage resource type;
if the on-chip cache unit does not contain the target storage resource, judging whether the current read-write state of the external DDR is in a resource readable state, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read-write state of the DDR.
In some embodiments of the present application, the obtaining a storage resource type corresponding to a current resource allocation request, querying whether a target storage resource corresponding to the resource allocation request is currently included in a local on-chip cache unit corresponding to the storage resource type, includes:
acquiring a storage resource type corresponding to a current resource allocation request;
if the storage resource type is a storage page resource, inquiring whether a storage page index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not;
if the storage resource type is a linked list resource, inquiring whether a linked list index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request.
In some embodiments of the present application, the storage page index cache includes: a distribution storage page index cache and a recovery storage page index cache;
correspondingly, if the storage resource type is a storage page resource, querying whether a storage page index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not includes:
if the storage resource type is storage page resource, judging whether the number of the storage page index addresses currently stored in the distribution storage page index cache is smaller than a first threshold value according to a distribution page index counter corresponding to the distribution storage page index cache, and if so, inquiring whether the recovery storage page index cache contains target storage page resources corresponding to the resource allocation request.
In some embodiments of the present application, the linked list index cache includes: a distribution linked list index cache and a recovery linked list index cache;
correspondingly, if the storage resource type is a linked list resource, querying whether a linked list index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not includes:
if the storage resource type is a linked list resource, judging whether the number of the linked list index addresses currently stored in the distributed list index buffer is smaller than a second threshold value according to a distributed list index counter corresponding to the distributed list index buffer, and if yes, inquiring whether the recovery linked list index buffer contains the target linked list resource corresponding to the resource allocation request.
In some embodiments of the present application, if the on-chip cache unit does not include the target storage resource, determining whether an external DDR current read/write state is in a resource readable state, if yes, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read/write state of the DDR, including:
if the recovered storage page index cache does not contain the target storage page resource, judging whether a read pointer corresponding to a current storage page index address in the external DDR is smaller than a write pointer corresponding to the storage page index address, if so, acquiring the target storage page index address from the DDR to allocate the target storage page index address to a request end of the resource allocation request, and updating the read pointer corresponding to the storage page index address in the DDR.
In some embodiments of the present application, if the on-chip cache unit does not include the target storage resource, determining whether an external DDR current read/write state is in a resource readable state, if yes, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read/write state of the DDR, including:
if the recovery linked list index cache does not contain the target linked list resource, judging whether a read pointer corresponding to a current linked list index address in an external DDR is smaller than a write pointer corresponding to the linked list index address, if so, acquiring the target linked list index address from the DDR to distribute the target linked list index address to a request end of the resource distribution request, and updating the read pointer corresponding to the linked list index address in the DDR.
In some embodiments of the present application, further comprising:
if the number of the storage page index addresses currently stored in the distribution storage page index cache is judged to be equal to or larger than the first threshold value, the target storage page index address is read from the distribution storage page index cache so as to be distributed to the request end of the resource distribution request;
and if the recovery storage page index cache is judged to contain the target storage page resources corresponding to the resource allocation request, reading the target storage page index address from the recovery storage page index cache to allocate the target storage page index address to a request end of the resource allocation request.
In some embodiments of the present application, further comprising:
if the number of the linked list index addresses currently stored in the distribution linked list index cache is judged to be equal to or larger than the second threshold value, the target linked list index address is read from the distribution linked list index cache so as to be allocated to the request end of the resource allocation request;
and if the recovery linked list index cache is judged to contain the target linked list resource corresponding to the resource allocation request, reading the target linked list index address from the recovery linked list index cache to allocate the target linked list index address to a request end of the resource allocation request.
In some embodiments of the present application, further comprising:
if the current to-be-deleted resource identifier is obtained, extracting the to-be-recovered storage resource in the identifier chain table corresponding to the resource identifier;
if the type of the storage resource to be recycled is a storage page index address, judging whether the number of idle storage bits in the recycled storage page index cache is smaller than a third threshold value, if so, storing the storage page index address to be recycled into the DDR, and updating a write pointer corresponding to the storage page index address in the DDR; if not, storing the storage page index address to be recycled to the recycling storage page index cache;
if the type of the storage resource to be recycled is a linked list index address, judging whether the number of idle storage bits in the recycling linked list index cache is smaller than a fourth threshold value, if so, storing the linked list index address to be recycled into the DDR, and updating a write pointer corresponding to the linked list index address in the DDR; if not, the linked list index address to be recovered is stored in the recovery linked list index cache.
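The recycling branch above can be sketched in C for the storage page case (the linked list case is symmetric, using the fourth threshold). This is a minimal software model under assumed cache depths and widths; only the recycle-cache occupancy test, the third threshold, and the DDR write-pointer update mirror the text.

```c
#include <stdint.h>
#include <stdbool.h>

#define RECYCLE_DEPTH 16   /* illustrative on-chip recycle cache depth */
#define DDR_SLOTS     1024 /* illustrative DDR region size             */

typedef struct {
    uint32_t recycle_fifo[RECYCLE_DEPTH];
    uint32_t used;              /* entries currently held on chip          */
    uint32_t third_threshold;   /* minimum free slots before spilling      */
    uint32_t ddr[DDR_SLOTS];    /* external DDR region for spilled indexes */
    uint64_t ddr_wr;            /* DDR write pointer                       */
} recycler_t;

/* Spill the freed page index address to DDR (advancing the DDR write
 * pointer) when the recycle cache's free slots drop below the third
 * threshold; otherwise keep it in the on-chip recycle cache.
 * Returns true when the index was written to DDR. */
bool recycle_page(recycler_t *r, uint32_t page_idx) {
    uint32_t free_slots = RECYCLE_DEPTH - r->used;
    if (free_slots < r->third_threshold) {
        r->ddr[r->ddr_wr % DDR_SLOTS] = page_idx;
        r->ddr_wr++;                           /* update DDR write pointer */
        return true;
    }
    r->recycle_fifo[r->used++] = page_idx;     /* keep index on chip */
    return false;
}
```

Keeping recently freed indexes on chip lets the allocation path in the earlier steps reuse them without a DDR round trip.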
Another aspect of the present application provides a storage resource processing apparatus, including:
the on-chip cache processing module is used for acquiring a storage resource type corresponding to a current resource allocation request, and inquiring whether a target storage resource corresponding to the resource allocation request is currently contained in a local on-chip cache unit corresponding to the storage resource type;
and the DDR processing module is used for judging whether the current read-write state of the external DDR is in a resource readable state or not if the on-chip cache unit does not contain the target storage resource, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read-write state of the DDR.
The third aspect of the present application further provides a DPU, provided with a storage resource processing device;
the storage resource processing device is used for executing the storage resource processing method.
The storage resource processing method of the present application is executed in hardware. It obtains the storage resource type corresponding to the current resource allocation request and queries whether the local on-chip cache unit corresponding to that storage resource type currently contains the target storage resource for the request. If the on-chip cache unit does not contain the target storage resource, it determines whether the current read/write state of the external DDR is a resource-readable state; if so, it obtains the target storage resource from the DDR, allocates it to the requesting end of the resource allocation request, and updates the read/write state of the DDR. In this way, storage resources can be provided to a requester in time when hardware storage resources are requested, the reliability and scalability of hardware-based storage resource allocation can be effectively improved, the processing efficiency of hardware-based storage resources can be effectively improved, and the application range and processing performance of the hardware can be improved, thereby ensuring high performance and completeness inside hardware such as the DPU.
Additional advantages, objects, and features of the application will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present application are not limited to the above-detailed description, and that the above and other objects that can be achieved with the present application will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings are included to provide a further understanding of the application, and are incorporated in and constitute a part of this application. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the application. Corresponding parts in the drawings may be exaggerated, i.e. made larger relative to other parts in an exemplary device actually manufactured according to the present application, for convenience in showing and describing some parts of the present application. In the drawings:
fig. 1 is a schematic flow chart of a first method for processing storage resources according to an embodiment of the present application.
Fig. 2 is a second flowchart of a storage resource processing method according to an embodiment of the present application.
Fig. 3 is a third flowchart of a storage resource processing method according to an embodiment of the present application.
Fig. 4 is a fourth flowchart of a storage resource processing method according to an embodiment of the present application.
FIG. 5 is a schematic diagram of storage resource execution logic in an application example of the present application.
Fig. 6 is a flowchart of a storage resource allocation method in an application example of the present application.
Fig. 7 is a fifth flowchart of a storage resource processing method according to an embodiment of the present application.
Fig. 8 is a flow chart of a storage resource recycling method in an application example of the present application.
Fig. 9 is a schematic structural diagram of a storage resource processing device in an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a DPU in an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the embodiments and the accompanying drawings. The exemplary embodiments of the present application and their descriptions are used herein to explain the present application, but are not intended to be limiting of the present application.
It should be noted here that, in order to avoid obscuring the present application due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present application are shown in the drawings, while other details not greatly related to the present application are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present application will be described with reference to the drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
In order to solve the problems that, when storage resources in hardware such as a DPU (Data Processing Unit) are requested, the existence of idle storage resources cannot be guaranteed in real time, so that storage resources cannot be provided to a requester in time, the embodiments of the present application provide a storage resource processing method, a storage resource processing apparatus for executing the method, and a DPU. By providing storage resources to the requester in time, the reliability and scalability of hardware-based storage resource allocation can be effectively improved.
The following examples are provided to illustrate the invention in more detail.
Based on this, the embodiment of the application provides a storage resource processing method that can be implemented by a storage resource processing device in hardware, referring to fig. 1, the storage resource processing method specifically includes the following contents:
step 100: and acquiring a storage resource type corresponding to the current resource allocation request, and inquiring whether a target storage resource corresponding to the resource allocation request is currently contained in a local on-chip cache unit corresponding to the storage resource type.
In step 100, the storage resource processing device extracts the storage resource types indicated in the resource allocation request after receiving the resource allocation request, where the storage resource types may be set to be multiple according to actual application requirements, and in one or more embodiments of the present application, the storage resource types may at least include a storage page and a linked list.
In one or more embodiments of the present application, hardware such as the DPU is locally provided with an on-chip cache unit as one of the storage resource providing units. To meet the timeliness requirement of storage resource allocation, the embodiment of the present application further uses a DDR memory (double data rate synchronous dynamic random access memory) external to the hardware, such as the DPU, as a second storage resource providing unit, as described in step 200 below.
Step 200: if the on-chip cache unit does not contain the target storage resource, judging whether the current read-write state of the external DDR is in a resource readable state, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read-write state of the DDR.
In one or more embodiments of the present application, the target storage resource refers to an index address for satisfying the resource allocation request. If the storage resource type is a storage page, the target storage resource refers to a target storage page resource, and specifically includes a storage page index address; if the storage resource type is a linked list, the target storage resource refers to a target linked list resource, and specifically includes a linked list index address.
In one or more embodiments of the present application, the storage page index address may be written as a page, and the linked list index address may be written as a line.
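The shorthand above can be captured in a small C sketch as a reading aid; the enum, struct, and 32-bit index width are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

/* Two resource types from the description: a storage page index address is
 * abbreviated "Page" and a linked list index address is abbreviated "Line". */
typedef enum { RES_TYPE_PAGE, RES_TYPE_LINE } resource_type_t;

typedef struct {
    resource_type_t type;   /* selects which on-chip index cache to consult */
    uint32_t index_addr;    /* Page or Line index address (assumed width)   */
} target_resource_t;
```

A resource allocation request would carry the type, and a successful allocation would return the matching index_addr to the requesting end.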
As can be seen from the foregoing description, the storage resource processing method provided by the embodiment of the present application can provide the storage resource to the requesting party in time when the hardware storage resource is requested, so that the reliability and expandability of the allocation of the storage resource based on hardware can be effectively improved, the processing efficiency of the storage resource based on hardware can be effectively improved, the application range and the processing performance of the hardware can be improved, and further, the high performance and completeness inside the hardware such as the DPU can be ensured.
In order to further improve the reliability and expandability of the allocation of the storage resources based on the hardware, in the storage resource processing method provided in the embodiment of the present application, referring to fig. 2, step 100 in the storage resource processing method specifically includes the following:
step 110: acquiring a storage resource type corresponding to a current resource allocation request;
Step 120: if the storage resource type is a storage page resource, inquiring whether a storage page index cache preset in the local on-chip cache unit contains a target storage resource corresponding to the resource allocation request, and if not, executing step 200.
Step 130: if the storage resource type is a linked list resource, inquiring whether a linked list index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request, and if not, executing step 200.
In order to further improve the reliability and effectiveness of the application of the storage page index cache, in the storage resource processing method provided in the embodiment of the present application, the storage page index cache includes: a distribution storage page index cache and a recovery storage page index cache; the distribution storage page index cache may be written as dispatch_page_fifo, and the recovery storage page index cache may be written as recycle_page_fifo.
Referring to fig. 3, step 120 in the storage resource processing method specifically includes the following:
step 121: if the storage resource type is a storage page resource, judging whether the number of the storage page index addresses currently stored in the distribution storage page index cache is smaller than a first threshold according to a distribution page index counter corresponding to the distribution storage page index cache, if so, executing step 122, and if not, executing step 123;
in step 121, the first threshold may be selected to be a positive integer close to 0.
It will be appreciated that the distribution page index counter may be written as dispatch_page_count, and that dispatch_page_count may be pre-initialized before step 100 so that storage page index addresses (Page) are pre-allocated; the counter subsequently accumulates until the storage resources are allocated, and then remains inactive.
Step 122: and inquiring whether the recovery storage page index cache contains the target storage page resource corresponding to the resource allocation request, if so, executing step 124, and if not, executing step 200.
Step 123: and if the number of the storage page index addresses currently stored in the distribution storage page index cache is judged to be equal to or larger than the first threshold value, reading the target storage page index address from the distribution storage page index cache to distribute the target storage page index address to the request end of the resource distribution request.
Step 124: and if the recovery storage page index cache is judged to contain the target storage page resources corresponding to the resource allocation request, reading the target storage page index address from the recovery storage page index cache to allocate the target storage page index address to a request end of the resource allocation request.
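Steps 121 to 124, together with the DDR fallback of step 200, can be sketched in C as follows. This is a minimal software model under assumed FIFO depths and widths; the real design is hardware logic, and only the names dispatch_page_fifo, recycle_page_fifo, dispatch_page_count (modelled here as the FIFO's count field), and the first threshold come from the text.

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_DEPTH 16  /* illustrative on-chip FIFO depth */

/* Minimal on-chip FIFO model (software stand-in for a hardware FIFO). */
typedef struct {
    uint32_t slots[FIFO_DEPTH];
    uint32_t head, tail, count;
} fifo_t;

static bool fifo_push(fifo_t *f, uint32_t v) {
    if (f->count == FIFO_DEPTH) return false;
    f->slots[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

static bool fifo_pop(fifo_t *f, uint32_t *out) {
    if (f->count == 0) return false;
    *out = f->slots[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}

typedef struct {
    fifo_t dispatch_page_fifo;  /* pre-allocated page index addresses */
    fifo_t recycle_page_fifo;   /* recently reclaimed page indexes    */
    uint32_t first_threshold;   /* "a positive integer close to 0"    */
    uint32_t *ddr;              /* external DDR backing pool          */
    uint64_t ddr_rd, ddr_wr;    /* DDR read/write pointers            */
} page_allocator_t;

/* Allocation order: dispatch FIFO while its counter is at or above the
 * first threshold (step 123), then the recycle FIFO (step 124), then DDR
 * while the read pointer is below the write pointer (step 210). */
bool alloc_page(page_allocator_t *a, uint32_t *page) {
    if (a->dispatch_page_fifo.count >= a->first_threshold &&
        fifo_pop(&a->dispatch_page_fifo, page))
        return true;                     /* step 123 */
    if (fifo_pop(&a->recycle_page_fifo, page))
        return true;                     /* step 124 */
    if (a->ddr_rd < a->ddr_wr) {         /* step 210: DDR readable */
        *page = a->ddr[a->ddr_rd++];
        return true;
    }
    return false;                        /* no free page; notify requester */
}
```

The linked list path of steps 131 to 134 follows the same structure with dispatch_line_fifo, the recovery linked list index cache, and the second threshold.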
In order to further improve the application reliability and effectiveness of the linked list index cache, in the storage resource processing method provided in the embodiment of the present application, the linked list index cache includes: a distribution linked list index cache and a recovery linked list index cache; the distribution linked list index cache may be written as dispatch_line_fifo, and the recovery linked list index cache may be written as recovery_line_fifo.
Referring to fig. 3, step 130 in the storage resource processing method specifically includes the following:
step 131: if the storage resource type is a linked list resource, judging whether the number of the linked list index addresses currently stored in the distributed list index buffer is smaller than a second threshold according to the distributed list index counter corresponding to the distributed list index buffer, if yes, executing step 132, and if not, executing step 133.
In step 131, the second threshold may be a positive integer close to 0, where the second threshold may be the same as or different from the first threshold, and may specifically be set according to the actual application requirement.
It will be appreciated that the distribution linked list index counter may be written as dispatch_line_count, and that dispatch_line_count may be pre-initialized before step 100 so that linked list index addresses (Line) are pre-allocated; the counter subsequently accumulates until the storage resources are allocated, and then remains inactive.
Step 132: and inquiring whether the recovery linked list index cache contains target linked list resources corresponding to the resource allocation request, if so, executing step 134, and if not, executing step 200.
Step 133: and if the number of the linked list index addresses currently stored in the distribution linked list index cache is judged to be equal to or greater than the second threshold value, reading the target linked list index address from the distribution linked list index cache to allocate the target linked list index address to a request end of the resource allocation request.
Step 134: and if the recovery linked list index cache is judged to contain the target linked list resource corresponding to the resource allocation request, reading the target linked list index address from the recovery linked list index cache to allocate the target linked list index address to a request end of the resource allocation request.
In order to further improve timeliness, reliability and scalability of allocation of storage page index addresses, referring to fig. 4, if it is found by query in step 122 that the reclaimed storage page index cache does not include the target storage page resource corresponding to the resource allocation request, the execution step 200 specifically includes the following steps:
step 210: if the recovered storage page index cache does not contain the target storage page resource, judging whether a read pointer corresponding to a current storage page index address in the external DDR is smaller than a write pointer corresponding to the storage page index address, if so, acquiring the target storage page index address from the DDR to allocate the target storage page index address to a request end of the resource allocation request, and updating the read pointer corresponding to the storage page index address in the DDR.
It can be understood that if it is judged that the read pointer corresponding to the current storage page index address in the DDR is equal to or greater than the write pointer corresponding to the storage page index address, it indicates that there is no allocable storage page resource in the current DDR, so that a corresponding notification message can be sent to the request end.
In order to further improve the timeliness, reliability, and scalability of linked list index address allocation, referring to fig. 4, if the query in step 132 finds that the recovery linked list index cache does not contain the target linked list resource corresponding to the resource allocation request, step 200 specifically includes the following steps:
step 220: if the recovery linked list index cache does not contain the target linked list resource, judging whether a read pointer corresponding to a current linked list index address in an external DDR is smaller than a write pointer corresponding to the linked list index address, if so, acquiring the target linked list index address from the DDR to distribute the target linked list index address to a request end of the resource distribution request, and updating the read pointer corresponding to the linked list index address in the DDR.
It can be understood that if it is judged that the read pointer corresponding to the current linked list index address in the DDR is equal to or greater than the write pointer corresponding to the linked list index address, it indicates that no linked list resource is available in the current DDR, so that a corresponding notification message can be sent to the request terminal.
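In both fallback paths (steps 210 and 220), the DDR region behaves like a queue of index addresses guarded by a read pointer and a write pointer: an address may be handed out only while the read pointer lags the write pointer. A small Python sketch of that check follows; the function and variable names are illustrative, not the patent's actual interface:

```python
def allocate_from_ddr(ddr_addresses, read_ptr, write_ptr):
    """Return (allocated address, new read pointer), or (None, read_ptr)
    when the DDR holds no allocatable resource and the request end
    should instead receive a notification message."""
    if read_ptr < write_ptr:
        # Readable state: hand out the next index address and advance the pointer
        return ddr_addresses[read_ptr], read_ptr + 1
    # read_ptr >= write_ptr: no allocatable resource in the DDR
    return None, read_ptr
```

The same check serves both storage page index addresses and linked list index addresses, each with its own pair of pointers.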
In order to further explain the execution process of the storage resource allocation, the present application further provides a specific application example of a storage resource allocation method, referring to fig. 5 and fig. 6, where the storage resource allocation method specifically includes the following contents:
Resource allocation: issuing the storage page index address (Page) and issuing the linked list index address (Line).
1. Initialize the distribution page index counter dispatch_page_count and the distribution linked list index counter dispatch_line_count to pre-allocate pages and lines; each counter accumulates with every allocation until the pre-allocated storage resources are exhausted, after which it remains unchanged.
2. When the distribution storage page index cache dispatch_page_fifo is empty, the hardware actively reads a storage page index address (page) from the reclaimed storage page index cache recycle_page_fifo; if recycle_page_fifo is also empty and the DDR read pointer is smaller than the write pointer, the hardware reads the page from the DDR and updates the page read pointer.
3. When the distribution linked list index cache dispatch_line_fifo is empty, the hardware actively reads a linked list index address (line) from the recovery linked list index cache recycle_line_fifo; if recycle_line_fifo is also empty and the DDR read pointer is smaller than the write pointer, the hardware reads the line from the DDR and updates the line read pointer.
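The numbered steps above can be modeled end to end. The sketch below chains the dispatch cache, the recycle cache, and the DDR pointers for the Page path; the class, its one-address-at-a-time refill behavior, and all names are simplifying assumptions made for illustration:

```python
from collections import deque

class PageAllocator:
    def __init__(self, ddr_pages):
        self.dispatch_page_fifo = deque()       # distribution storage page index cache
        self.recycle_page_fifo = deque()        # reclaimed storage page index cache
        self.ddr = list(ddr_pages)              # page index addresses stored in DDR
        self.rd, self.wr = 0, len(ddr_pages)    # DDR read/write pointers for pages

    def refill(self):
        # Step 2: when dispatch_page_fifo is empty, pull from recycle_page_fifo;
        # if that is also empty and rd < wr, read a page from DDR and update rd.
        if not self.dispatch_page_fifo:
            if self.recycle_page_fifo:
                self.dispatch_page_fifo.append(self.recycle_page_fifo.popleft())
            elif self.rd < self.wr:
                self.dispatch_page_fifo.append(self.ddr[self.rd])
                self.rd += 1

    def allocate(self):
        self.refill()
        if self.dispatch_page_fifo:
            return self.dispatch_page_fifo.popleft()
        return None  # no allocatable page anywhere
```

The Line path of step 3 follows the same structure with dispatch_line_fifo, recycle_line_fifo, and the line read pointer.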
In the above application examples, the explanation of each parameter is shown in table 1.
TABLE 1
In order to further improve the effectiveness and reliability of storage resource allocation, the embodiment of the present application also improves the utilization of storage resources and the reliability of allocation by reclaiming storage resources in time. In a storage resource processing method provided in the embodiment of the present application, referring to fig. 7, the method further specifically includes the following contents:
step 300: and if the current to-be-deleted resource identifier is obtained, extracting the to-be-recovered storage resource in the identifier chain table corresponding to the resource identifier.
Step 410: if the type of the storage resource to be reclaimed is a storage page index address, judging whether the number of the idle storage bits in the reclaimed storage page index cache is smaller than a third threshold value, if yes, executing step 420: storing the storage page index address to be recovered into the DDR, and updating a write pointer corresponding to the storage page index address in the DDR; if not, then step 430 is performed: and storing the storage page index address to be recycled to the recycling storage page index cache.
Step 510: if the type of the storage resource to be reclaimed is a linked list index address, judging whether the number of idle storage bits in the reclaimed linked list index cache is smaller than a fourth threshold value, if yes, executing step 520: storing the linked list index address to be recovered into the DDR, and updating a write pointer corresponding to the linked list index address in the DDR; if not, then step 530 is performed: and storing the linked list index address to be recycled into the recycling linked list index cache.
Specifically, the third threshold and the fourth threshold may be positive integers close to 0, and the values of the first threshold to the fourth threshold may be the same or different, and may be specifically set according to actual application requirements.
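The reclamation branches in steps 410 to 430 and 510 to 530 share one pattern: keep the reclaimed address on-chip while the recycle cache has free slots, and spill it to the DDR (advancing the write pointer) when the cache is nearly full. A Python sketch of the page branch follows; the threshold value, capacity parameter, and names are illustrative assumptions:

```python
from collections import deque

THIRD_THRESHOLD = 1  # "a positive integer close to 0", per the text

def reclaim_page(recycle_page_fifo, fifo_capacity, ddr, ddr_write_ptr, page):
    """Return the (possibly advanced) DDR write pointer for page addresses."""
    free_slots = fifo_capacity - len(recycle_page_fifo)
    if free_slots < THIRD_THRESHOLD:
        # Step 420: recycle cache nearly full -> store the page into the DDR
        ddr.append(page)
        return ddr_write_ptr + 1
    # Step 430: otherwise keep the page in the on-chip recycle cache
    recycle_page_fifo.append(page)
    return ddr_write_ptr
```

The linked list branch of steps 510 to 530 is identical in shape, using the recycle linked list index cache and the fourth threshold.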
In order to further illustrate the execution process of the storage resource reclamation, the present application further provides a specific application example of a storage resource reclamation method, referring to fig. 8, where the storage resource reclamation method specifically includes the following contents:
Resource recovery: recovering the storage page index address (Page) and recovering the linked list index address (Line).
1. When the hardware deletes an ID, the lines and pages in the ID linked list are written into the cache;
2. When recycle_page_fifo or recycle_line_fifo is full, the pages or lines are actively stored into the DDR, and the corresponding page or line write pointer is updated.
That is, by designing the distribution and recovery paths for pages and lines in the DPU hardware, storage resources can be supplied to the requester in time whenever hardware storage resources are requested. This effectively improves the reliability and scalability of hardware-based storage resource allocation and the efficiency of hardware-based storage resource processing, broadens the application range and processing performance of the hardware, and thereby ensures the high performance and completeness of hardware such as the DPU.
The present application further provides a storage resource processing device for executing all or part of the storage resource processing method, referring to fig. 9, where the storage resource processing device specifically includes the following contents:
the on-chip cache processing module 10 is configured to obtain a storage resource type corresponding to a current resource allocation request, and query whether a target storage resource corresponding to the resource allocation request is currently included in a local on-chip cache unit corresponding to the storage resource type.
And the DDR processing module 20 is configured to determine whether an external DDR current read/write state is in a resource readable state if the on-chip cache unit does not include the target storage resource, and if so, acquire the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and update the read/write state of the DDR.
The embodiment of the storage resource processing device provided in the present application may be specifically used to execute the processing flow of the storage resource processing method embodiments described above; its functions are not repeated here, and reference may be made to the detailed description of those method embodiments.
As can be seen from the foregoing description, the storage resource processing device provided by the embodiment of the present application can supply storage resources to the requester in time when hardware storage resources are requested. This effectively improves the reliability and scalability of hardware-based storage resource allocation and the efficiency of hardware-based storage resource processing, broadens the application range and processing performance of the hardware, and thereby ensures the high performance and completeness inside hardware such as the DPU.
The embodiment of the application also provides a DPU, see fig. 10, which is provided with a storage resource processing device, wherein the storage resource processing device is used for executing all or part of the content of the storage resource processing method.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. When implemented in hardware, the implementation may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The features described and/or illustrated in this application for one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The foregoing description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and variations may be made to the embodiment of the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A storage resource processing method, characterized by being executed in hardware, comprising:
acquiring a storage resource type corresponding to a current resource allocation request, and inquiring whether a target storage resource corresponding to the resource allocation request is currently contained in a local on-chip cache unit corresponding to the storage resource type;
if the on-chip cache unit does not contain the target storage resource, judging whether the current read-write state of the external DDR is in a resource readable state, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to a request end of the resource allocation request, and updating the read-write state of the DDR.
2. The method for processing storage resources according to claim 1, wherein the obtaining the storage resource type corresponding to the current resource allocation request, querying whether the target storage resource corresponding to the resource allocation request is currently included in a local on-chip cache unit corresponding to the storage resource type, includes:
acquiring a storage resource type corresponding to a current resource allocation request;
if the storage resource type is a storage page resource, inquiring whether a storage page index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not;
if the storage resource type is a linked list resource, inquiring whether a linked list index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request.
3. The storage resource processing method according to claim 2, wherein the storage page index cache includes: distributing a storage page index cache and recycling the storage page index cache;
correspondingly, if the storage resource type is a storage page resource, querying whether a storage page index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not includes:
if the storage resource type is storage page resource, judging whether the number of the storage page index addresses currently stored in the distribution storage page index cache is smaller than a first threshold value according to a distribution page index counter corresponding to the distribution storage page index cache, and if so, inquiring whether the recovery storage page index cache contains target storage page resources corresponding to the resource allocation request.
4. The storage resource processing method according to claim 3, wherein the linked list index cache includes: a distribution linked list index cache and a recovery linked list index cache;
correspondingly, if the storage resource type is a linked list resource, querying whether a linked list index cache preset in a local on-chip cache unit contains a target storage resource corresponding to the resource allocation request or not includes:
if the storage resource type is a linked list resource, judging whether the number of the linked list index addresses currently stored in the distributed list index buffer is smaller than a second threshold value according to a distributed list index counter corresponding to the distributed list index buffer, and if yes, inquiring whether the recovery linked list index buffer contains the target linked list resource corresponding to the resource allocation request.
5. The method of claim 3, wherein if the on-chip cache unit does not include the target storage resource, determining whether the external DDR current read/write state is in a resource readable state, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to the request end of the resource allocation request, and updating the read/write state of the DDR, includes:
if the recovered storage page index cache does not contain the target storage page resource, judging whether a read pointer corresponding to a current storage page index address in the external DDR is smaller than a write pointer corresponding to the storage page index address, if so, acquiring the target storage page index address from the DDR to allocate the target storage page index address to a request end of the resource allocation request, and updating the read pointer corresponding to the storage page index address in the DDR.
6. The method of claim 4, wherein if the on-chip cache unit does not include the target storage resource, determining whether the external DDR current read/write state is in a resource readable state, if so, acquiring the target storage resource from the DDR to allocate the target storage resource to the request end of the resource allocation request, and updating the read/write state of the DDR, comprises:
if the recovery linked list index cache does not contain the target linked list resource, judging whether a read pointer corresponding to a current linked list index address in an external DDR is smaller than a write pointer corresponding to the linked list index address, if so, acquiring the target linked list index address from the DDR to distribute the target linked list index address to a request end of the resource distribution request, and updating the read pointer corresponding to the linked list index address in the DDR.
7. The storage resource processing method according to claim 3, further comprising:
if the number of the storage page index addresses currently stored in the distribution storage page index cache is judged to be equal to or larger than the first threshold value, the target storage page index address is read from the distribution storage page index cache so as to be distributed to the request end of the resource distribution request;
and if the recovery storage page index cache is judged to contain the target storage page resources corresponding to the resource allocation request, reading the target storage page index address from the recovery storage page index cache to allocate the target storage page index address to a request end of the resource allocation request.
8. The storage resource processing method according to claim 4, further comprising:
if the number of the linked list index addresses currently stored in the distribution linked list index cache is judged to be equal to or larger than the second threshold value, the target linked list index address is read from the distribution linked list index cache so as to be distributed to the request end of the resource distribution request;
and if the recovery linked list index cache is judged to contain the target linked list resource corresponding to the resource allocation request, reading the target linked list index address from the recovery linked list index cache to allocate the target linked list index address to a request end of the resource allocation request.
9. The storage resource processing method according to claim 4, further comprising:
if the current to-be-deleted resource identifier is obtained, extracting the to-be-recovered storage resource in the identifier chain table corresponding to the resource identifier;
if the type of the storage resource to be recycled is a storage page index address, judging whether the number of idle storage bits in the recycled storage page index cache is smaller than a third threshold value, if so, storing the storage page index address to be recycled into the DDR, and updating a write pointer corresponding to the storage page index address in the DDR; if not, storing the storage page index address to be recycled to the recycling storage page index cache;
if the type of the storage resource to be recycled is a linked list index address, judging whether the number of idle storage bits in the recycling linked list index cache is smaller than a fourth threshold value, if so, storing the linked list index address to be recycled into the DDR, and updating a write pointer corresponding to the linked list index address in the DDR; if not, the linked list index address to be recovered is stored in the recovery linked list index cache.
10. A DPU, characterized by comprising a storage resource processing device;
the storage resource processing device is configured to execute the storage resource processing method of any one of claims 1 to 9.
CN202311629298.8A 2023-11-30 2023-11-30 Storage resource processing method and DPU Pending CN117608846A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311629298.8A CN117608846A (en) 2023-11-30 2023-11-30 Storage resource processing method and DPU


Publications (1)

Publication Number Publication Date
CN117608846A 2024-02-27

Family

ID=89949541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311629298.8A Pending CN117608846A (en) 2023-11-30 2023-11-30 Storage resource processing method and DPU

Country Status (1)

Country Link
CN (1) CN117608846A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8266344B1 (en) * 2009-09-24 2012-09-11 Juniper Networks, Inc. Recycling buffer pointers using a prefetch buffer
CN109074313A (en) * 2016-02-03 2018-12-21 斯瓦姆64有限责任公司 Caching and method
CN112835510A (en) * 2019-11-25 2021-05-25 北京灵汐科技有限公司 Method and device for controlling storage format of on-chip storage resource
CN114766090A (en) * 2019-12-25 2022-07-19 华为技术有限公司 Message caching method, integrated circuit system and storage medium
CN115905047A (en) * 2022-12-05 2023-04-04 上海交通大学 Heterogeneous memory management system and method for on-chip stacked memory
CN116775560A (en) * 2023-08-22 2023-09-19 北京象帝先计算技术有限公司 Write distribution method, cache system, system on chip, electronic component and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination