
CN116991761A - Data processing method, device, computer equipment and storage medium - Google Patents

Data processing method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN116991761A
CN116991761A · CN202310752782.3A
Authority
CN
China
Prior art keywords
target data
cache unit
data
cache
storage space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310752782.3A
Other languages
Chinese (zh)
Inventor
余烜
杜洁琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202310752782.3A priority Critical patent/CN116991761A/en
Publication of CN116991761A publication Critical patent/CN116991761A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present disclosure provides a data processing method, apparatus, computer device, and storage medium, where the method includes: in response to receiving target data to be cached, determining a corresponding first cache unit for the target data; the first cache unit carries the target data and defines cache attribute information of the target data; determining an actual storage position of the first cache unit in an actual storage space based on the data amount corresponding to the first cache unit, and recording a virtual storage position of the first cache unit in a virtual storage space in a memory; and storing the first cache unit to the determined actual storage position, and asynchronously storing cache attribute information of the first cache unit in a disk storage space so as to perform data management on target data in the first cache unit based on the cache attribute information.

Description

Data processing method, device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of computer storage, and in particular relates to a data processing method, a data processing device, computer equipment and a storage medium.
Background
For application platforms such as video platforms, fiction-reading platforms and online-education platforms, a resource-presetting approach is often adopted to shorten page loading time and ensure a good user experience, for example by storing data and resources locally in a cache. However, the currently selectable cache schemes cannot solve the problem of poor read/write performance caused by cached data of different sizes, nor the performance degradation that occurs when the cache data cannot be effectively managed and grows beyond the physical memory size.
Disclosure of Invention
The embodiment of the disclosure at least provides a data processing method, a data processing device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a data processing method, including: in response to receiving target data to be cached, determining a corresponding first cache unit for the target data, where the first cache unit carries the target data and defines cache attribute information of the target data; determining an actual storage position of the first cache unit in an actual storage space based on the data amount corresponding to the first cache unit, and recording a virtual storage position of the first cache unit in a virtual storage space in memory, where the actual storage position is located in one of a memory storage space and a disk storage space, the data-amount ranges of the cache units stored in the different storage spaces differ, and the virtual storage position, when read, is used for reading the target data from the corresponding actual storage space according to a preset mapping relationship; and storing the first cache unit to the determined actual storage position, and asynchronously storing the cache attribute information of the first cache unit in the disk storage space, so as to perform data management on the target data in the first cache unit based on the cache attribute information.
In an alternative embodiment, determining a corresponding first cache unit for the target data includes: searching whether a second cache unit matched with the identification information exists in the virtual storage space or not based on the identification information determined by the target data; and in response to the existence of a second cache unit matched with the identification information in the virtual storage space, erasing the historical data stored under the second cache unit, and updating the second cache unit into a first cache unit corresponding to the target data.
In an optional implementation, the operation types for the target data under the first cache unit include a read operation, a write operation and a delete operation, and the method further includes: in response to the operations on the target data under each first cache unit comprising a plurality of consecutive operations, adding the plurality of operations corresponding to the target data under each first cache unit to at least one execution thread based on the ordering among the plurality of operations corresponding to the target data under the same first cache unit; and executing at least one operation on each piece of target data based on the determined at least one execution thread, where write operations and delete operations on different target data in the execution threads are executed asynchronously, and the plurality of operations on the target data under the same first cache unit across the execution threads are executed based on the ordering among those operations.
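By way of illustration only (the Python names and structures below are the editor's assumptions and are not part of the disclosure), the per-unit ordering described above can be sketched as a keyed operation queue: operations on the same cache unit drain in submission order, while units with different keys are independent and could be drained by separate threads:

```python
from collections import defaultdict, deque

class OperationScheduler:
    """Groups operations by cache-unit key: operations on the same unit
    run in their submitted order; different units are independent."""
    def __init__(self):
        self.queues = defaultdict(deque)  # key -> ordered pending operations

    def submit(self, key, op_type, payload=None):
        self.queues[key].append((op_type, payload))

    def drain(self, key, store):
        """Execute all queued operations for one unit, in order.
        In a real system one worker thread would own each key."""
        results = []
        queue = self.queues[key]
        while queue:
            op_type, payload = queue.popleft()
            if op_type == "write":
                store[key] = payload
                results.append(("write", payload))
            elif op_type == "delete":
                store.pop(key, None)
                results.append(("delete", None))
            elif op_type == "read":
                results.append(("read", store.get(key)))
        return results
```

Because each unit's queue preserves submission order, a read queued after a write on the same key always observes that write, while writes to different keys can proceed concurrently.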
In an optional embodiment, executing at least one operation on each piece of target data includes: determining, based on the ordering of the operations to be executed for the target data, the last-executed target operation of a target operation type under that ordering, where the target operation type is a write operation or a delete operation; creating a memory copy for the first cache unit corresponding to the target data, and storing the operation result of the target operation on the target data into the memory copy; and in response to a read operation on the target data in the first cache unit after the target operation, taking the operation result stored in the memory copy corresponding to the first cache unit as the read result of the read operation.
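As an illustrative sketch (assumed function and tuple shapes, not from the patent text), the memory-copy idea above amounts to finding the last mutating operation in the ordered batch, snapshotting its result, and answering later reads from that snapshot instead of from the actual storage space:

```python
def serve_reads_from_copy(ops):
    """ops: ordered list of ('write', value), ('delete', None), ('read', None).
    Returns the results of reads issued after the last mutating operation,
    answered from a memory copy of that operation's result."""
    last_mut = -1
    for i, (op_type, _) in enumerate(ops):
        if op_type in ("write", "delete"):
            last_mut = i  # index of the final write/delete in the ordering

    memory_copy = None  # a delete leaves the unit empty; a write leaves its value
    if last_mut >= 0 and ops[last_mut][0] == "write":
        memory_copy = ops[last_mut][1]

    results = []
    for i, (op_type, _) in enumerate(ops):
        if last_mut >= 0 and op_type == "read" and i > last_mut:
            results.append(memory_copy)  # served from the memory copy
    return results
```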
In an alternative embodiment, when creating a corresponding memory copy for the target data, the method further includes: and asynchronously executing each operation based on the ordering sequence of each operation corresponding to the target data.
In an alternative embodiment, the delete operation or write operation on the target data is performed in the following manner: determining a target virtual storage position matched with the identification information of the target data; and deleting or writing the target data from the actual storage space corresponding to the target virtual storage position according to a preset mapping relation between the actual storage space and the virtual storage space.
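A minimal sketch of the mapping step described above (class and attribute names are the editor's assumptions): the virtual storage space holds only identification info and a pointer to the actual storage space, and deletes/writes resolve through that mapping:

```python
class CacheIndex:
    """Virtual index: maps identification info (a cache key) to an actual
    storage location ('memory' or 'disk'); writes and deletes resolve
    through the key -> location mapping."""
    def __init__(self):
        self.virtual = {}                      # key -> actual location
        self.spaces = {"memory": {}, "disk": {}}

    def write(self, key, location, value):
        self.virtual[key] = location
        self.spaces[location][key] = value

    def delete(self, key):
        # Find the matching virtual position, then erase from the actual space.
        location = self.virtual.pop(key, None)
        if location is not None:
            self.spaces[location].pop(key, None)

    def read(self, key):
        location = self.virtual.get(key)
        return None if location is None else self.spaces[location].get(key)
```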
In an alternative embodiment, in response to there being at least two consecutive operations to a read operation of the target data, the read operation of the target data is performed in the following manner: in response to judging that target data under the first cache unit is not read from the virtual storage space under the first reading operation, adding the first reading operation into an execution thread; responding to the received second reading operation, creating a corresponding memory copy for the first cache unit, and storing an operation result of the target data in the memory copy; and determining the operation result stored in the memory copy as an operation result corresponding to the reading operation.
In an optional embodiment, performing data management on the target data in the first cache unit based on the cache attribute information includes: determining the data validity of the target data based on its cache attribute information, where the cache attribute information comprises the storage duration of the target data and/or the storage order of the target data among the plurality of data stored in the virtual storage space, and the cache attribute information corresponding to the target data changes based on operations on the first cache unit corresponding to the target data; and erasing the target data from the first cache unit in response to the data validity indicating, during an operation on the target data, that the target data is invalid.
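The two validity criteria just mentioned (storage duration and storage order) could be combined as in the following sketch; the capacity, the age limit, and all names are illustrative assumptions rather than values from the disclosure:

```python
import time
from collections import OrderedDict

class ManagedCache:
    """Erases units whose storage duration exceeds their life cycle, and
    evicts oldest-first (by storage order) when over capacity."""
    def __init__(self, capacity=2, max_age=3600.0):
        self.capacity = capacity
        self.max_age = max_age
        self.units = OrderedDict()   # key -> (value, created_at), in storage order

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.units.pop(key, None)           # re-inserting refreshes storage order
        self.units[key] = (value, now)
        while len(self.units) > self.capacity:
            self.units.popitem(last=False)  # erase the oldest unit

    def get(self, key, now=None):
        now = time.time() if now is None else now
        item = self.units.get(key)
        if item is None:
            return None
        value, created_at = item
        if now - created_at > self.max_age:  # storage duration exceeded: invalid
            del self.units[key]
            return None
        return value
```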
In a second aspect, an embodiment of the present disclosure further provides a data processing apparatus, including: the determining module is used for determining a corresponding first cache unit for target data in response to receiving the target data to be cached; the first cache unit carries the target data and defines cache attribute information of the target data; the recording module is used for determining the actual storage position of the first cache unit in the actual storage space based on the data quantity corresponding to the first cache unit and recording the virtual storage position of the first cache unit in the virtual storage space in the memory; the actual storage position is positioned in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read; and the storage module is used for storing the first cache unit to the determined actual storage position, and asynchronously storing the cache attribute information of the first cache unit in a disk storage space so as to carry out data management on target data in the first cache unit based on the cache attribute information.
In an alternative embodiment, the determining module is configured to, when determining the corresponding first cache unit for the target data: searching whether a second cache unit matched with the identification information exists in the virtual storage space or not based on the identification information determined by the target data; and in response to the existence of a second cache unit matched with the identification information in the virtual storage space, erasing the historical data stored under the second cache unit, and updating the second cache unit into a first cache unit corresponding to the target data.
In an optional implementation manner, the operation types corresponding to the target data under the first cache unit include a read operation, a write operation and a delete operation; the apparatus further comprises a processing module for: responding to the operation of the target data under each first cache unit to comprise a plurality of continuous operations, and adding the plurality of operations corresponding to the target data under each first cache unit into at least one execution thread based on the ordering sequence among the plurality of operations corresponding to the target data under the same first cache unit; executing at least one operation on each of the target data based on the determined at least one execution thread; the writing operation and the deleting operation of different target data in the execution thread are asynchronously executed; the plurality of operations for the target data under the same first cache unit under the plurality of execution threads are executed based on a sort order between the plurality of operations for the target data.
In an alternative embodiment, the processing module, when performing at least one operation on each of the target data, is configured to: determining a target operation of a target operation type which is executed last under the sequencing order based on the sequencing order of the operations to be executed corresponding to the target data; the target operation type is a write operation or a delete operation; creating a memory copy for a first cache unit corresponding to the target data, and storing an operation result of the target operation instruction on the target data into the memory copy; and responding to the read operation of the target data in the first cache unit after the target operation, and taking the operation result stored in the memory copy corresponding to the first cache unit as the read result of the read operation.
In an alternative embodiment, the processing module, when creating a corresponding memory copy for the target data, is further configured to: and asynchronously executing each operation based on the ordering sequence of each operation corresponding to the target data.
In an alternative embodiment, the delete operation or write operation on the target data is performed in the following manner: determining a target virtual storage position matched with the identification information of the target data; and deleting or writing the target data from the actual storage space corresponding to the target virtual storage position according to a preset mapping relation between the actual storage space and the virtual storage space.
In an alternative embodiment, in response to there being at least two consecutive operations for the read operation of the target data corresponding to the first cache unit, the read operation of the target data corresponding to the first cache unit is performed in the following manner: in response to judging that target data under the first cache unit is not read from the virtual storage space under the first reading operation, adding the first reading operation into an execution thread; responding to the received second reading operation, creating a corresponding memory copy for the first cache unit, and storing an operation result of the target data in the memory copy; and determining the operation result stored in the memory copy as an operation result corresponding to the reading operation.
In an optional implementation manner, when the storage module performs data management on the target data in the first cache unit based on the cache attribute information, the storage module is configured to: determining the data validity of the target data based on the cache attribute information of the target data; the cache attribute information comprises the storage duration of the target data and/or the storage sequence of the target data in a plurality of data stored in a virtual storage space, and the cache attribute information corresponding to the target data is changed based on the operation of a first cache unit corresponding to the target data; and erasing the target data from the first cache unit in response to the data validity of the target data indicating that the target data is invalid in the process of operating the target data.
In a third aspect, an optional implementation of the present disclosure further provides a computer device comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor and the processor is configured to execute them; when the machine-readable instructions are executed by the processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
In a fourth aspect, an alternative implementation of the present disclosure further provides a computer readable storage medium having stored thereon a computer program which when executed performs the steps of the first aspect, or any of the possible implementation manners of the first aspect.
According to the data processing method, apparatus, computer device and storage medium described above, for received target data to be cached, a first cache unit for data storage can be determined for the target data. Once the data amount corresponding to the first cache unit is known, a suitable actual storage position is selected for it in memory or on disk according to the read/write performance of cache data of different sizes in different storage spaces, which solves the problem of poor read/write performance caused by cached data of varying sizes. In addition, the cache attribute information of the first cache unit can be stored asynchronously on disk so that the target data can be managed effectively, preventing the performance degradation that occurs when the cache exceeds the physical memory size.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a data processing method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart provided by embodiments of the present disclosure when performing a write operation;
FIG. 3 illustrates a flow chart provided by an embodiment of the present disclosure when performing a read operation;
FIG. 4 illustrates a flow chart provided by an embodiment of the present disclosure when a delete operation is performed;
FIG. 5 is a schematic diagram of a method for selecting concurrent execution operations according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of performing a write operation and a delete operation using a memory copy according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a read operation performed using a memory copy according to an embodiment of the disclosure;
FIG. 8 illustrates a schematic diagram of reducing write operations or delete operations by omitting operations provided by embodiments of the present disclosure;
FIG. 9 shows a schematic diagram of a data processing apparatus provided by an embodiment of the present disclosure;
fig. 10 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the disclosed embodiments generally described and illustrated herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
According to research, in order to reduce the time-consuming loading caused by frequently loading data on an application platform that pushes data, a cache can be used to store the data and resources locally. Currently selectable cache schemes include, for example, file storage (storing cache data on disk), memory mapping (building a mapping relationship between a virtual storage space and the actual storage space in which the cache data is stored), and database storage (storing cache data on disk via a database). However, these cache schemes cannot solve the problem of poor read/write performance caused by cached data of different sizes, nor the performance degradation beyond the physical memory size caused by the inability to manage the cache data effectively.
Based on the above research, the present disclosure provides a data processing method: for received target data to be cached, a first cache unit for data storage can be determined for the target data, and once the data amount corresponding to the first cache unit is known, a suitable actual storage position is selected for it in memory or on disk according to the read/write performance of cache data of different sizes in different storage spaces, solving the problem of poor read/write performance caused by cached data of varying sizes. In addition, the cache attribute information of the first cache unit can be stored asynchronously on disk so that the target data can be managed effectively, preventing the performance degradation that occurs when the cache exceeds the physical memory size.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, a data processing method disclosed in an embodiment of the present disclosure is first described in detail. The execution body of the data processing method provided in the embodiment of the present disclosure is generally a computer device having a certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and the like. In some possible implementations, the data processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The data processing method provided by the embodiment of the present disclosure is described below. The method can be applied to different application platforms, such as the aforementioned video platform, fiction-reading platform, online-education platform, shopping platform, and the like. On these platforms, there is data suitable for being stored in a cache according to the display requirements of the platform or the information-acquisition requirements of the user; in the embodiments of the present disclosure, this data is referred to as the target data to be cached. The data processing described herein is performed on this target data, so that information is provided to the user in a more timely manner through data caching, bringing a better user experience.
Referring to fig. 1, a flowchart of a data processing method according to an embodiment of the disclosure is shown, where the method includes steps S101 to S103, where:
s101: in response to receiving target data to be cached, determining a corresponding first cache unit for the target data; the first cache unit carries the target data and defines cache attribute information of the target data;
s102: determining an actual storage position of the first cache unit in an actual storage space based on the data amount corresponding to the first cache unit, and recording a virtual storage position of the first cache unit in a virtual storage space in a memory; the actual storage position is located in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read;
S103: and storing the first cache unit to the determined actual storage position, and asynchronously storing cache attribute information of the first cache unit in a disk storage space so as to perform data management on target data in the first cache unit based on the cache attribute information.
Regarding S101, the target data to be cached is described first. The target data may include data that the user should be able to access promptly, such as video category information browsable on a video platform, learning materials such as word books on an online-education platform, or merchandise information related to coupon retrieval on a shopping platform. It may also include the user's personal information, such as the displayed user name and user avatar, so that the client does not need to interact with the platform's server to fetch this data each time it is displayed.
When this target data is cached, a first cache unit for storing it can be determined. The cache unit is a virtual construct: it carries the target data so that the data can be stored independently, and it can be used to define the cache attribute information of the target data.
Here, the cache attribute information records data related to the caching of the target data, such as the creation time of the cache, the maximum life cycle of the target data, the cache location of the target data, the type of the stored target data, and the data validity of the target data.
The cache attribute information can be used to manage the target data: for example, it can record the current storage location of the target data for lookup, or determine from the cache creation time and the maximum life cycle whether the currently stored target data has exceeded its storage period, thereby determining whether the data is still valid and whether a management operation such as deletion should be performed.
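The attribute record described above might be modeled as follows; the field names and default values are the editor's assumptions for illustration, not definitions from the disclosure:

```python
import time
from dataclasses import dataclass, field

@dataclass
class CacheAttributes:
    """Per-unit cache attribute information (illustrative fields)."""
    key: str                      # identification info of the cache unit
    created_at: float = field(default_factory=time.time)  # cache creation time
    max_age: float = 3600.0       # maximum life cycle in seconds (assumed value)
    location: str = "memory"      # current storage location: 'memory' or 'disk'
    data_type: str = "bytes"      # type of the stored target data

    def expired(self, now=None):
        """Data validity check: has the storage period been exceeded?"""
        now = time.time() if now is None else now
        return (now - self.created_at) > self.max_age
```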
Regarding S102, as described above, when the target data is stored, it is the first cache unit carrying the target data that is actually stored. Two issues are considered when storing the first cache unit. First, to reduce frequent read/write operations on the disk, a virtual storage space (also called a memory cache layer) can be added, increasing the cache hit rate and thus reducing frequent disk operations. Second, when selecting the actual caching technique, file-based storage has poor read/write performance for small data, while memory mapping has poor read/write performance for large data; therefore the caching scheme is chosen according to the data amount of the first cache unit, that is, a first cache unit with a larger data amount is stored on disk, and one with a smaller data amount is stored in memory.
Therefore, the storage structure is divided into two parts: the virtual storage space and the actual storage space. The actual storage space comprises a disk storage space and a memory storage space, provided by the disk and the memory respectively. The virtual storage space can be backed by the disk while being used as if it were memory, so data can be read and written quickly through the virtual storage space, reducing frequent read/write operations on the disk.
As for the influence of data size on the storage position, whether the first cache unit is actually stored in the memory storage space or the disk storage space can be judged from the data amount corresponding to the first cache unit. In general, the disk storage space can carry a larger amount of data than the memory storage space. To quantify whether the data amount of the first cache unit counts as large or small, and thus determine the actual storage position, a data-amount threshold such as 10 KB or 20 KB can be set; the data amount of the first cache unit is then compared with this threshold. The threshold can be determined according to device parameters (such as the available capacity of the memory or disk), the chosen operating system, or experience, and is not described further here.
Therefore, after the target data to be cached is received and the corresponding first cache unit is determined, the data amount of the first cache unit can be compared with the chosen threshold, for example 10 KB. If the data amount of the first cache unit is larger than 10 KB, the identification information of the first cache unit is recorded in the virtual storage space, the first cache unit is written into the virtual storage space, and the first cache unit is then actually written into the disk storage space according to the mapping relationship between the virtual storage space and the disk storage space. Similarly, if the data amount of the first cache unit is smaller than 10 KB, its identification information is recorded directly, the unit can be written directly into the memory storage space, and its virtual storage position is recorded in the virtual storage space.
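The placement rule above can be sketched in a few lines; the 10 KB threshold is taken from the example in the text, while the function names and dictionary-based spaces are illustrative assumptions:

```python
SIZE_THRESHOLD = 10 * 1024  # 10 KB, the example threshold from the text

def choose_storage(data: bytes) -> str:
    """Units with a larger data amount go to disk; smaller ones to memory."""
    return "disk" if len(data) > SIZE_THRESHOLD else "memory"

def place_unit(virtual_index: dict, memory_space: dict, disk_space: dict,
               key: str, data: bytes) -> str:
    """Record the virtual position (key -> location) and write the unit
    into the chosen actual storage space."""
    location = choose_storage(data)
    virtual_index[key] = location
    (disk_space if location == "disk" else memory_space)[key] = data
    return location
```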
Here, the identification information is used to label the name of the first cache unit so as to uniquely identify it, and can be represented by a set cache key. Therefore, when the corresponding first cache unit is determined for the target data, the identification information corresponding to the first cache unit is first determined from the target data, and the virtual storage space is then searched for a cache unit marked with the same identification information, that is, the second cache unit described in the embodiments of the present disclosure. If a second cache unit matching the identification information exists, the first cache unit currently being stored replaces the original second cache unit: the history data stored under the second cache unit is deleted, along with the associated data in the actual storage space, so that the original second cache unit is updated to the first cache unit.
That is, in a specific implementation, when determining a corresponding first cache unit for target data, whether a second cache unit matched with the identification information exists in the virtual storage space or not may be searched based on the identification information determined by the target data; and in response to the existence of a second cache unit matched with the identification information in the virtual storage space, erasing the historical data stored under the second cache unit, and updating the second cache unit into a first cache unit corresponding to the target data.
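The replace-on-match behavior can be sketched as below. This is a hedged illustration under assumed names (`virtual_space`, `actual_space`, `store_unit` are hypothetical); the real implementation also erases the associated disk file.

```python
# Hypothetical sketch of erasing a matching "second cache unit" before
# storing the first one under the same cache key.
virtual_space = {}   # cache key -> unit metadata
actual_space = {}    # cache key -> stored bytes (memory or disk)

def store_unit(key: str, data: bytes) -> None:
    """If a unit with the same key already exists, erase its history
    data and its record before storing the new unit."""
    if key in virtual_space:          # a matching second cache unit
        actual_space.pop(key, None)   # erase associated actual data
        del virtual_space[key]        # erase the unit's record
    virtual_space[key] = {"size": len(data)}
    actual_space[key] = data
```

Because the key is unique, at most one unit per key ever exists in the virtual storage space.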
Therefore, for a stored first cache unit, the target data it carries can be obtained by searching the virtual storage space with the identification information. This avoids spending a large amount of time searching for data in a high-capacity disk storage space, and thus improves the read/write speed.
For S103, after the first cache unit is stored in the corresponding actual storage location, the cache attribute information of the first cache unit may also be stored asynchronously. The cache attribute information is specifically used for data management of the target data in the first cache unit.
Specifically, the stored target data has data validity, where the data validity can be judged by the storage duration or the like. For example, if a certain target data has been stored for more than two days and is not called, the target data may be considered to have a low possibility of being called again, the data validity is poor, and the corresponding storage space may be released, so that the new data to be cached may be stored continuously.
Therefore, in implementation, a database stored on the disk may also be introduced to manage the target data in the first cache unit after storage. Specifically, the data validity of the target data may be determined based on the cache attribute information of the target data, where the cache attribute information includes the storage duration of the target data and/or the storage order of the target data among the plurality of data stored in the virtual storage space, and is updated whenever the first cache unit corresponding to the target data is operated on. Then, in the process of operating on the target data, in response to the data validity indicating that the target data is invalid, the target data is erased from the first cache unit.
This is described in detail below. When data management is performed through the database, it is realized by following an elimination (eviction) algorithm of the database. In the embodiments of the present disclosure, the elimination algorithm may include: removing data from the virtual storage space so that only a limited number of entries are stored there; removing data so that the storage space occupied by the stored data does not exceed the storage space threshold of the virtual storage space; and removing data that has expired in the virtual storage space.
Based on the elimination algorithms listed above, the cache attribute information of the target data can be used to decide whether it is data that should be "eliminated". Taking the first elimination algorithm as an example, suppose only n entries may be stored in the virtual storage space, and the target data ranks earliest among the n currently stored entries according to its storage order. When new data is to be stored in the virtual storage space, the target data, having the earliest storage time, is considered invalid under this algorithm and is removed from the virtual storage space to free a storage location for the new data.
The second elimination algorithm is similar in principle to the first and is not repeated here. For the third elimination algorithm, the time at which the cache was created for the first cache unit storing the target data, as described in the above embodiments, can be used to determine the storage duration of the target data; whether the first cache unit has "expired", that is, whether the target data is invalid, is then judged against the maximum life cycle of the cache, and invalid target data is erased from the first cache unit.
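The three elimination policies can be combined in one sketch. This is a hypothetical illustration, assuming an entry-count limit, a total-size limit, and a TTL; the class name, limits, and eviction order are not from the patent.

```python
# Hypothetical sketch of the three elimination policies: a count
# limit, a total-size limit, and a TTL, applied oldest-first.
import time
from collections import OrderedDict

class VirtualSpace:
    def __init__(self, max_entries=3, max_bytes=1024, ttl_seconds=2 * 86400):
        self.entries = OrderedDict()  # key -> (size, created_at)
        self.max_entries = max_entries
        self.max_bytes = max_bytes
        self.ttl = ttl_seconds

    def put(self, key, size, now=None):
        now = time.time() if now is None else now
        self.entries[key] = (size, now)
        self._evict(now)

    def _evict(self, now):
        # Policy 3: remove expired entries first.
        for k in [k for k, (_, t) in self.entries.items() if now - t > self.ttl]:
            del self.entries[k]
        # Policies 1 and 2: drop the earliest-stored entries until
        # both the count limit and the size limit hold.
        while (len(self.entries) > self.max_entries
               or sum(s for s, _ in self.entries.values()) > self.max_bytes):
            self.entries.popitem(last=False)  # earliest stored entry
```

`OrderedDict` preserves insertion order, which is exactly the "storage order" the first policy relies on.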
Therefore, effective management of the data in the virtual storage space can be continuously realized through the database. This prevents the virtual storage space from being unable to store more recent data because too many first cache units, or too much data, have accumulated in it, and reduces the occupation of the limited virtual storage space by data that has been stored for a long time and is rarely read.
Therefore, in addition to the file-storage-based mode and the memory-mapping mode, the management mode of the database is additionally chosen to realize data statistics and the elimination algorithm, so that each part can play to its strength: the file-storage-based mode has higher read/write performance for data with a larger data amount, the memory-mapping mode has higher read/write performance for data with a smaller data amount, and the data statistics and elimination algorithm realized under the database management mode ensure the normal use of the virtual storage space.
Next, a specific operation type corresponding to the target data stored in the first cache unit will be described with respect to the above-described cache method and data management method. In a specific implementation, for the target data in the first cache unit, the corresponding operation types specifically include a write operation, a read operation, and a delete operation.
Taking the delete operation or the write operation as an example, in implementation, a delete or write operation on the target data is performed as follows: determining the target virtual storage position matched with the identification information of the target data; and then deleting the target data from, or writing it to, the actual storage space corresponding to the target virtual storage position according to the preset mapping relation between the actual storage space and the virtual storage space.
For convenience of explanation, the specific steps involved in the three different operation types on the target data are described as follows:
(1) Write operation:
referring to fig. 2, a flowchart of a write operation according to an embodiment of the disclosure specifically includes:
S201: determining whether a second cache unit matched with the identification information of the target data exists in the virtual storage space; if yes, continuing to execute S202; if not, jumping to S203;
S202: erasing the history data stored in the second cache unit and the history data stored in the actual storage space, and continuing to execute S203;
S203: determining a corresponding first cache unit for the target data;
S204: judging whether the data amount corresponding to the first cache unit is larger than the preset data amount threshold; if yes, executing S205; if not, jumping to S206;
S205: storing the first cache unit carrying the target data into the disk storage space through the file read-write queue; continuing to execute S207;
S206: storing the first cache unit carrying the target data into the memory storage space; continuing to execute S207;
S207: updating the cache location field of the first cache unit;
S208: serializing the first cache unit, and recording the virtual storage position of the first cache unit in the virtual storage space;
S209: updating the cache attribute information of the data in the database through an asynchronous thread, and carrying out data management.
The specific process of the above steps may be described in detail in the above embodiments, and the detailed description is not repeated here.
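The write flow S201-S209 can be sketched end to end as below. This is a hedged, simplified stand-in: the stores, the threshold, and the asynchronous attribute update are all assumptions, and the thread is joined only so the sketch is deterministic.

```python
# Hypothetical sketch of the write flow S201-S209; dicts stand in for
# the real memory/disk tiers and the attribute database.
import threading

SIZE_THRESHOLD = 10 * 1024

virtual_space, memory_store, disk_store, db_attrs = {}, {}, {}, {}

def write(key: str, data: bytes) -> None:
    # S201/S202: erase any existing unit with the same key.
    if key in virtual_space:
        memory_store.pop(key, None)
        disk_store.pop(key, None)
    # S203-S206: choose the actual storage tier by data amount.
    location = "disk" if len(data) > SIZE_THRESHOLD else "memory"
    (disk_store if location == "disk" else memory_store)[key] = data
    # S207/S208: record the cache-location field and virtual position.
    virtual_space[key] = {"location": location}
    # S209: update cache attribute info on an asynchronous thread.
    t = threading.Thread(target=db_attrs.__setitem__,
                         args=(key, {"size": len(data)}))
    t.start()
    t.join()  # joined here only to keep the sketch deterministic
```

Rewriting an existing key first clears both tiers, so a unit that shrinks below the threshold migrates cleanly from disk to memory.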
(2) Reading operation:
referring to fig. 3, a flowchart of a read operation according to an embodiment of the disclosure specifically includes:
S301: determining whether a first cache unit matched with the identification information of the target data exists in the virtual storage space; if yes, continuing to execute S302; if not, jumping to S309;
S302: judging, by using the cache attribute information, whether the target data under the first cache unit is valid; if yes, continuing to execute S303, and asynchronously executing S304;
S303: determining, according to the cache location field of the first cache unit, whether the storage location is the disk storage space; if not, jumping to S305; if yes, jumping to S307;
S304: updating the cache access time in the database through an asynchronous thread;
Here, the stored cache access time can be used to judge whether the target data in the stored first cache unit is frequently read. If the target data is frequently read, it can be kept in the cache during management so that it can be conveniently called multiple times; if it is not frequently read, the corresponding cache space can be released according to the elimination algorithm.
S305: determining whether the data field in the memory storage space is not null; if yes, executing S306; if not, executing S309;
Here, a null value ("null") may arise, for example, because the data in the memory cache unit was deleted according to the elimination algorithm, or because a delete operation was performed before the read operation.
S306: reading the target data stored under the first cache unit in the memory storage space; continuing to execute S308;
S307: reading, through the file read-write queue, the target data stored under the first cache unit in the disk storage space;
S308: returning the data;
Here, since the target data is stored in the actual storage space, the data that can be returned is the data indicated by the data field.
S309: returning null data.
Here, the null data is, for example, the "null" described above, indicating that no data is stored or that the data has been erased.
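The read flow S301-S309 can be sketched as follows. This is a hypothetical illustration under the same assumed store names as before; `valid()` stands in for the validity check against the cache attribute information, and the asynchronous access-time update (S304) is omitted for brevity.

```python
# Hypothetical sketch of the read flow S301-S309; None plays the role
# of the "null" data returned in S309.
virtual_space = {}   # key -> {"location": "memory" | "disk"}
memory_store, disk_store, db_attrs = {}, {}, {}

def valid(key: str) -> bool:
    """Stand-in for the validity check on cache attribute info."""
    return db_attrs.get(key, {}).get("valid", True)

def read(key: str):
    unit = virtual_space.get(key)
    if unit is None or not valid(key):   # S301 / S302 -> S309
        return None                      # "null" data
    if unit["location"] == "disk":       # S303 -> S307
        return disk_store.get(key)       # via file read-write queue
    return memory_store.get(key)         # S305/S306; may be None
```

Note that a memory entry already erased by the elimination algorithm naturally falls through to `None`, matching the null-value case described above.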
(3) Deletion operation:
referring to fig. 4, a flowchart of a deletion operation according to an embodiment of the present disclosure specifically includes:
S401: determining whether a first cache unit matched with the identification information of the target data exists in the virtual storage space; if yes, continuing to execute S402; if not, ending;
S402: clearing the first cache unit from the virtual storage space; asynchronously executing S403 and continuing to execute S404;
S403: clearing the cache attribute information stored in the database through an asynchronous thread;
S404: determining whether the target data under the first cache unit is stored in the disk storage space; if yes, executing S405; if not, ending;
S405: deleting the disk file through the file read-write queue.
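The delete flow S401-S405 can be sketched as below. This is a hedged stand-in: the database cleanup that the text performs on an asynchronous thread is shown as a plain call, and dicts again replace the real storage tiers.

```python
# Hypothetical sketch of the delete flow S401-S405.
virtual_space = {}   # key -> {"location": "memory" | "disk"}
memory_store, disk_store, db_attrs = {}, {}, {}

def delete(key: str) -> bool:
    unit = virtual_space.pop(key, None)   # S401/S402: clear the unit
    if unit is None:
        return False                      # no matching unit: end
    db_attrs.pop(key, None)               # S403 (async in the text)
    if unit["location"] == "disk":        # S404/S405
        disk_store.pop(key, None)         # delete the "disk file"
    else:
        memory_store.pop(key, None)
    return True
```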
Under the read, write, and delete operations described in the above embodiments, the following situations may also occur in actual applications: a first cache unit corresponding to one piece of target data may receive a plurality of continuous operations; and, at the same time, cache units corresponding to multiple pieces of data may be operated on.
For the above listed situations, in another embodiment of the present disclosure, for the target data in each first cache unit, a concurrent read/write manner is further provided to improve the data throughput, so as to improve the efficiency during operation, specifically:
responding to the operation of the target data under each first cache unit to comprise a plurality of continuous operations, and adding the plurality of operations corresponding to the target data under each first cache unit into at least one execution thread based on the ordering sequence among the plurality of operations corresponding to the target data under the same first cache unit; executing at least one operation on each of the target data based on the determined at least one execution thread; the writing operation and the deleting operation of different target data in the execution thread are asynchronously executed; the plurality of operations for the target data under the same first cache unit under the plurality of execution threads are executed based on a sort order between the plurality of operations for the target data.
In specific operation, for a plurality of different target data, when a given target data has a plurality of corresponding continuous operations, the sorted order of those operations is determined for each target data according to the arrival order of the operations, so as to obtain an operation queue for the target data under each first cache unit.
When choosing to execute operations concurrently, the following principles are followed: write operations and delete operations on target data under different first cache units may be executed asynchronously, while read operations may be executed synchronously; and the plurality of operations on the target data under the same first cache unit are executed in their sorted order. Under these principles, the operation queues arranged under each target data can be dispatched into a plurality of concurrently executing threads, in which the different operations under each target data are executed. The threads run concurrently, while the operations within each thread are performed sequentially according to their order. After an operation finishes executing in a thread, the corresponding operation is removed from the operation queue.
For example, referring to fig. 5, a schematic diagram of selecting and concurrently executing operations according to an embodiment of the present disclosure is shown, where a first cache unit corresponding to each target data is denoted by key1, key2, etc., and in an operation queue corresponding to each target data, a plurality of operation sequences are specifically arranged according to an order in which the operations are received. For simplicity, a read operation is indicated on the target data by a box labeled "read" in the operation queue, a write operation is indicated on the target data by a box labeled "write" and a delete operation is indicated on the target data by a box labeled "delete" and the order is top-to-bottom.
Correspondingly, for the plurality of different target data indicated in the operation queues, the tasks to be executed can be screened out and added to a plurality of concurrently executing threads, represented as thread 1, thread 2, and so on. With reference to the description of the above embodiment, each operation in an operation queue may be added to one of the concurrently executing threads: for example, in the figure, the read operation from the key1 operation queue and the read operation from the key4 operation queue are executed sequentially under thread 1, while the delete operation from the key2 operation queue is executed concurrently in thread 2 and the read operation from the key3 operation queue in thread 3. For ease of representation, the threads in the figure are labeled with boxes as "read key1", "read key4", "delete key2", and "read key3". Then, operations that finish executing in a thread are removed from the operation queue accordingly.
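The scheduling rule above can be sketched with a per-key lock: operations on the same key execute in their queued order, while different keys run on concurrent threads. This is a hypothetical illustration; the names `run_op`, `run_queues`, and the shared `log` are not from the patent.

```python
# Hypothetical sketch: per-key serialization, cross-key concurrency.
import threading
from collections import defaultdict

key_locks = defaultdict(threading.Lock)  # one lock per cache key
log = []                                 # observed execution order
log_lock = threading.Lock()

def run_op(key: str, op_name: str) -> None:
    with key_locks[key]:                 # serialize ops on one key
        with log_lock:
            log.append((key, op_name))

def run_queues(queues: dict) -> None:
    """queues: key -> ordered list of op names. Each key's queue runs
    on its own thread, so different keys execute concurrently while
    each key's operations keep their sorted order."""
    threads = [threading.Thread(
                   target=lambda k=k, ops=ops: [run_op(k, o) for o in ops])
               for k, ops in queues.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The default-argument binding in the lambda pins each thread to its own key and queue, a common idiom when spawning threads in a loop.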
Based on the above description, in order to ensure read-write safety, the read and write operations corresponding to the target data in the same first cache unit are executed serially. When operations are executed by multiple threads, there may be many operations on the target data in the same first cache unit, and this serial execution mode can take considerable time. Therefore, the embodiments of the present disclosure further provide a way of adding a memory copy, so as to improve the read-write efficiency for the target data under the same first cache unit.
Specifically, when an operation on the target data corresponding to the first cache unit is received, a memory copy is created for the first cache unit. When a write operation or a delete operation on the first cache unit is received, the operation result for the target data indicated by the operation is stored in the memory copy, and the operation result stored in the memory copy can then be read and fed back directly. For a plurality of continuous operations, execution can proceed asynchronously to improve efficiency.
These steps are described below. First, for operations on the target data in each first cache unit, write operations and delete operations can be obtained before the corresponding operations are executed. Therefore, after a write operation or delete operation is received, the operation result corresponding to it can first be stored in the memory copy, and that stored result can be returned directly without waiting for the write or delete operation to finish executing, which reduces the time of subsequent read operations.
Here, since the corresponding operation is continuously acquired for the target data in any first cache unit, in the above process, after receiving the new operation successively, the operation result in the corresponding memory copy will also be changed accordingly, for example, the new data result is written in by the overlay mode, or the deleting operation is performed. I.e. the above process is dynamically changing and not only at a specific moment.
When the corresponding memory copy is created for the first cache unit, its initial operation result may be set to "null". For each received write operation or delete operation, the execution result corresponding to the target data is first updated into the memory copy, and the operations themselves are then executed asynchronously, based on the sorted order of the operations corresponding to the target data in the first cache unit. This asynchronous implementation allows the data storage to proceed in parallel with the execution of the operations, which is more efficient.
The following describes the procedure corresponding to the write operation and the delete operation after the memory copy is created. For received write operations and delete operations, the execution results may be updated to the memory copy first, and then each operation may be asynchronously added to the end of the operation queue to be scheduled to be executed in different threads according to the ordering order in the operation queue. Specific steps herein are detailed in the description of the above embodiments.
Here, in order to distinguish the data result produced by a delete operation from the state before any write operation has been performed, in the embodiments of the present disclosure the data result is set to "0" after a delete operation and to "null" before any write operation. After a write operation or a delete operation finishes, in addition to removing the corresponding operation from the operation queue, the data in the memory copy can be cleared, so that the memory copy does not occupy excessive storage space.
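The "null" / "0" convention can be sketched as below, with Python's `None` playing the role of "null" and the string `"0"` playing the role of the deletion marker. All names (`memory_copy`, `submit_write`, `fast_read`) are illustrative assumptions.

```python
# Hypothetical sketch of the memory-copy convention: None ("null")
# means no pending result, "0" marks a deletion, and any other value
# is a pending write result that a read can return directly.
from collections import deque

memory_copy = {}    # key -> None | "0" | bytes
op_queue = deque()  # (key, op, payload) awaiting async execution

def submit_write(key, data):
    memory_copy[key] = data            # result visible before execution
    op_queue.append((key, "write", data))

def submit_delete(key):
    memory_copy[key] = "0"             # tombstone: data was deleted
    op_queue.append((key, "delete", None))

def fast_read(key):
    """Return the raw memory-copy state: None ("null") means the read
    must fall back to the operation queue; "0" means the data was
    deleted, so "null" can be returned directly; anything else is the
    pending write result."""
    return memory_copy.get(key)
```

A reader thus never waits for a queued write or delete to reach the actual storage space before observing its effect.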
The following description will be made in connection with specific operations. First, for a write operation and a delete operation, referring to fig. 6, a schematic diagram is provided for performing the write operation and the delete operation by using the memory copy according to an embodiment of the disclosure, where:
S601: after a write operation or delete operation on the first cache unit is received, updating the operation result indicated by the operation into the memory copy corresponding to the first cache unit;
S602: asynchronously adding the operation to the tail of the operation queue to wait for execution;
S603: when any sequentially executed operation finishes, removing it from the operation queue;
S604: judging whether the operation queue is empty; if yes, executing S605; if not, jumping to S606;
S605: setting the data result in the memory copy to "null";
Here, in the process of implementing cache optimization with the memory copy, each time an operation in the operation queue is executed, it is deleted from the queue. In addition, after the deletion it is checked whether other unexecuted operations remain in the current queue; if no operation needs to be executed, the data result in the memory copy is set to "null", preventing excess data from occupying the memory.
S606: continuing to execute the next operation in the queue, and then returning to step S604.
For the read operation, the specific steps are as follows: first, query whether an operation result exists in the memory copy. If the queried data result is "null", the read operation is added to the operation queue to wait for execution. If the data result in the memory copy is not "null" and holds actual data other than "0", that actual data can be returned without reading from the actual storage space. If the data result is "0", the data has been deleted, and "null" can be returned directly as the operation result.
In a specific implementation, the following manner may be adopted: in response to judging that target data under the first cache unit is not read from the virtual storage space under the first reading operation, adding the first reading operation into an execution thread; responding to the received second reading operation, creating a corresponding memory copy for the first cache unit, and storing an operation result of the target data in the memory copy; and determining the operation result stored in the memory copy as an operation result corresponding to the reading operation.
Specifically, referring to fig. 7, a schematic diagram of a read operation performed by using a memory copy according to an embodiment of the disclosure is shown, where:
S701: judging whether the data in the memory copy is "null"; if yes, executing S702; if not, jumping to S709;
S702: asynchronously adding the read operation into the operation queue to wait for execution;
S703: judging whether the data stored in the memory copy is "null"; if yes, continuing to execute S704; if not, jumping to S705;
Here, the same check as S701 is repeated for the following reason: if two consecutive read operations are performed on the first cache unit, the first read operation finds the memory copy to be "null" in S701 and is added to the operation queue. When the second read operation is received, the result of the first read is stored into the memory copy, since it can be determined that a second read operation follows the first. Therefore, when the second read operation is executed, the result data is already in the memory copy and does not need to be read from the disk storage space, saving one disk read. For this reason, it is checked again whether available result data exists in the memory copy; if so, the read operation can be deleted and the result data returned, as described below.
S704: deleting the read operation from the operation queue; continuing to execute S706;
S705: reading the result data from the memory copy; continuing to execute S704;
S706: judging whether the operation queue is empty; if yes, executing S707; if not, jumping to S708;
S707: setting the result data in the memory copy to "null"; continuing to execute S711;
S708: updating the result data in the memory copy to the read result data; continuing to execute S711;
S709: judging whether the result data in the memory copy is actual data other than "0"; if yes, continuing to execute S711; if not, executing S710;
S710: determining that the result data is "null"; continuing to execute S711;
S711: returning the determined result data.
On the basis of the above embodiments, if a plurality of consecutive write operations or delete operations are successively added to the operation queue corresponding to the target data of a certain first cache unit, these operations are in effect equivalent to performing only the last operation in the sequence, and the intermediate operations can be omitted.
Thus, in the embodiment of the present disclosure, the simplification process may be performed specifically in the following manner:
determining a target operation of a target operation type which is executed last under the sequencing order based on the sequencing order of the operations to be executed corresponding to the target data; the target operation type is a write operation or a delete operation; creating a memory copy for a first cache unit corresponding to the target data, and storing an operation result of the target operation instruction on the target data into the memory copy; and responding to the read operation of the target data in the first cache unit after the target operation, and taking the operation result stored in the memory copy corresponding to the first cache unit as the read result of the read operation.
For example, referring to fig. 8, a schematic diagram is provided for reducing write operations or delete operations by omitting operations according to an embodiment of the present disclosure. Before processing in the manner described above, the operation queues corresponding to the different target data are shown in fig. 8 under the label "pre-processing"; after the simplification step, the operation queues become those shown under the label "processed".
When simplifying the processing, the first operation under each first cache unit, selected by the dashed frame in fig. 8, is considered an ongoing operation rather than an operation waiting to be processed, and is therefore not deleted during the simplification.
All operations outside the dashed box are considered waiting operations. Among these, where there are consecutive write operations and/or delete operations, only the last one may be reserved. For example, for the operations under the label "key2", namely a write operation and a delete operation, both are reserved, because the waiting operations contain only a single delete operation rather than a plurality of consecutive write and/or delete operations. For the operations under the label "key3", the waiting operations following the ongoing read operation are three consecutive write and delete operations, so only the last delete operation is reserved; the modified queue under "key3" thus contains the ongoing read operation followed by the waiting delete operation.
In addition, if in the pre-processing operation queue a read operation follows a target operation whose type is the last write or delete operation, the operation result stored in the memory copy under that target operation can be directly used as the read result of the read operation. Further, if the target operation is a delete operation, as in the operation queue labeled "key4" before processing in fig. 8, where two read operations follow the last delete operation, then since the result data in the memory copy after the delete operation is "0", the read operations cannot return an actual value and can only yield a "null" result. They therefore need not continue waiting for execution in the queue. Thus, in the post-processing queue labeled "key4", the two read operations that followed the delete operation before processing are no longer retained.
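The simplification rule can be sketched as a pure function over a queue of operation names. This is a hypothetical illustration: within the waiting part of a queue, a run of consecutive write/delete operations collapses to its last member, and reads that would follow a delete are dropped, since they could only return "null".

```python
# Hypothetical sketch of operation-queue simplification; queue[0] is
# the ongoing operation and is never removed.
def simplify(queue):
    if not queue:
        return queue
    ongoing, waiting = queue[0], queue[1:]
    out = []
    for op in waiting:
        if op in ("write", "delete") and out and out[-1] in ("write", "delete"):
            out[-1] = op     # keep only the last of a write/delete run
        elif op == "read" and out and out[-1] == "delete":
            continue         # a read after a delete returns "null"
        else:
            out.append(op)
    return [ongoing] + out
```

The two assertions in the usage correspond to the "key3" and "key4" examples discussed above.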
Thus, according to the description of the above embodiments, for different target data, reads of the actual cache unit can be reduced by the concurrent execution mode combined with the use of the memory copy; in addition, the omission mode is used to simplify the waiting operations in the operation queue, which effectively improves operation performance.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a data processing device corresponding to the data processing method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the data processing method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 9, a schematic diagram of a data processing apparatus according to an embodiment of the disclosure is provided, where the apparatus includes: a determining module 91, a recording module 92, and a storage module 93; wherein,
a determining module 91, configured to determine, in response to receiving target data to be cached, a corresponding first cache unit for the target data; the first cache unit carries the target data and defines cache attribute information of the target data;
a recording module 92, configured to determine an actual storage position of the first cache unit in an actual storage space based on a data amount corresponding to the first cache unit, and record a virtual storage position of the first cache unit in a virtual storage space in a memory; the actual storage position is positioned in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read;
And the storage module 93 is configured to store the first cache unit to the determined actual storage location, and asynchronously store cache attribute information of the first cache unit in a disk storage space, so as to perform data management on target data in the first cache unit based on the cache attribute information.
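The tiered placement performed by the recording and storage modules might be sketched roughly as follows (all names and the size threshold here are assumptions for illustration, not taken from the disclosure): the cache unit is written to the memory space or the disk space according to its data amount, while a virtual position kept in memory records which actual space holds it, so a later read resolves through the virtual position.

```python
# Illustrative sketch (names invented): placing a cache unit in memory
# or on disk according to its data amount, while recording only a
# virtual position that maps to the actual storage space.
SMALL_LIMIT = 4096  # assumed threshold; the disclosure only says the
                    # data-amount ranges of the two spaces differ

memory_space, disk_space, virtual_index = {}, {}, {}

def store_unit(key, unit_bytes):
    target = memory_space if len(unit_bytes) <= SMALL_LIMIT else disk_space
    target[key] = unit_bytes
    # The virtual position records which actual space holds the unit.
    virtual_index[key] = "memory" if target is memory_space else "disk"

def read_unit(key):
    # A read resolves the virtual position to the actual storage space.
    space = memory_space if virtual_index[key] == "memory" else disk_space
    return space[key]

store_unit("k1", b"x" * 10)       # small unit -> memory space
store_unit("k2", b"x" * 10000)    # large unit -> disk space
```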
In an alternative embodiment, the determining module 91 is configured to, when determining the corresponding first cache unit for the target data: searching whether a second cache unit matched with the identification information exists in the virtual storage space or not based on the identification information determined by the target data; and in response to the existence of a second cache unit matched with the identification information in the virtual storage space, erasing the historical data stored under the second cache unit, and updating the second cache unit into a first cache unit corresponding to the target data.
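The lookup-and-reuse behavior of the determining module 91 can be sketched as follows (a hypothetical Python illustration; the class and function names are invented): when a unit matching the identification information already exists in the virtual space, its historical data is erased and the same unit object is rebound to the new target data rather than allocating a fresh unit.

```python
# Hedged sketch of reusing an existing cache unit keyed by its
# identification information.
class CacheUnit:
    def __init__(self, key, data, attrs=None):
        self.key, self.data, self.attrs = key, data, attrs or {}

virtual_space = {}

def unit_for(key, data):
    unit = virtual_space.get(key)
    if unit is not None:
        unit.data, unit.attrs = data, {}   # erase historical data, reuse
        return unit
    unit = CacheUnit(key, data)            # no match: allocate a new unit
    virtual_space[key] = unit
    return unit

u1 = unit_for("k", b"old")
u2 = unit_for("k", b"new")
assert u1 is u2   # same unit object, rebound to the new target data
```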
In an optional implementation manner, the operation types corresponding to the target data under the first cache unit include a read operation, a write operation and a delete operation; the apparatus further comprises a processing module 94 configured to: in response to the operations on the target data under each first cache unit comprising a plurality of consecutive operations, add the plurality of operations corresponding to the target data under each first cache unit into at least one execution thread based on the ordering sequence among the plurality of operations corresponding to the target data under the same first cache unit; and execute at least one operation on each of the target data based on the determined at least one execution thread; wherein the write operations and delete operations of different target data in the execution threads are executed asynchronously, and the plurality of operations on the target data under the same first cache unit across the execution threads are executed based on the ordering sequence among the plurality of operations on the target data.
In an alternative embodiment, the processing module 94, when performing at least one operation on each of the target data, is configured to: determining a target operation of a target operation type which is executed last under the sequencing order based on the sequencing order of the operations to be executed corresponding to the target data; the target operation type is a write operation or a delete operation; creating a memory copy for a first cache unit corresponding to the target data, and storing an operation result of the target operation instruction on the target data into the memory copy; and responding to the read operation of the target data in the first cache unit after the target operation, and taking the operation result stored in the memory copy corresponding to the first cache unit as the read result of the read operation.
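A minimal sketch of the memory-copy read path described above (the structure is assumed, for illustration only): the result of the last write or delete is held in a per-unit memory copy, and a read that follows the target operation returns the copied result instead of re-reading the actual cache unit.

```python
# Hypothetical per-unit memory copy serving reads after a write/delete.
class UnitWithCopy:
    def __init__(self):
        self.stored = None          # value in the actual storage space
        self.memory_copy = None     # ("set", value) once a target op ran

    def write(self, value):
        self.stored = value
        self.memory_copy = ("set", value)   # record the operation result

    def delete(self):
        self.stored = None
        self.memory_copy = ("set", None)    # deleted: the copy holds null

    def read(self):
        # A read after the target operation is answered from the copy,
        # avoiding a read of the actual cache unit.
        if self.memory_copy is not None:
            return self.memory_copy[1]
        return self.stored

u = UnitWithCopy()
u.write("v1")
u.delete()
# a subsequent read is served from the copy: None after the delete
```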
In an alternative embodiment, the processing module 94 is further configured to, when creating a corresponding memory copy for the target data: and asynchronously executing each operation based on the ordering sequence of each operation corresponding to the target data.
In an alternative embodiment, the delete operation or write operation on the target data is performed in the following manner: determining a target virtual storage position matched with the identification information of the target data; and deleting or writing the target data from the actual storage space corresponding to the target virtual storage position according to a preset mapping relation between the actual storage space and the virtual storage space.
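The delete/write path through the preset mapping could look roughly like this (a hedged sketch; the 1:1 slot naming and space names are assumptions): the identification information of the target data resolves to a target virtual position, which maps to an (actual space, slot) pair on which the write or delete is then performed.

```python
# Minimal sketch of the preset mapping between virtual and actual
# storage: identification -> (actual_space, slot).
actual_spaces = {"memory": {}, "disk": {}}
mapping = {}   # the virtual positions

def write_via_mapping(ident, value, space_name="memory"):
    slot = ident                      # assumed 1:1 slot naming
    mapping[ident] = (space_name, slot)
    actual_spaces[space_name][slot] = value

def delete_via_mapping(ident):
    # Resolve the virtual position, then erase from the actual space.
    space_name, slot = mapping.pop(ident)
    actual_spaces[space_name].pop(slot, None)

write_via_mapping("k9", b"payload")
delete_via_mapping("k9")
```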
In an alternative embodiment, in response to there being at least two consecutive read operations on the target data corresponding to the first cache unit, the read operations on the target data corresponding to the first cache unit are performed in the following manner: in response to judging, under the first read operation, that the target data under the first cache unit is not read from the virtual storage space, adding the first read operation into an execution thread; in response to receiving the second read operation, creating a corresponding memory copy for the first cache unit, and storing an operation result of the target data in the memory copy; and determining the operation result stored in the memory copy as the operation result corresponding to the read operation.
In an alternative embodiment, the storage module 93 is configured to, when performing data management on the target data in the first cache unit based on the cache attribute information: determining the data validity of the target data based on the cache attribute information of the target data; the cache attribute information comprises the storage duration of the target data and/or the storage sequence of the target data in a plurality of data stored in a virtual storage space, and the cache attribute information corresponding to the target data is changed based on the operation of a first cache unit corresponding to the target data; and erasing the target data from the first cache unit in response to the data validity of the target data indicating that the target data is invalid in the process of operating the target data.
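The validity check driven by the cache attribute information can be illustrated with a storage-duration (TTL) rule (the 60-second TTL and the attribute layout are assumptions; the disclosure equally allows validity based on the storage order of the data in the virtual storage space):

```python
# Hedged sketch: a unit is invalid once its storage duration exceeds an
# assumed TTL; an invalid unit would then be erased from its cache unit.
import time

def is_valid(attrs, now=None, ttl=60.0):
    """attrs holds the cache attribute information, here just the
    timestamp at which the target data was stored."""
    now = time.time() if now is None else now
    return (now - attrs["stored_at"]) <= ttl

attrs = {"stored_at": 100.0}
# valid 30 s after storage, invalid 90 s after, under a 60 s TTL
```

Because the attribute information is stored asynchronously on disk, such a check can run during any operation on the target data without blocking the write path.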
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
The embodiment of the disclosure further provides a computer device, as shown in fig. 10, which is a schematic structural diagram of the computer device provided by the embodiment of the disclosure, including:
a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20; when the machine-readable instructions are executed by the processor 10, the processor 10 performs the following steps:
in response to receiving target data to be cached, determining a corresponding first cache unit for the target data; the first cache unit carries the target data and defines cache attribute information of the target data; determining an actual storage position of the first cache unit in an actual storage space based on the data amount corresponding to the first cache unit, and recording a virtual storage position of the first cache unit in a virtual storage space in a memory; the actual storage position is positioned in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read; and storing the first cache unit to the determined actual storage position, and asynchronously storing cache attribute information of the first cache unit in a disk storage space so as to perform data management on target data in the first cache unit based on the cache attribute information.
The memory 20 includes an internal memory 210 and an external memory 220; the internal memory 210 is used for temporarily storing operation data in the processor 10 and data exchanged with the external memory 220, such as a hard disk, and the processor 10 exchanges data with the external memory 220 via the internal memory 210.
The specific execution process of the above instructions may refer to the steps of the data processing method described in the embodiments of the present disclosure, which is not described herein.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data processing method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
Embodiments of the present disclosure further provide a computer program product carrying program code, where the instructions included in the program code may be used to perform the steps of the data processing method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not described herein again.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working procedures of the system and apparatus described above, reference may be made to the corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes thereto, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method of data processing, comprising:
in response to receiving target data to be cached, determining a corresponding first cache unit for the target data; the first cache unit carries the target data and defines cache attribute information of the target data;
determining an actual storage position of the first cache unit in an actual storage space based on the data amount corresponding to the first cache unit, and recording a virtual storage position of the first cache unit in a virtual storage space in a memory; the actual storage position is positioned in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read;
and storing the first cache unit to the determined actual storage position, and asynchronously storing cache attribute information of the first cache unit in a disk storage space so as to perform data management on target data in the first cache unit based on the cache attribute information.
2. The method of claim 1, wherein determining a corresponding first cache location for the target data comprises:
searching whether a second cache unit matched with the identification information exists in the virtual storage space or not based on the identification information determined by the target data;
and in response to the existence of a second cache unit matched with the identification information in the virtual storage space, erasing the historical data stored under the second cache unit, and updating the second cache unit into a first cache unit corresponding to the target data.
3. The method according to claim 1 or 2, wherein the operation types corresponding to the target data under the first cache unit include a read operation, a write operation, and a delete operation; the method further comprises the steps of:
in response to the operations on the target data under each first cache unit comprising a plurality of consecutive operations, adding the plurality of operations corresponding to the target data under each first cache unit into at least one execution thread based on the ordering sequence among the plurality of operations corresponding to the target data under the same first cache unit;
Executing at least one operation on each of the target data based on the determined at least one execution thread; the writing operation and the deleting operation of different target data in the execution thread are asynchronously executed; the plurality of operations for the target data under the same first cache unit under the plurality of execution threads are executed based on a sort order between the plurality of operations for the target data.
4. The method of claim 3, wherein said performing at least one operation on each of said target data comprises:
determining a target operation of a target operation type which is executed last under the sequencing order based on the sequencing order of the operations to be executed corresponding to the target data; the target operation type is a write operation or a delete operation;
creating a memory copy for a first cache unit corresponding to the target data, and storing an operation result of the target operation instruction on the target data into the memory copy;
and responding to the read operation of the target data in the first cache unit after the target operation, and taking the operation result stored in the memory copy corresponding to the first cache unit as the read result of the read operation.
5. The method of claim 4, wherein creating a corresponding memory copy for the target data further comprises:
and asynchronously executing each operation based on the ordering sequence of each operation corresponding to the target data.
6. A method according to claim 3, wherein the delete operation or write operation on the target data is performed in the following manner:
determining a target virtual storage position matched with the identification information of the target data;
and deleting or writing the target data from the actual storage space corresponding to the target virtual storage position according to a preset mapping relation between the actual storage space and the virtual storage space.
7. A method according to claim 3, wherein, in response to there being at least two consecutive read operations on the target data, the read operations on the target data are performed in the following manner:
in response to judging that target data under the first cache unit is not read from the virtual storage space under the first reading operation, adding the first reading operation into an execution thread;
responding to the received second reading operation, creating a corresponding memory copy for the first cache unit, and storing an operation result of the target data in the memory copy;
And determining the operation result stored in the memory copy as an operation result corresponding to the reading operation.
8. The method of claim 1, wherein data management of the target data in the first cache unit based on the cache attribute information comprises:
determining the data validity of the target data based on the cache attribute information of the target data; the cache attribute information comprises the storage duration of the target data and/or the storage sequence of the target data in a plurality of data stored in a virtual storage space, and the cache attribute information corresponding to the target data is changed based on the operation of a first cache unit corresponding to the target data;
and erasing the target data from the first cache unit in response to the data validity of the target data indicating that the target data is invalid in the process of operating the target data.
9. A data processing apparatus, comprising:
the determining module is used for determining a corresponding first cache unit for target data in response to receiving the target data to be cached; the first cache unit carries the target data and defines cache attribute information of the target data;
The recording module is used for determining the actual storage position of the first cache unit in the actual storage space based on the data quantity corresponding to the first cache unit and recording the virtual storage position of the first cache unit in the virtual storage space in the memory; the actual storage position is positioned in one storage space of a memory storage space and a disk storage space, and the data volume ranges of the cache units stored in different storage spaces are different; the virtual storage position is used for reading the target data from the corresponding actual storage space according to the preset mapping relation after being read;
and the storage module is used for storing the first cache unit to the determined actual storage position, and asynchronously storing the cache attribute information of the first cache unit in a disk storage space so as to carry out data management on target data in the first cache unit based on the cache attribute information.
10. A computer device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory, wherein the machine-readable instructions, when executed by the processor, perform the steps of the data processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored which, when being run by a computer device, performs the steps of the data processing method according to any one of claims 1 to 8.
CN202310752782.3A 2023-06-25 2023-06-25 Data processing method, device, computer equipment and storage medium Pending CN116991761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310752782.3A CN116991761A (en) 2023-06-25 2023-06-25 Data processing method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310752782.3A CN116991761A (en) 2023-06-25 2023-06-25 Data processing method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116991761A true CN116991761A (en) 2023-11-03

Family

ID=88530983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310752782.3A Pending CN116991761A (en) 2023-06-25 2023-06-25 Data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116991761A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118312603A (en) * 2024-06-07 2024-07-09 广州讯鸿网络技术有限公司 Intelligent multi-mode customer service interaction method and system
CN118312603B (en) * 2024-06-07 2024-08-30 广州讯鸿网络技术有限公司 Intelligent multi-mode customer service interaction method and system

Similar Documents

Publication Publication Date Title
CN101189584B (en) Managing memory pages
CN100481028C (en) Method and device for implementing data storage using cache
US20100146213A1 (en) Data Cache Processing Method, System And Data Cache Apparatus
CN105224528B (en) Big data processing method and device based on graph calculation
CN103617097B (en) File access pattern method and device
CN110196847A (en) Data processing method and device, storage medium and electronic device
CN103019887A (en) Data backup method and device
CN106980665A (en) Data dictionary implementation method, device and data dictionary management system
US8296270B2 (en) Adaptive logging apparatus and method
CN116991761A (en) Data processing method, device, computer equipment and storage medium
US11625187B2 (en) Method and system for intercepting a discarded page for a memory swap
CN109558456A (en) A kind of file migration method, apparatus, equipment and readable storage medium storing program for executing
KR100907477B1 (en) Apparatus and method for managing index of data stored in flash memory
CN114647658A (en) Data retrieval method, device, equipment and machine-readable storage medium
CN112395260B (en) Data storage method and medium
CN116339643B (en) Formatting method, formatting device, formatting equipment and formatting medium for disk array
CN116774937A (en) Data storage method, device, processing equipment and storage medium
CN108334457B (en) IO processing method and device
CN116610636A (en) Data processing method and device of file system, electronic equipment and storage medium
CN101655819B (en) Method, system and equipment for carrying out empty block reclamation for semiconductor storage medium
CN115576947A (en) Data management method and device, combined library, electronic equipment and storage medium
US6584518B1 (en) Cycle saving technique for managing linked lists
CN113805787A (en) Data writing method, device, equipment and storage medium
CN114185849A (en) File operation method, file operation system, electronic device and storage medium
KR101022001B1 (en) Flash memory system and method for managing flash memory

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination