CN106331148A - Cache management method and cache management device for data reading by clients - Google Patents
Cache management method and cache management device for data reading by clients
- Publication number
- CN106331148A CN106331148A CN201610826658.7A CN201610826658A CN106331148A CN 106331148 A CN106331148 A CN 106331148A CN 201610826658 A CN201610826658 A CN 201610826658A CN 106331148 A CN106331148 A CN 106331148A
- Authority
- CN
- China
- Prior art keywords
- data
- client
- data block
- memory space
- shared memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Transfer Between Computers (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a cache management method and a cache management device for data reading by clients, which are applied to a server. The method comprises the following steps: after receiving a data reading request sent by a client, judging whether data requested by the client is stored in a shared memory space pre-established in a server; if the data is stored in the shared memory space, returning the data requested by the client to the client from the shared memory space; if the data is not stored in the shared memory space, returning the address of a blank data block in the shared memory space to enable the client to write data read from back-end storage equipment into the blank data block and return state update information; labeling the blank data block as a cache data block according to the state update information; and returning the data requested by the client to the client from the shared memory space. The phenomenon that the same data is stored repeatedly is avoided, and waste of memory space resources is reduced. Moreover, repeated reading of data in the back-end storage equipment is avoided, waste of bandwidth resources is reduced, and the use efficiency of the memory is high.
Description
Technical field
The present invention relates to the field of data reading technologies, and in particular to a cache management method and device for data reading by clients.
Background technology
In client software development for distributed storage products, caching is commonly used. A cache is, in fact, a block of memory space allocated by a program, in which data to be written by the user or data that has just been read is temporarily kept. This both improves write performance and allows recently accessed data to be returned quickly when the user requests it again, reducing latency.
For read operations, however, when several client processes exist at the same time, each client process opens up its own cache space to hold data from the back-end storage device. As a result, the same data may be stored repeatedly in different memory spaces. This not only wastes memory space resources, but repeated reads of the same data also waste bandwidth, so memory is used inefficiently.
Therefore, providing a cache management method and device for data reading by clients with high memory use efficiency is a problem that those skilled in the art currently need to solve.
Summary of the invention
An object of the present invention is to provide a cache management method and device for data reading by clients, which can avoid repeated storage of the same data, reduce the waste of memory space resources, avoid repeated reads from the back-end storage device, reduce the waste of bandwidth resources, and achieve high memory use efficiency.
To solve the above technical problem, the present invention provides a cache management method for data reading by clients, applied to a server, comprising:
Step s101: after receiving a data read request sent by a client, judging whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, entering step s104; if not, entering step s102;
Step s102: returning the address of a blank data block in the shared memory space, so that the client writes data read from a back-end storage device into the blank data block and returns state update information;
Step s103: marking the blank data block as a cached data block according to the state update information; entering step s104;
Step s104: returning the data requested by the client to the client from the shared memory space.
Preferably, the process of step s104 is specifically: copying the data requested by the client into the local process space of the client, so that the front-end application corresponding to the client can retrieve the data.
Preferably, the method further comprises: when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, releasing the data in the cached data block and marking it as a blank data block.
Preferably, the lifetime is specifically the continuous time during which the cached data block has not been accessed.
Preferably, the lifetime is specifically the time for which the cached data block has existed.
Preferably, the address of the blank data block is the address offset of the shared memory space together with the page number ID of the blank page in which the blank data block resides.
To solve the above technical problem, the present invention also provides a cache management device for data reading by clients, applied to a server, comprising:
a judging module, configured to, after receiving a data read request sent by a client, judge whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, trigger a data return module; if not, trigger an address return module;
the address return module, configured to return the address of a blank data block in the shared memory space, so that the client writes the data read from a back-end storage device into the blank data block and returns state update information;
a marking module, configured to mark the blank data block as a cached data block according to the state update information, and trigger the data return module;
the data return module, configured to return the data requested by the client to the client from the shared memory space.
Preferably, the device further comprises:
an aging module, configured to, when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, release the data in the cached data block and send release information to the marking module;
the marking module is further configured to mark the released cached data block as a blank data block.
The present invention provides a cache management method and device for data reading by clients. After a data read request sent by a client is received, it is first judged whether the corresponding data is already stored in the pre-established shared memory space; if so, the data is returned to the client directly, and if not, the data is read from the back-end storage device and stored into the shared memory space. It can be seen that, for multiple clients, the present invention does not need to open up a separate memory space for each client, avoids repeated storage of the same data, and reduces the waste of memory space resources; it also avoids repeated reads from the back-end storage device, reduces the waste of bandwidth resources, and achieves high memory use efficiency.
Brief description of the drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required in the prior art and in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of a cache management method for data reading by clients provided by the present invention;
Fig. 2 is a structural schematic diagram of a cache management device for data reading by clients provided by the present invention.
Detailed description of the invention
The core of the present invention is to provide a cache management method and device for data reading by clients, which can avoid repeated storage of the same data, reduce the waste of memory space resources, avoid repeated reads from the back-end storage device, reduce the waste of bandwidth resources, and achieve high memory use efficiency.
In order to make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The present invention provides a cache management method for data reading by clients, applied to a server. Referring to Fig. 1, Fig. 1 is a flow chart of a cache management method for data reading by clients provided by the present invention. The method comprises:
Step s101: after receiving a data read request sent by a client, judging whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, entering step s104; if not, entering step s102;
Step s102: returning the address of a blank data block in the shared memory space, so that the client writes the data read from the back-end storage device into the blank data block and returns state update information;
Step s103: marking the blank data block as a cached data block according to the state update information; entering step s104;
Step s104: returning the data requested by the client to the client from the shared memory space.
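As an illustration of how the bookkeeping behind steps s101 to s104 could be organised on the server side, the following is a minimal sketch over a single shared memory space, not the patented implementation itself: the block size, page count, segment name, and the key used to identify requested data are all assumptions introduced for illustration.

```python
from multiprocessing import shared_memory

PAGE_SIZE = 4096     # assumed size of one data block / memory page
PAGE_COUNT = 1024    # assumed number of pages in the shared memory space

class SharedCache:
    """Server-side bookkeeping for the pre-established shared memory space (steps s101-s104)."""

    def __init__(self, name="client_read_cache"):
        # Pre-establish the shared memory space in the server.
        self.shm = shared_memory.SharedMemory(create=True, name=name,
                                              size=PAGE_SIZE * PAGE_COUNT)
        self.index = {}                              # data key -> page number of a cached data block
        self.blank_pages = list(range(PAGE_COUNT))   # pages that are still blank data blocks

    def handle_read_request(self, key):
        """Step s101: on a hit return the cached page; on a miss hand out a blank block (step s102)."""
        if key in self.index:
            return "cached", self.index[key]         # step s104: client copies data from this page
        page = self.blank_pages.pop(0)               # address of a blank data block for the client
        return "blank", page

    def handle_state_update(self, key, page):
        """Step s103: mark the blank block as a cached data block after the client has filled it."""
        self.index[key] = page
```

A hit returns a page the client can read directly; a miss hands out a blank page that the client fills from the back-end storage device before reporting the state update.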
The process of step s104 is specifically as follows: the data requested by the client is copied into the local process space of the client, so that the front-end application corresponding to the client can retrieve the data.
It can be understood that the front-end application cannot obtain data from the shared memory space directly; the relevant data block in the shared memory space must first be copied into the local process space, and the front-end application then obtains the corresponding data from the read process it initiated.
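A corresponding client-side sketch of the copy into the local process space is shown below, under the same assumptions about block size and segment name; `page` stands for the page number ID returned by the server.

```python
from multiprocessing import shared_memory

PAGE_SIZE = 4096   # must match the server-side assumption

def copy_block_to_local(page, name="client_read_cache"):
    """Copy one cached data block from the shared memory space into the client's local process space."""
    shm = shared_memory.SharedMemory(name=name)              # attach to the pre-established space
    offset = page * PAGE_SIZE
    local_copy = bytes(shm.buf[offset:offset + PAGE_SIZE])   # local buffer handed to the front-end application
    shm.close()                                              # detach without destroying the shared space
    return local_copy
```

The front-end application then reads from this local copy rather than from the shared memory space itself.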
In addition, the size of the preset shared memory space and the number of memory pages can both be customised; the present invention does not limit them. When a shared memory space has not been referenced by any process for longer than a preset idle time threshold, it can be reclaimed by the system. This operation is intended to prevent a useless shared memory space from occupying memory resources for a long time. Of course, this operation is only a preferred solution; the present invention does not limit whether it is used, nor does it limit the specific value of the preset idle time threshold.
Preferably, the method further comprises:
when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, releasing the data in the cached data block and marking it as a blank data block.
The above judging operation can be carried out periodically, that is, the lifetime of each cached data block is checked at regular intervals. It can be understood that, in order to carry out this judgment, the lifetime of each cached data block needs to be timed; this timing is done by a timer, so the release of a cached data block can also be triggered automatically when the time recorded by the timer reaches the preset time threshold.
It is further noted that the lifetime here is specifically the continuous time during which the block has not been accessed.
Alternatively, the lifetime may be the time for which the cached data block has existed. The present invention does not limit which definition of lifetime is adopted, and other times can also be used as the lifetime.
It can be understood that releasing aged cached data blocks prevents a long-lived cached data block from persisting, occupying memory resources for a long time and leaving no room for new data to be written; that is, the above operation keeps the data in the shared memory space as up to date as possible.
Specifically, the address of the blank data block here consists of the address offset of the shared memory space and the page number ID of the blank page in which the blank data block resides.
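For illustration, a byte position for such a block could be derived from these two values as follows; the page size is an assumption, not a value from the patent.

```python
PAGE_SIZE = 4096   # assumed page size of the shared memory space

def block_address(shm_offset, page_id):
    """Byte address of a blank data block: shared-memory address offset plus page number ID times page size."""
    return shm_offset + page_id * PAGE_SIZE
```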
The present invention provides a cache management method for data reading by clients. After a data read request sent by a client is received, it is first judged whether the corresponding data is already stored in the pre-established shared memory space; if so, the data is returned to the client directly, and if not, the data is read from the back-end storage device and stored into the shared memory space. It can be seen that, for multiple clients, the present invention does not need to open up a separate memory space for each client, avoids repeated storage of the same data, and reduces the waste of memory space resources; it also avoids repeated reads from the back-end storage device, reduces the waste of bandwidth resources, and achieves high memory use efficiency.
The present invention also provides a cache management device for data reading by clients, applied to a server. Referring to Fig. 2, Fig. 2 is a structural schematic diagram of a cache management device for data reading by clients provided by the present invention. The device comprises:
a judging module 11, configured to, after receiving a data read request sent by a client, judge whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, trigger a data return module 13; if not, trigger an address return module 12;
the address return module 12, configured to return the address of a blank data block in the shared memory space, so that the client writes the data read from the back-end storage device into the blank data block and returns state update information;
a marking module 14, configured to mark the blank data block as a cached data block according to the state update information, and trigger the data return module 13;
the data return module 13, configured to return the data requested by the client to the client from the shared memory space.
Preferably, the device further comprises:
an aging module 15, configured to, when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, release the data in the cached data block and send release information to the marking module 14;
the marking module 14 is further configured to mark the released cached data block as a blank data block.
It can be understood that, after the front-end application sends a read request to the client, the client first needs to establish a connection with the socket of the above cache management device.
Preferably, the client and the cache management device can also determine each other's liveness through heartbeat feedback. Specifically, the client and the cache management device periodically send a preset signal to each other; if the duration for which no signal is received from the other side exceeds a preset time period, it is judged that the other side cannot work, and feedback information is then sent to the front-end application. Of course, this is only a preferred embodiment, and heartbeat feedback can also be carried out in other ways.
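A minimal sketch of such a heartbeat exchange on one side of the socket connection might look like the following; the signal contents, interval and timeout are assumptions for illustration, not values taken from the patent.

```python
import socket
import time

HEARTBEAT_INTERVAL = 1.0   # assumed period of the preset signal, in seconds
HEARTBEAT_TIMEOUT = 5.0    # assumed preset time period after which the peer is judged unable to work

def monitor_peer(sock: socket.socket) -> bool:
    """Periodically send the preset signal and judge the peer dead if nothing is received in time."""
    sock.settimeout(HEARTBEAT_INTERVAL)
    last_seen = time.monotonic()
    while True:
        sock.sendall(b"PING")                     # periodically send the preset signal to the other side
        try:
            if sock.recv(16):                     # any signal from the other side counts as liveness
                last_seen = time.monotonic()
        except socket.timeout:
            pass
        if time.monotonic() - last_seen > HEARTBEAT_TIMEOUT:
            return False                          # the other side cannot work; feed back to the front-end application
```

Both the client and the cache management device would run such a monitor against each other, notifying the front-end application when it reports failure.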
The present invention provides a cache management device for data reading by clients. After a data read request sent by a client is received, it is first judged whether the corresponding data is already stored in the pre-established shared memory space; if so, the data is returned to the client directly, and if not, the data is read from the back-end storage device and stored into the shared memory space. It can be seen that, for multiple clients, the present invention does not need to open up a separate memory space for each client, avoids repeated storage of the same data, and reduces the waste of memory space resources; it also avoids repeated reads from the back-end storage device, reduces the waste of bandwidth resources, and achieves high memory use efficiency.
It should be noted that, in this specification, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. A cache management method for data reading by clients, applied to a server, characterized by comprising:
step s101: after receiving a data read request sent by a client, judging whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, entering step s104; if not, entering step s102;
step s102: returning the address of a blank data block in the shared memory space, so that the client writes data read from a back-end storage device into the blank data block and returns state update information;
step s103: marking the blank data block as a cached data block according to the state update information; entering step s104;
step s104: returning the data requested by the client to the client from the shared memory space.
2. The method according to claim 1, characterized in that the process of step s104 is specifically: copying the data requested by the client into the local process space of the client, so that the front-end application corresponding to the client retrieves the data.
3. The method according to claim 1, characterized by further comprising: when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, releasing the data in the cached data block and marking it as a blank data block.
4. The method according to claim 3, characterized in that the lifetime is specifically the continuous time during which the cached data block has not been accessed.
5. The method according to claim 3, characterized in that the lifetime is specifically the time for which the cached data block has existed.
6. The method according to claim 1, characterized in that the address of the blank data block is the address offset of the shared memory space and the page number ID of the blank page in which the blank data block resides.
7. A cache management device for data reading by clients, applied to a server, characterized by comprising:
a judging module, configured to, after receiving a data read request sent by a client, judge whether the data requested by the client is stored in a shared memory space pre-established in the server; if so, trigger a data return module; if not, trigger an address return module;
the address return module, configured to return the address of a blank data block in the shared memory space, so that the client writes data read from a back-end storage device into the blank data block and returns state update information;
a marking module, configured to mark the blank data block as a cached data block according to the state update information, and trigger the data return module;
the data return module, configured to return the data requested by the client to the client from the shared memory space.
8. The device according to claim 7, characterized by further comprising:
an aging module, configured to, when it is judged that a cached data block in the shared memory space has a lifetime exceeding a preset time threshold, release the data in the cached data block and send release information to the marking module;
the marking module being further configured to mark the released cached data block as a blank data block.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610826658.7A CN106331148A (en) | 2016-09-14 | 2016-09-14 | Cache management method and cache management device for data reading by clients |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610826658.7A CN106331148A (en) | 2016-09-14 | 2016-09-14 | Cache management method and cache management device for data reading by clients |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106331148A true CN106331148A (en) | 2017-01-11 |
Family
ID=57788063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610826658.7A Pending CN106331148A (en) | 2016-09-14 | 2016-09-14 | Cache management method and cache management device for data reading by clients |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106331148A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107360245A (en) * | 2017-07-28 | 2017-11-17 | 郑州云海信息技术有限公司 | A kind of local cache method and device based on lease lock mechanism |
CN108897495A (en) * | 2018-06-28 | 2018-11-27 | 北京五八信息技术有限公司 | Buffering updating method, device, buffer memory device and storage medium |
CN108958572A (en) * | 2017-05-25 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Message data processing method, device, storage medium and computer equipment |
CN109309631A (en) * | 2018-08-15 | 2019-02-05 | 新华三技术有限公司成都分公司 | A kind of method and device based on universal network file system write-in data |
CN109992402A (en) * | 2017-12-29 | 2019-07-09 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN110018902A (en) * | 2018-01-10 | 2019-07-16 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN110018900A (en) * | 2018-01-10 | 2019-07-16 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN110110256A (en) * | 2018-01-17 | 2019-08-09 | 阿里巴巴集团控股有限公司 | Data processing method, device, electronic equipment and storage medium |
CN110990480A (en) * | 2018-09-30 | 2020-04-10 | 北京国双科技有限公司 | Data processing method and device |
CN111143244A (en) * | 2019-12-30 | 2020-05-12 | 海光信息技术有限公司 | Memory access method of computer equipment and computer equipment |
CN111367687A (en) * | 2020-02-28 | 2020-07-03 | 罗普特科技集团股份有限公司 | Inter-process data communication method and device |
CN111382142A (en) * | 2020-03-04 | 2020-07-07 | 海南金盘智能科技股份有限公司 | Database operation method, server and computer storage medium |
CN114691051A (en) * | 2022-05-30 | 2022-07-01 | 恒生电子股份有限公司 | Data processing method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101046807A (en) * | 2006-03-31 | 2007-10-03 | 华为技术有限公司 | Method and device of storage data readed |
CN101202758A (en) * | 2006-12-14 | 2008-06-18 | 英业达股份有限公司 | Method for network virtual storage of multi-client terminals |
CN101377788A (en) * | 2008-09-28 | 2009-03-04 | 中国科学院计算技术研究所 | Method and system of caching management in cluster file system |
CN102006330A (en) * | 2010-12-01 | 2011-04-06 | 北京瑞信在线系统技术有限公司 | Distributed cache system, data caching method and inquiring method of cache data |
CN102984276A (en) * | 2012-12-17 | 2013-03-20 | 北京奇虎科技有限公司 | Distribution device and distribution method for distributing multiple socket servers |
CN105183389A (en) * | 2015-09-15 | 2015-12-23 | 北京金山安全软件有限公司 | Data hierarchical management method and device and electronic equipment |
US20160134672A1 (en) * | 2014-11-11 | 2016-05-12 | Qualcomm Incorporated | Delivering partially received segments of streamed media data |
US20160150048A1 (en) * | 2014-11-24 | 2016-05-26 | Facebook, Inc. | Prefetching Location Data |
-
2016
- 2016-09-14 CN CN201610826658.7A patent/CN106331148A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101046807A (en) * | 2006-03-31 | 2007-10-03 | 华为技术有限公司 | Method and device of storage data readed |
CN101202758A (en) * | 2006-12-14 | 2008-06-18 | 英业达股份有限公司 | Method for network virtual storage of multi-client terminals |
CN101377788A (en) * | 2008-09-28 | 2009-03-04 | 中国科学院计算技术研究所 | Method and system of caching management in cluster file system |
CN102006330A (en) * | 2010-12-01 | 2011-04-06 | 北京瑞信在线系统技术有限公司 | Distributed cache system, data caching method and inquiring method of cache data |
CN102984276A (en) * | 2012-12-17 | 2013-03-20 | 北京奇虎科技有限公司 | Distribution device and distribution method for distributing multiple socket servers |
US20160134672A1 (en) * | 2014-11-11 | 2016-05-12 | Qualcomm Incorporated | Delivering partially received segments of streamed media data |
US20160150048A1 (en) * | 2014-11-24 | 2016-05-26 | Facebook, Inc. | Prefetching Location Data |
CN105183389A (en) * | 2015-09-15 | 2015-12-23 | 北京金山安全软件有限公司 | Data hierarchical management method and device and electronic equipment |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108958572A (en) * | 2017-05-25 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Message data processing method, device, storage medium and computer equipment |
CN108958572B (en) * | 2017-05-25 | 2022-12-16 | 腾讯科技(深圳)有限公司 | Message data processing method, device, storage medium and computer equipment |
CN107360245B (en) * | 2017-07-28 | 2020-10-16 | 苏州浪潮智能科技有限公司 | Local caching method and device based on lease lock mechanism |
CN107360245A (en) * | 2017-07-28 | 2017-11-17 | 郑州云海信息技术有限公司 | A kind of local cache method and device based on lease lock mechanism |
CN109992402A (en) * | 2017-12-29 | 2019-07-09 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN109992402B (en) * | 2017-12-29 | 2021-07-09 | Oppo广东移动通信有限公司 | Memory processing method and device, electronic equipment and computer readable storage medium |
CN110018902A (en) * | 2018-01-10 | 2019-07-16 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN110018900A (en) * | 2018-01-10 | 2019-07-16 | 广东欧珀移动通信有限公司 | Internal memory processing method and device, electronic equipment, computer readable storage medium |
CN110018900B (en) * | 2018-01-10 | 2023-01-24 | Oppo广东移动通信有限公司 | Memory processing method and device, electronic equipment and computer readable storage medium |
CN110110256A (en) * | 2018-01-17 | 2019-08-09 | 阿里巴巴集团控股有限公司 | Data processing method, device, electronic equipment and storage medium |
CN108897495A (en) * | 2018-06-28 | 2018-11-27 | 北京五八信息技术有限公司 | Buffering updating method, device, buffer memory device and storage medium |
CN108897495B (en) * | 2018-06-28 | 2023-10-03 | 北京五八信息技术有限公司 | Cache updating method, device, cache equipment and storage medium |
CN109309631A (en) * | 2018-08-15 | 2019-02-05 | 新华三技术有限公司成都分公司 | A kind of method and device based on universal network file system write-in data |
CN110990480A (en) * | 2018-09-30 | 2020-04-10 | 北京国双科技有限公司 | Data processing method and device |
CN111143244A (en) * | 2019-12-30 | 2020-05-12 | 海光信息技术有限公司 | Memory access method of computer equipment and computer equipment |
CN111367687A (en) * | 2020-02-28 | 2020-07-03 | 罗普特科技集团股份有限公司 | Inter-process data communication method and device |
CN111382142B (en) * | 2020-03-04 | 2023-06-20 | 海南金盘智能科技股份有限公司 | Database operation method, server and computer storage medium |
CN111382142A (en) * | 2020-03-04 | 2020-07-07 | 海南金盘智能科技股份有限公司 | Database operation method, server and computer storage medium |
CN114691051B (en) * | 2022-05-30 | 2022-10-04 | 恒生电子股份有限公司 | Data processing method and device |
CN114691051A (en) * | 2022-05-30 | 2022-07-01 | 恒生电子股份有限公司 | Data processing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106331148A (en) | Cache management method and cache management device for data reading by clients | |
CN103593147B (en) | A kind of method and device of digital independent | |
US8909887B1 (en) | Selective defragmentation based on IO hot spots | |
CN105224255B (en) | A kind of storage file management method and device | |
CN105302840B (en) | A kind of buffer memory management method and equipment | |
CN106776368A (en) | Buffer memory management method, apparatus and system during a kind of digital independent | |
CN109213699B (en) | Metadata management method, system, equipment and computer readable storage medium | |
US9733833B2 (en) | Selecting pages implementing leaf nodes and internal nodes of a data set index for reuse | |
CN104899156A (en) | Large-scale social network service-oriented graph data storage and query method | |
CN105279163A (en) | Buffer memory data update and storage method and system | |
CN103312624A (en) | Message queue service system and method | |
CN107632791A (en) | The distribution method and system of a kind of memory space | |
CN105045723A (en) | Processing method, apparatus and system for cached data | |
CN107256196A (en) | The caching system and method for support zero-copy based on flash array | |
CN106326239A (en) | Distributed file system and file meta-information management method thereof | |
CN105302830A (en) | Map tile caching method and apparatus | |
CN102474531A (en) | Address server | |
CN104933051B (en) | File storage recovery method and device | |
US20170123975A1 (en) | Centralized distributed systems and methods for managing operations | |
CN105354193A (en) | Caching method, query method, caching apparatus and query apparatus for database data | |
CN108319634B (en) | Directory access method and device for distributed file system | |
CN103778120A (en) | Global file identification generation method, generation device and corresponding distributed file system | |
CN109086462A (en) | The management method of metadata in a kind of distributed file system | |
CN110502457B (en) | Metadata storage method and device | |
CN102970349B (en) | A kind of memory load equalization methods of DHT network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170111 |