CN118118553B - Intelligent sensor data caching method and system based on edge computing
- Publication number
- CN118118553B, CN202410534189.6A, CN202410534189A
- Authority
- CN
- China
- Prior art keywords
- data
- value
- weight
- packet
- compressed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/565—Conversion or adaptation of application format or content
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/04—Protocols for data compression, e.g. ROHC
Abstract
The invention discloses an intelligent sensor data caching method and system based on edge computing, in the technical field of data caching. The method comprises: acquiring raw data and uploading it to an edge node; grouping the data and calculating a grouping reference value for each group; processing the raw data, outputting compressed data packets, and setting a weight value for each packet; decompressing the compressed data packets, performing data interaction, and sorting the data; and continuously receiving compressed data packets, calculating and analyzing the weight values, and updating the cache based on the analysis results. The method and system address the problem in the prior art that abnormal data cannot be continuously tracked and analyzed because data is not cached after data interaction.
Description
Technical Field
The invention relates to the technical field of data caching, and in particular to an intelligent sensor data caching method and system based on edge computing.
Background
Data caching refers to temporarily storing data in memory or another high-speed storage medium for quick access and reading. It is generally applied to data that needs to be accessed frequently, such as table data in a database or page data in a Web application. A data cache improves the speed and performance of data access, avoids repeated queries to databases or other data sources, and reduces system load and response time.
The prior art improves the intelligence of smart sensors at the edge, and thereby the interaction efficiency between Internet-of-Things devices and the cloud. For example, Chinese patent publication CN113589824A discloses a distributed sensing system based on edge computing, which improves data processing efficiency by performing multi-class processing on raw data. However, in such approaches the data is deleted after interaction with the cloud, and no comprehensive caching is performed after edge computation on the data; if previously acquired abnormal data must be re-read later, it can no longer be retrieved in real time. The existing intelligent sensor data caching methods therefore need improvement.
Disclosure of Invention
The invention aims to solve, at least to some extent, one of the technical problems in the prior art. By grouping the raw data, compressing one or more raw data items into a compressed data packet after calculation, setting a weight value for each compressed data packet, and caching data and updating the cache based on the weight values, the invention solves the prior-art problem that abnormal data cannot be continuously tracked and analyzed because data is not cached after data interaction.
To achieve the above object, in a first aspect, the present invention provides an intelligent sensor data caching method based on edge computing, comprising:
Step S1: acquiring raw data within a first time using an intelligent sensor and uploading the raw data to an edge node; grouping the raw data, analyzing and calculating the raw data within each group, and outputting a grouping reference value based on the calculation result;
Step S2: calculating and analyzing the grouping reference values, processing the raw data based on the analysis result, compressing the processed data, outputting compressed data packets, and setting a weight value for each compressed data packet;
Step S3: decompressing the compressed data packets in the edge node and caching the decompressed data to the edge node; performing data interaction using the edge node and sorting the stored data based on the weight values;
Step S4: continuously receiving compressed data packets, calculating and analyzing the weight values, and updating the cache based on the analysis result.
Further, the step S1 includes the following sub-steps:
Step S101: acquiring raw data within the first time using the intelligent sensor, wherein the intelligent sensor can automatically compensate for response time and the raw data includes the acquisition time;
Step S102: uploading the raw data to an edge node comprising an edge server and an edge gateway; the edge server computes on the raw data and buffers the processed data, and the edge gateway performs data interaction with the cloud;
Step S103: obtaining the total number of raw data items within the first time, and marking the product of that total and a first division coefficient as the division interval;
Step S104: sorting the raw data by acquisition time, grouping the sorted raw data according to the division interval, calculating the difference between the maximum and minimum values within each group, and marking the difference as the grouping reference value;
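Steps S103–S104 can be sketched in Python. This is a minimal illustration, not the patent's implementation: the function name is invented, raw data is assumed to arrive as (acquisition_time, value) pairs, and the 5% default division coefficient is the value given in the embodiment.

```python
def group_reference_values(samples, division_coefficient=0.05):
    """Sort (time, value) samples by acquisition time, split them into
    groups whose size is total * division_coefficient (the division
    interval), and return each group with its grouping reference value
    (max - min of the values in the group)."""
    ordered = sorted(samples, key=lambda s: s[0])              # sort by acquisition time
    size = max(1, round(len(ordered) * division_coefficient))  # division interval
    groups = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    result = []
    for g in groups:
        values = [v for _, v in g]
        result.append((g, max(values) - min(values)))          # grouping reference value
    return result
```

With the embodiment's 5% coefficient, the acquired samples are divided into 20 groups.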
Further, the step S2 includes the following sub-steps:
Step S2011: sorting the grouping reference values in ascending order to obtain a reference value sequence;
Step S2012: calculating the median of the reference value sequence and dividing the sequence at the median, marking the groups on the left of the median as sequence-left groups and the groups on the right as sequence-right groups;
Step S2013: marking a group whose reference value falls in the sequence-left groups as a compressed data group, and a group whose reference value falls in the sequence-right groups as an analysis data group;
Further, the step S2 further includes the following sub-steps:
Step S2021: calculating the difference between the last and first acquisition times in each compressed data group and marking it as the corresponding time period; calculating the mean and variance of the raw data in each compressed data group and marking them as the stable mean and stable variance, respectively;
Step S2022: compressing the corresponding time period, the stable mean, the stable variance, and the largest and smallest raw data in the compressed data group into a compressed data packet, setting a first weight value for the packet, and outputting the compressed data packet;
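Steps S2011–S2022 can be illustrated as follows. This is a hedged sketch: the split at the sequence median is modeled as a half/half division of the sorted groups, zlib plus JSON stands in for the unspecified lossless compressor, and all function names are invented.

```python
import json
import zlib
from statistics import mean, pvariance

def split_groups(groups):
    """Sort groups by their grouping reference value (max - min) and split
    at the sequence median: the left half (small fluctuation) becomes the
    compressed data groups, the right half the analysis data groups."""
    ordered = sorted(groups, key=lambda g: max(v for _, v in g) - min(v for _, v in g))
    half = len(ordered) // 2          # position of the sequence median
    return ordered[:half], ordered[half:]

def compress_stable_group(group):
    """Pack the corresponding time period, stable mean/variance and the
    extreme values of one compressed data group into a lossless packet
    carrying the first (lowest) weight value."""
    times = [t for t, _ in group]
    values = [v for _, v in group]
    summary = {
        "period": [min(times), max(times)],   # first-to-last acquisition span
        "mean": mean(values),                 # stable mean
        "variance": pvariance(values),        # stable variance
        "max": max(values),
        "min": min(values),
        "weight": 1,                          # first weight value
    }
    return zlib.compress(json.dumps(summary).encode())
```

Because zlib is lossless, decompressing the packet recovers exactly the statistics that were stored, matching the accuracy requirement stated for the caching step.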
further, the step S2 further includes the following sub-steps:
Step S2031: calculating the mean and variance of the raw data across all compressed data groups and marking them as the reference mean and reference variance, respectively;
Step S2032: setting the reference mean plus the first analysis multiple times the reference variance as the right endpoint of the reference interval, and the reference mean minus the first analysis multiple times the reference variance as the left endpoint;
Step S2033: determining the relationship between the raw data in each analysis data group and the reference interval: when all the raw data lie within the reference interval, marking the analysis data group as a compressed data group and performing steps S2021 and S2022 on it;
when some raw data lie outside the reference interval, marking the raw data inside the interval as fluctuation data and the raw data outside it as abnormal data;
compressing the fluctuation data together with their acquisition times, setting a second weight value for the compressed data packet, and outputting it;
compressing the abnormal data together with their acquisition times, setting a third weight value for the compressed data packet, and outputting it;
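The interval test of steps S2031–S2033 can be sketched as below. The multiple of 2 is the embodiment's value; note that the patent's translated text says variance (not standard deviation), and that reading is kept here. Function and parameter names are illustrative.

```python
from statistics import mean, pvariance

def classify(analysis_values, stable_values, analysis_multiple=2.0):
    """Build the reference interval [mean - k*var, mean + k*var] from the
    raw data of all compressed data groups (stable_values), then split the
    raw data of an analysis group into fluctuation data (inside the
    interval, second weight value) and abnormal data (outside, third
    weight value)."""
    m = mean(stable_values)
    var = pvariance(stable_values)
    lo = m - analysis_multiple * var    # left endpoint of the reference interval
    hi = m + analysis_multiple * var    # right endpoint
    fluctuation = [v for v in analysis_values if lo <= v <= hi]
    abnormal = [v for v in analysis_values if v < lo or v > hi]
    return fluctuation, abnormal
```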
Further, the step S3 includes the following sub-steps:
Step S3011: receiving a compressed data packet, decompressing it, and caching the decompressed data and its weight value in a database of the edge server;
Step S3012: performing data interaction with the cloud using the edge gateway, and decrementing by one the weight value of any data involved in the interaction;
Further, the step S3 further includes the following sub-steps:
Step S3021: sorting the database data;
the sorting of the database data includes:
setting the third weight value as the largest and the first weight value as the smallest;
sorting the data in the database by weight value in descending order;
when weight values are equal, sorting by acquisition time from earliest to latest;
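The sorting rule above collapses into a single composite sort key. A minimal sketch, assuming each cached entry is a dict with 'weight' and 'time' fields (an invented layout):

```python
def sort_cache(entries):
    """Sort cached entries by weight value descending (third weight value
    is the largest, first weight value the smallest), breaking ties by
    acquisition time ascending (earliest first)."""
    return sorted(entries, key=lambda e: (-e["weight"], e["time"]))
```

With this ordering, the lowest-priority entries always sit at the bottom of the database, which is what the cache-release procedure of step S4013 relies on.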
further, the step S4 includes the following sub-steps:
Step S4011: continuously receiving compressed data packets, obtaining the number of cached data items in the database, calculating the difference between the maximum cache size and the number of cached items, and marking the difference as the update judgment value;
Step S4012: comparing the update judgment value with the number of compressed data packets; when the update judgment value is greater than or equal to the number of compressed data packets, decompressing the packets and caching the decompressed data and their weight values in the database of the edge server;
Step S4013: when the update judgment value is smaller than the number of compressed data packets, performing cache release and data caching;
Further, performing cache release in step S4013 includes the following steps:
marking as interpretation data the entries counted from the bottom of the database equal in number to the compressed data packets;
acquiring the weight value of the interpretation data and marking it as the stored weight value;
acquiring the maximum weight value among the compressed data packets and marking it as the maximum update value;
comparing the maximum update value with the stored weight value;
when the maximum update value is greater than or equal to the stored weight value, deleting the database entries from the bottom up to the interpretation data;
decompressing the compressed data packets and caching the decompressed data and their weight values in the database of the edge server;
when the maximum update value is smaller than the stored weight value, updating the weight of the interpretation data;
The weight update includes:
acquiring the current time and the acquisition interval of the intelligent sensor;
calculating the updated weight value using a weight update formula;
the weight update formula is configured in terms of XQ, the updated weight; QZ, the weight of the interpretation data; DT, the current time; HT, the acquisition time of the interpretation data; JT, the acquisition interval; and K, a constant;
comparing the updated weight with the maximum update value; when the updated weight is less than or equal to the maximum update value, deleting the database entries from the bottom up to the interpretation data;
decompressing the compressed data packets and caching the decompressed data and their weight values in the database of the edge server;
when the updated weight is greater than the maximum update value, decompressing the compressed data packets, deleting the bottommost data in the database, acquiring the most recent data by acquisition time or corresponding time period, and storing it in the server.
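The decision flow of steps S4011–S4013 can be sketched as follows. This is a simplified sketch: the database is modeled as a list sorted per step S3021 (lowest-priority entries at the bottom), and because the filing's XQ formula is not reproduced in this text, the decay used here — QZ minus K times (DT - HT)/JT — is a hypothetical stand-in consistent with the listed symbols, not the patent's actual formula.

```python
def update_cache(db, packets, max_cache, now=None, interval=1.0, k=1.0):
    """db: cached entries ({'weight', 'time'}) sorted by weight descending,
    then time ascending; packets: decompressed entries from the newly
    received compressed data packets.  Returns the updated database."""
    update_value = max_cache - len(db)              # update judgment value
    if update_value >= len(packets):                # enough room: just cache
        return db + packets
    tail = db[-len(packets):]                       # interpretation data
    stored_weight = max(e["weight"] for e in tail)  # stored weight value
    max_update = max(p["weight"] for p in packets)  # maximum update value
    if max_update >= stored_weight:
        return db[:-len(packets)] + packets         # release tail, cache new data
    # Stored weight wins: decay it by elapsed time (hypothetical XQ formula,
    # assumed as QZ - K * (DT - HT) / JT) and re-decide.
    oldest = min(e["time"] for e in tail)
    current = oldest if now is None else now
    decayed = stored_weight - k * (current - oldest) / interval
    if decayed <= max_update:
        return db[:-len(packets)] + packets
    return db[:-1] + packets[:1]                    # keep tail; drop only bottom entry
```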
In a second aspect, the invention also provides an intelligent sensor data caching system based on edge computing, comprising a data processing module, a data transmission module, and a cache updating module. The data processing module acquires the raw data, groups it, and calculates the grouping reference value of each group; it also analyzes the grouping reference values and processes the raw data.
The data transmission module uploads the raw data to the edge node, compresses the processed data, outputs the compressed data packets, and sets their weight values; the edge node also performs data interaction.
The cache updating module decompresses the compressed data packets and caches the decompressed data; it sorts the cached data, calculates and analyzes the weight values, and updates the cache based on the analysis results.
The invention has the following beneficial effects. The ordered groups are divided at the median to obtain sequence-left groups with small fluctuation and sequence-right groups with large fluctuation; the mean and variance of each sequence-left group are calculated and compressed into a compressed data packet. Because a sequence-left group fluctuates little compared with a sequence-right group, its data can be judged normal, and the mean and variance of many raw data items represent the overall trend of the data. Many raw data items are thus reduced to a few statistics, which shrinks the cache footprint of normal data and improves the intelligence and efficiency of data caching and data interaction.
A weight value is set for each compressed data packet, and the cached data are sorted by weight value and by acquisition time or corresponding time period. After data interaction with the cloud, the weight values of the interacted data are reduced and the data re-sorted; when a new compressed data packet is received, the cache is updated according to the packet's weight value and the ordering of the cached data. Compressing and then decompressing the data ensures that errors do not occur during caching, and sorting the cached data means an update only needs to scan the database in order, improving the accuracy and efficiency of cache updates.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
FIG. 1 is a flow chart of the steps of the method of the present invention;
FIG. 2 is a schematic block diagram of the system of the present invention;
FIG. 3 is a partial data flow chart of step S4 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment 1: in a first aspect, referring to FIG. 1 and FIG. 3, the present invention provides an intelligent sensor data caching method based on edge computing, comprising:
Step S1: acquiring raw data within a first time using an intelligent sensor and uploading the raw data to an edge node; grouping the raw data, analyzing and calculating the raw data within each group, and outputting a grouping reference value based on the calculation result. Step S1 further comprises the following sub-steps:
Step S101: acquiring raw data within the first time using the intelligent sensor, wherein the intelligent sensor can automatically compensate for response time and the raw data includes the acquisition time;
In a specific implementation, the first time is set to 30 s. The first time is determined jointly by the acquisition interval of the intelligent sensor and the size of the edge server's database: the smaller the acquisition interval and the database, the smaller the first time should be. Taking a common sensor acquisition interval and a typical server database size as references, a first time of 30 seconds is unlikely to cause data overflow, while the acquired data volume is sufficient to support the subsequent analysis;
It should be noted that the intelligent sensor's ability to automatically compensate for response time means that, for example, when its acquisition interval is 1 s and a delay of 2 s is allowed, if no data has been acquired within 3 s, the sensor simulates a close substitute value from the data already acquired and outputs it;
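The compensation behavior above can be illustrated with a hold-last-value estimator. The patent does not specify how the "close data" is simulated, so the estimator and all names here are assumptions.

```python
def compensate(samples, interval=1.0, max_delay=2.0):
    """samples: list of (time, value) actually received, in time order.
    If the gap since the last sample exceeds the allowed delay, emit
    substitute readings at the acquisition interval by holding the last
    observed value (a simple stand-in for the sensor's simulation)."""
    out = [samples[0]]
    for t, v in samples[1:]:
        last_t, last_v = out[-1]
        while t - last_t > max_delay:        # data missing for too long
            last_t += interval
            out.append((last_t, last_v))     # hold-last-value estimate
        out.append((t, v))
    return out
```

A real sensor might interpolate or extrapolate a trend instead; hold-last-value is just the simplest substitute consistent with the description.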
Step S102: uploading the raw data to an edge node comprising an edge server and an edge gateway; the edge server computes on the raw data and buffers the processed data, and the edge gateway performs data interaction with the cloud;
Step S103: obtaining the total number of raw data items within the first time, and marking the product of that total and a first division coefficient as the division interval;
In a specific implementation, the first division coefficient is set to 5%, so the raw data is divided into 20 groups. If the coefficient is set too small, there are too many groups with too few data items each, the grouping reference value calculated in the subsequent step is insufficiently representative, and the advantage of the edge technology cannot be realized; if it is set too large, each group may contain data with large fluctuation;
Step S104: sorting the raw data by acquisition time, grouping the sorted raw data according to the division interval, calculating the difference between the maximum and minimum values within each group, and marking the difference as the grouping reference value;
It should be noted that because the intelligent sensor measures a varying quantity, the measured data normally fluctuates within a range; in an abnormal situation the fluctuation becomes too large or too small. The purpose of calculating the difference between the maximum and minimum values within each group is to judge, in a subsequent step, whether the group contains abnormal data based on this difference;
Step S2: calculating and analyzing the grouping reference values, processing the raw data based on the analysis result, compressing the processed data, outputting compressed data packets, and setting a weight value for each compressed data packet;
Step S2 further comprises the following sub-steps:
Step S2011: sorting the grouping reference values in ascending order to obtain a reference value sequence;
Step S2012: calculating the median of the reference value sequence and dividing the sequence at the median, marking the groups on the left of the median as sequence-left groups and the groups on the right as sequence-right groups;
Step S2013: marking a group whose reference value falls in the sequence-left groups as a compressed data group, and a group whose reference value falls in the sequence-right groups as an analysis data group;
Step S2021: calculating the difference between the last and first acquisition times in each compressed data group and marking it as the corresponding time period; calculating the mean and variance of the raw data in each compressed data group and marking them as the stable mean and stable variance, respectively;
Step S2022: compressing the corresponding time period, the stable mean, the stable variance, and the largest and smallest raw data in the compressed data group into a compressed data packet, setting a first weight value for the packet, and outputting the compressed data packet;
It should be noted that because the groups are sorted by increasing grouping reference value, the raw data in the left-sorted groups fluctuates less, i.e., the data in those groups is more stable; the likely cause of this stability is that the measured object was in a stable state during measurement. When the measured object is continuously in an abnormal state, the intelligent sensor stops working, and edge computing and data caching are not needed, so that situation is not analyzed; by default, a small fluctuation of the raw data means the measured object is in a stable state.
When the measured object is stable, the mean, variance, maximum, and minimum of the raw data in a group can represent the whole group, so these statistics are compressed and output, and the lowest-level weight value is set, meaning the group's data need not be analyzed in detail later.
The edge node compresses and decompresses data to avoid cache errors when a large volume of data is written to the database; compressing first and then storing the decompressed data ensures accuracy during caching;
Step S2031: calculating the mean and variance of the raw data across all compressed data groups and marking them as the reference mean and reference variance, respectively;
Step S2032: setting the reference mean plus the first analysis multiple times the reference variance as the right endpoint of the reference interval, and the reference mean minus the first analysis multiple times the reference variance as the left endpoint;
Step S2033: determining the relationship between the raw data in each analysis data group and the reference interval: when all the raw data lie within the reference interval, marking the analysis data group as a compressed data group and performing steps S2021 and S2022 on it;
It should be noted that the sequence-left groups are compressed directly with their mean and variance, and the same operation is performed on an analysis data group once it is marked back as a compressed data group. Because the calculation is performed directly on the sequence-left groups, the judgment step is skipped for them, which roughly halves the server's computational load during judgment and suits the computing capability of most servers;
When some raw data lie outside the reference interval, marking the raw data inside the interval as fluctuation data and the raw data outside it as abnormal data;
compressing the fluctuation data together with their acquisition times, setting a second weight value for the compressed data packet, and outputting it;
compressing the abnormal data together with their acquisition times, setting a third weight value for the compressed data packet, and outputting it;
In a specific implementation, the first analysis multiple is set to 2: raw data floating within plus or minus 2 times the variance around the reference mean is judged normal, and raw data floating beyond plus or minus 2 times the variance is judged abnormal;
Step S3: decompressing the compressed data packets in the edge node and caching the decompressed data to the edge node; performing data interaction using the edge node and sorting the stored data based on the weight values. Step S3 further comprises the following sub-steps:
Step S3011: receiving a compressed data packet, decompressing it, and caching the decompressed data and its weight value in a database of the edge server;
the compression technology adopted is lossless, so decompression yields exactly the same data as before compression;
Step S3012: performing data interaction with the cloud using the edge gateway, and decrementing by one the weight value of any data involved in the interaction;
it should be noted that setting the first weight value means setting the weight value to 1, and so on for the second and third weight values, which enables the decrement-by-one processing of the weight values;
Step S3021: sorting the database data;
the sorting of the database data includes:
setting the third weight value as the largest and the first weight value as the smallest;
sorting the data in the database by weight value in descending order;
when weight values are equal, sorting by acquisition time from earliest to latest;
Step S4: continuously receiving compressed data packets, calculating and analyzing the weight values, and updating the cache based on the analysis result. Step S4 further comprises the following sub-steps:
Step S4011: continuously receiving compressed data packets, obtaining the number of cached data items in the database, calculating the difference between the maximum cache size and the number of cached items, and marking the difference as the update judgment value;
it should be noted that continuously receiving compressed data packets means the intelligent sensor stays in operation and continuously uploads raw data to the edge node;
Step S4012: comparing the update judgment value with the number of compressed data packets; when the update judgment value is greater than or equal to the number of compressed data packets, decompressing the packets and caching the decompressed data and their weight values in the database of the edge server;
Step S4013: when the update judgment value is smaller than the number of compressed data packets, performing cache release and data caching;
Performing cache release in step S4013 includes the following steps:
marking as interpretation data the entries counted from the bottom of the database equal in number to the compressed data packets;
acquiring the weight value of the interpretation data and marking it as the stored weight value;
acquiring the maximum weight value among the compressed data packets and marking it as the maximum update value;
comparing the maximum update value with the stored weight value;
when the maximum update value is greater than or equal to the stored weight value, deleting the database entries from the bottom up to the interpretation data;
In a specific implementation, the case where the maximum update value equals the stored weight value arises in two ways: either the interpretation data is fluctuation data that has already interacted with the cloud, so its weight value is 1, while the incoming compressed data packet corresponds to a compressed data set whose weight value is also 1; or the interpretation data is data that has not interacted with the cloud, with weight value 1, and the weight value of the compressed data packet is also 1. In both cases the data already in the database is less timely than the newly acquired data, so the cache can be updated;
The case where the maximum update value is greater than the stored weight value is: the interpretation data is data that has already interacted with the cloud and whose weight value has been reduced to 0, while the minimum weight value of a compressed data packet is 1, so the cache can be updated directly;
decompressing the compressed data packet, and caching decompressed data and weight values of the data to a database of an edge server;
when the maximum update value is smaller than the stored weight value, updating the weight of the interpretation data;
in a specific implementation, the case where the maximum update value is smaller than the stored weight value is: the interpretation data is abnormal data that has interacted with the cloud, so its weight value is 2 while the weight value of the compressed data packet is 1; the timeliness of the abnormal data must then be analyzed and its weight updated in the subsequent steps;
The weight update includes:
acquiring the current time and the acquisition interval of the intelligent sensor;
calculating to obtain an updated weight value by using a weight updating formula;
The weight update formula is configured as XQ = QZ − (DT − HT)/(K × JT); wherein XQ is the updated weight, QZ is the weight value of the interpretation data (release judgment data), DT is the current time, HT is the acquisition time of the interpretation data, JT is the acquisition interval, and K is a constant;
in a specific implementation, K is set to 1000; DT − HT represents how long the abnormal data has been stored in the database, and setting K to 1000 means that abnormal data is retained for at most 1000 acquisition intervals. Because the edge node may be configured with only a small database, K cannot be set too large; when K is too small, abnormal data is deleted prematurely, and the subsequent continuous tracking and analysis of the abnormal data cannot be carried out;
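A minimal sketch of the weight update, assuming the formula takes the form XQ = QZ − (DT − HT)/(K × JT) — an assumed reading that is consistent with the retention behaviour described above, where abnormal data of weight 2 decays to weight 1 after exactly K acquisition intervals:

```python
def update_weight(qz, dt, ht, jt, k=1000):
    """Updated weight XQ of the interpretation data (assumed form).

    qz: stored weight QZ, dt: current time DT, ht: acquisition time HT,
    jt: acquisition interval JT, k: retention constant K.
    """
    # Assumption: XQ = QZ - (DT - HT) / (K * JT); after k acquisition
    # intervals an abnormal record (QZ = 2) decays to weight 1.
    return qz - (dt - ht) / (k * jt)
```

With K = 1000 and JT = 1, a record acquired 500 time units ago keeps weight 1.5 and is retained; after 1000 units it reaches 1.0 and becomes eligible for release.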
comparing the updated weight with the maximum update value, and when the updated weight is smaller than or equal to the maximum update value, deleting the data from the bottom of the database up to and including the interpretation data;
decompressing the compressed data packet, and caching decompressed data and weight values of the data to a database of an edge server;
when the updated weight is greater than the maximum update value, decompressing the compressed data packet, deleting the bottommost data in the database, acquiring the latest data of the corresponding time or time period, and storing it in the server;
It should be noted that when the updated weight is still greater than the maximum update value, weight values of 2 and 3 are present at the same time: the updated weight can only exceed the maximum update value when a weight value of 3 (abnormal data) exists, since otherwise the maximum update value would be greater than or equal to the stored weight value, while the weight values of the newly acquired compressed data packets are all 1. To satisfy both the continuous analysis of abnormal values and the provision of timely data, the bottommost data in the database is deleted, and the newly acquired compressed data packet is decompressed and cached.
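The decision logic of steps S4011 to S4013 can be sketched as follows — a minimal illustration under the assumption that the database is a list of (weight, acquisition_time, value) records sorted highest weight first, with illustrative names not taken from the patent:

```python
def refresh_cache(db, packets, max_cache):
    """Cache update of steps S4012/S4013 (illustrative sketch).

    db: list of (weight, acq_time, value) records sorted high-weight-first,
        so db[-1] is the bottom (lowest-priority) entry.
    packets: decompressed incoming records as (weight, acq_time, value).
    """
    n = len(packets)
    update_judgment = max_cache - len(db)      # step S4011
    if update_judgment >= n:                   # step S4012: room available
        db.extend(packets)
        return db
    # Step S4013: compare the n-th entry from the bottom (the
    # "interpretation data") with the strongest incoming packet.
    stored_weight = db[-n][0]
    max_update = max(p[0] for p in packets)
    if max_update >= stored_weight:
        del db[-n:]                            # release the bottom n entries
        db.extend(packets)
    # else: the interpretation data is abnormal data still under analysis;
    # its weight would be re-evaluated with the weight update formula.
    return db
```

Re-sorting the cache (step S3021) is assumed to run separately after each refresh.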
Example 2: in a second aspect, referring to fig. 2, the present invention further provides an intelligent sensor data caching system based on edge calculation, which includes a data processing module, a data transmission module, and a cache update module; the data processing module is used for acquiring original data, grouping the original data and calculating a grouping reference value of the grouping; the method is also used for analyzing the grouping reference value and processing the original data;
The data transmission module is used for uploading the original data to the edge node, compressing the processed data, outputting a compressed data packet and setting a weight value of the compressed data packet; the edge node is also used for carrying out data interaction;
the buffer updating module is used for decompressing the compressed data packet and buffering the decompressed data; and sequencing the cached data, calculating and analyzing the weight value, and updating the cache based on the analysis result.
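The three modules of the system can be sketched as a minimal Python skeleton; the class and method names are illustrative assumptions, not part of the claimed system:

```python
class DataProcessingModule:
    """Acquires raw data, groups it, and computes packet reference values."""
    def group(self, raw, interval):
        # Split time-ordered raw data into groups of `interval` items.
        return [raw[i:i + interval] for i in range(0, len(raw), interval)]

    def reference_value(self, group):
        # Packet reference value: maximum minus minimum within the group.
        return max(group) - min(group)

class DataTransmissionModule:
    """Packages processed groups and attaches a weight value."""
    def pack(self, group, weight):
        # Compression itself is omitted; only the packet shape is shown.
        return {"data": group, "weight": weight}

class CacheUpdateModule:
    """Caches decompressed records and keeps the cache sorted."""
    def __init__(self):
        self.db = []  # (weight, acquisition_time, value) records

    def cache(self, records):
        self.db.extend(records)
        # Weight descending, acquisition time ascending (step S3021).
        self.db.sort(key=lambda r: (-r[0], r[1]))
```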
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Claims (2)
1. The intelligent sensor data caching method based on edge calculation is characterized by comprising the following steps of:
Step S1, acquiring original data in a first time by using an intelligent sensor, and uploading the original data to an edge node; grouping the original data, analyzing and calculating the original data in the grouping, and outputting a grouping reference value based on a calculation result;
S2, calculating and analyzing the packet reference value, processing the original data based on the analysis result, compressing the processed data, outputting a compressed data packet, and setting a weight value for the compressed data packet;
Step S3, decompressing the compressed data packet in the edge node, and caching the decompressed data to the edge node; performing data interaction by using the edge nodes, and sorting the stored data based on the weight values;
Step S4, continuously receiving the compressed data packet, calculating and analyzing the weight value, and updating the cache based on the analysis result;
the step S1 comprises the following sub-steps:
step S101, acquiring original data in a first time by using an intelligent sensor, wherein the intelligent sensor can automatically compensate response time, and the original data comprises acquisition time;
Step S102, uploading original data to an edge node, wherein the edge node comprises an edge server and an edge gateway; the edge server can calculate the original data and buffer the processed data; the edge gateway is used for carrying out data interaction with the cloud;
Step S103, obtaining the total number of the original data in the first time, and marking the product obtained by multiplying the total number of the original data by a first dividing coefficient as a dividing interval;
Step S104, sorting the original data according to the acquisition time sequence, grouping the sorted original data according to the dividing interval, calculating the difference between the maximum value and the minimum value in the grouping, and marking the difference as a grouping reference value;
The step S2 comprises the following sub-steps:
step S2011, sorting the grouping reference values from left to right in an incremental manner to obtain a reference value sequence;
Step S2012, calculating a number sequence median of the reference value number sequence, dividing the reference value number sequence according to the number sequence median, marking the group positioned at the left side of the number sequence median as a sequence left group, and marking the group positioned at the right side of the number sequence median as a sequence right group;
Step S2013, marking the group corresponding to the group reference value in the left group of the sequence as a compressed data group; marking a packet corresponding to a packet reference value in the right packet of the sequence as an analysis data set;
The step S2 further comprises the following sub-steps:
Step S2021, calculating the difference between the acquisition times of the last and first data in the compressed data set, and marking the difference as the corresponding time period; calculating the mean value and variance of the original data in each compressed data set, and marking them as the stable mean and the stable variance respectively;
Step S2022, compressing the corresponding time period, the stable mean, the stable variance, and the largest and smallest original data in the compressed data set, setting the first weight value for the compressed data packet, and outputting the compressed data packet;
The step S2 further comprises the following sub-steps:
Step S2031, calculating the average value and variance of the raw data in all the compressed data sets, and marking the average value and variance as a reference average value and a reference variance respectively;
Step S2032, setting the value obtained by adding the product of the first analysis factor and the reference variance to the reference mean as the right end point of the reference interval, and setting the value obtained by subtracting that product from the reference mean as the left end point of the reference interval;
step S2033, judging the relationship between the original data in the analysis data set and the reference interval: when the original data are all located in the reference interval, marking the analysis data set as a compressed data set, and performing step S2021 and step S2022 on this compressed data set;
When the original data which is not located in the reference interval exists in the analysis data set, marking the original data which is located in the reference interval in the analysis data set as fluctuation data, and marking the original data which is not located in the reference interval as abnormal data;
Compressing the fluctuation data and the acquisition time of the fluctuation data, setting a second weight value for the compressed data packet, and outputting the compressed data packet;
compressing the abnormal data and the acquisition time of the abnormal data, setting a third weight value for the compressed data packet, and outputting the compressed data packet;
The step S3 includes the following sub-steps:
step S3011, receiving a compressed data packet, decompressing the compressed data packet, and caching decompressed data and weight values of the data to a database of an edge server;
Step S3012, performing data interaction with the cloud by using the edge gateway, and subtracting one from the weight value of the data subjected to the data interaction;
the step S3 further includes the following sub-steps:
step S3021, sorting the database data;
sorting the database data includes:
setting the third weight value as the maximum and the first weight value as the minimum;
sorting the data in the database in descending order of weight value;
when the weight values are the same, sorting in ascending order of acquisition time, earliest first;
the step S4 includes the following sub-steps:
Step S4011, continuously receiving the compressed data packets, obtaining the number of cached data items in the database, calculating the difference between the maximum cache capacity and the number of cached data items, and marking the difference as the update judgment value;
step S4012, comparing the update judgment value with the number of compressed data packets; when the update judgment value is greater than or equal to the number of compressed data packets, decompressing the compressed data packets, and caching the decompressed data and the weight values of the data to the database of the edge server;
step S4013, when the update judgment value is smaller than the number of compressed data packets, performing cache release and data caching;
The cache release in step S4013 includes the following steps:
marking the data item located, counting from the bottom of the database, at the position equal to the number of compressed data packets as the interpretation data;
acquiring the weight value of the interpretation data, and marking it as the stored weight value;
acquiring the maximum weight value in the compressed data packets, and marking it as the maximum update value;
comparing the maximum update value with the stored weight value;
when the maximum update value is greater than or equal to the stored weight value, deleting the data from the bottom of the database up to and including the interpretation data;
decompressing the compressed data packet, and caching decompressed data and weight values of the data to a database of an edge server;
when the maximum update value is smaller than the stored weight value, updating the weight of the interpretation data;
the weight update includes:
acquiring the current time and the acquisition interval of the intelligent sensor;
calculating to obtain an updated weight value by using a weight updating formula;
The weight update formula is configured as XQ = QZ − (DT − HT)/(K × JT); wherein XQ is the updated weight, QZ is the weight value of the interpretation data, DT is the current time, HT is the acquisition time of the interpretation data, JT is the acquisition interval, and K is a constant;
comparing the updated weight with the maximum update value, and when the updated weight is smaller than or equal to the maximum update value, deleting the data from the bottom of the database up to and including the interpretation data;
decompressing the compressed data packet, and caching the decompressed data and the weight values of the data to the database of the edge server;
when the updated weight is greater than the maximum update value, decompressing the compressed data packet, deleting the bottommost data in the database, acquiring the latest data of the corresponding time or time period, and storing it in the server.
2. A system applying the intelligent sensor data caching method based on edge calculation as claimed in claim 1, characterized by comprising a data processing module, a data transmission module and a cache update module; the data processing module is used for acquiring original data, grouping the original data and calculating a grouping reference value of each grouping; and is also used for analyzing the grouping reference value and processing the original data;
the data transmission module is used for uploading the original data to the edge node, compressing the processed data, outputting a compressed data packet and setting a weight value of the compressed data packet; the edge node is also used for carrying out data interaction;
the buffer updating module is used for decompressing the compressed data packet and buffering the decompressed data; and sequencing the cached data, calculating and analyzing the weight value, and updating the cache based on the analysis result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410534189.6A CN118118553B (en) | 2024-04-30 | 2024-04-30 | Intelligent sensor data caching method and system based on edge calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118118553A CN118118553A (en) | 2024-05-31 |
CN118118553B true CN118118553B (en) | 2024-08-20 |
Family
ID=91210877
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112534842A (en) * | 2018-08-07 | 2021-03-19 | 昕诺飞控股有限公司 | Compressive sensing system and method using edge nodes of a distributed computing network |
CN113328755A (en) * | 2021-05-11 | 2021-08-31 | 内蒙古工业大学 | Compressed data transmission method facing edge calculation |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2337257B1 (en) * | 2009-12-18 | 2012-09-05 | Canon Kabushiki Kaisha | Method and apparatus of sending encoded multimedia digital data taking into account sending deadlines |
CN107094142B (en) * | 2017-04-28 | 2020-11-27 | 电信科学技术研究院 | Method and device for decompressing and compressing uplink data |
CN117041939A (en) * | 2023-06-09 | 2023-11-10 | 南京航空航天大学 | Privacy protection active caching method based on federal learning in mobile edge calculation |
CN117097592B (en) * | 2023-10-20 | 2023-12-15 | 南京科控奇智能科技有限公司 | Edge computing gateway based on cloud computing |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |