
CN105577804A - Big data processing method and processing device - Google Patents

Big data processing method and processing device

Info

Publication number
CN105577804A
CN105577804A (application CN201511008580.XA)
Authority
CN
China
Prior art keywords
data
server
request
user side
pending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511008580.XA
Other languages
Chinese (zh)
Other versions
CN105577804B (en)
Inventor
郭浒生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Midea Intelligent Technologies Co Ltd
Original Assignee
Hefei Hualing Co Ltd
Midea Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Hualing Co Ltd, Midea Group Co Ltd filed Critical Hefei Hualing Co Ltd
Priority to CN201511008580.XA priority Critical patent/CN105577804B/en
Publication of CN105577804A publication Critical patent/CN105577804A/en
Application granted granted Critical
Publication of CN105577804B publication Critical patent/CN105577804B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a big data processing method and a processing device. The big data processing method comprises the following steps: sending a data request to a server, so that the server feeds back corresponding to-be-processed data according to the data request; receiving the to-be-processed data and parsing it to generate processed data; and returning the processed data to the server. Through the technical solution of the invention, the massive data stored in the server can be processed by the clients and the server itself does not need to process the massive data; therefore, the hardware resources of the server are saved and the concurrent processing capability of the server is improved. Meanwhile, the clients can process the massive data stored in the server on demand, which avoids the resource waste caused in the related art by the server processing all of the massive data itself.

Description

Big data processing method and processing device
Technical field
The present invention relates to the technical field of data processing, and in particular to a big data processing method and a big data processing device.
Background art
In the related art, a server usually adopts one of the following two methods when processing big data:
Method one: the data is parsed while the reported data is being received. Most manufacturers currently report serial-port command data that contains a large amount of information, and parsing this information consumes a certain amount of computing resources.
Method two: the program that receives the reported data and the program that processes it are separated. The receiving program simply receives and stores the reported data without doing any further processing, while a separate processing program parses the reported data according to rules and stores the parsed results for users to query.
These two methods have the following drawbacks:
1. The server must process the received data while it is still receiving data, which consumes computing resources; as a result, the number of reporting appliances (for example refrigerators) that can be handled concurrently decreases, and blocking easily occurs.
2. The processing program must handle a large amount of data; even with distributed computation this requires investing additional server hardware resources, and when the data volume is large the delay becomes long. In addition, according to the 80/20 rule, a large portion of the data in the system is never queried or used by anyone, so processing all of the data indiscriminately not only consumes server hardware resources but also wastes them.
Therefore, how to improve the concurrent processing capability of the server while saving server hardware resources has become an urgent technical problem.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art or the related art.
To this end, one object of the present invention is to propose a new big data processing scheme in which the mass data stored in a server is processed by the clients, so that the server itself does not need to process the mass data. This saves server hardware resources and improves the concurrent processing capability of the server, while the clients process the mass data stored in the server on demand, avoiding the waste of resources caused in the related art by the server processing all of the mass data itself.
To achieve the above object, an embodiment according to a first aspect of the present invention proposes a big data processing method applicable to a client (user side), comprising: sending a data request to a server, so that the server feeds back corresponding pending data according to the data request; receiving the pending data and parsing the pending data to generate processed data; and returning the processed data to the server.
In the big data processing method according to an embodiment of the present invention, the client sends a data request to the server, receives the pending data fed back by the server, parses the pending data to generate processed data, and returns the processed data to the server; the server stores the processed data as required and further performs data updates according to it. In this way the mass data stored in the server is processed by the clients and the server itself does not need to process it, so server hardware resources are saved and the concurrent processing capability of the server is improved. At the same time, the clients process the mass data stored in the server on demand, which avoids the waste of resources caused in the related art by the server processing all of the mass data itself.
The big data processing method according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the step of sending a data request to the server specifically comprises: sending the data request to the server when it is detected that the client has been started; and/or sending the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or sending the data request to the server when a query operation by the user is detected.
In the big data processing method according to an embodiment of the present invention, the client can send data requests to the server in real time as needed. Specifically, the client may be configured to send a data request to the server when the client is started, when the parsing thread installed on the client for parsing data is triggered, or when a query operation by the user is detected, for example a click on a query button in the client interface. The conditions under which the client sends data requests to the server may be set by the manufacturer of the client or customized by the user.
According to one embodiment of the present invention, before the pending data is parsed, the method further comprises: when a query operation by the user is detected, sending the data request to the server and receiving the pending data fed back by the server according to the data request; judging whether all data in the pending data has been parsed; when it is judged that all data in the pending data has been parsed, displaying all of the pending data for the user to view; and when it is judged that all or part of the pending data has not been parsed, performing the step of parsing the pending data.
In the big data processing method according to an embodiment of the present invention, when the user queries certain data through the client, that data stored in the server may already have been parsed by a client. By judging whether all of the pending data has been parsed, displaying all of the pending data when it has been parsed, and parsing the pending data only when all or part of it has not been parsed, repeated parsing of already-parsed data is avoided, the workload of the client is reduced, and the efficiency of presenting query results to the user is improved.
An embodiment according to a second aspect of the present invention proposes a big data processing method applicable to a server, comprising: receiving the data requests sent by the clients; feeding back, according to each data request, the corresponding pending data to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server; receiving the processed data returned by each client; and performing a data update according to the processed data.
In the big data processing method according to an embodiment of the present invention, the server receives the data requests sent by the clients, feeds back the corresponding pending data to each client according to the data requests, receives the processed data returned by each client, and performs data updates according to the processed data. The mass data stored in the server is thereby distributed to the clients and processed by them on demand, so the server itself does not need to process the mass data; server hardware resources are saved, the concurrent processing capability of the server is improved, and the waste of resources caused in the related art by the server processing all of the mass data itself is avoided.
The big data processing method according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the step of feeding back the corresponding pending data to each client according to the data request specifically comprises: feeding back the corresponding pending data to each client according to a preset feedback strategy, wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
In the big data processing method according to an embodiment of the present invention, when receiving a data request from a client, the server can feed back the corresponding pending data according to a preset feedback strategy. For example, the pending data in the server may be divided in advance into personal data, active data and other data. When the server receives a request from a client, it first feeds back the personal data corresponding to that client for parsing; it then randomly draws a certain number of items (for example 40) from the active data and feeds them back to the client for parsing; and when the personal data and active data have all been parsed or no such data exists, it randomly draws a certain number of items (for example 40) from the other data and feeds them back to the client for parsing. The preset feedback strategy includes but is not limited to the data category and data quantity to be fed back, and the data category may be distinguished along multiple dimensions.
According to one embodiment of the present invention, before the data requests sent by the clients are received, the method further comprises: receiving the running-state data uploaded by household appliances; and storing the running-state data as the pending data.
In the big data processing method according to an embodiment of the present invention, the server can receive the running-state data uploaded by a massive number of household appliances and merely store it as pending data for the clients to parse, without parsing it itself, so that the data of a massive number of household appliances is processed by a massive number of clients.
An embodiment according to a third aspect of the present invention proposes a big data processing device applicable to a client, comprising: a first sending unit for sending a data request to a server, so that the server feeds back corresponding pending data according to the data request; a receiving unit for receiving the pending data; a parsing unit for parsing the pending data to generate processed data; and a second sending unit for returning the processed data to the server.
In the big data processing device according to an embodiment of the present invention, the client sends a data request to the server, receives the pending data fed back by the server, parses it to generate processed data, and returns the processed data to the server; the server stores the processed data as required and further performs data updates according to it. In this way the mass data stored in the server is processed by the clients and the server itself does not need to process it, so server hardware resources are saved and the concurrent processing capability of the server is improved, while the clients process the mass data stored in the server on demand, avoiding the waste of resources caused in the related art by the server processing all of the mass data itself.
The big data processing device according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the first sending unit is specifically configured to: send the data request to the server when it is detected that the client has been started; and/or send the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or send the data request to the server when a query operation by the user is detected.
In the big data processing device according to an embodiment of the present invention, the client can send data requests to the server in real time as needed, for example when the client is started, when the parsing thread installed on the client for parsing data is triggered, or when a query operation by the user is detected, such as a click on a query button in the client interface. The conditions under which the client sends data requests to the server may be set by the manufacturer of the client or customized by the user.
According to one embodiment of the present invention, the device further comprises: a judging unit for judging whether all data in the pending data has been parsed when the first sending unit sends the data request to the server upon detecting a query operation by the user and the receiving unit receives the pending data fed back by the server according to the data request; and a display unit for displaying all of the pending data for the user to view when the judging unit judges that all data in the pending data has been parsed; wherein the parsing unit is specifically configured to parse the pending data when the judging unit judges that all or part of the pending data has not been parsed.
In the big data processing device according to an embodiment of the present invention, when the user queries certain data through the client, that data stored in the server may already have been parsed by a client. By judging whether all of the pending data has been parsed, displaying all of the pending data when it has been parsed, and parsing the pending data only when all or part of it has not been parsed, repeated parsing of already-parsed data is avoided, the workload of the client is reduced, and the efficiency of presenting query results to the user is improved.
An embodiment according to a fourth aspect of the present invention further proposes a big data processing device applicable to a server, comprising: a first receiving unit for receiving the data requests sent by the clients; a sending unit for feeding back, according to each data request, the corresponding pending data to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server; a second receiving unit for receiving the processed data returned by each client; and an updating unit for performing a data update according to the processed data.
In the big data processing device according to an embodiment of the present invention, the server receives the data requests sent by the clients, feeds back the corresponding pending data to each client according to the data requests, receives the processed data returned by each client, and performs data updates according to the processed data. The mass data stored in the server is thereby distributed to the clients and processed by them on demand, so the server itself does not need to process the mass data; server hardware resources are saved, the concurrent processing capability of the server is improved, and the waste of resources caused in the related art by the server processing all of the mass data itself is avoided.
The big data processing device according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the sending unit is specifically configured to: feed back the corresponding pending data to each client according to a preset feedback strategy, wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
In the big data processing device according to an embodiment of the present invention, when receiving a data request from a client, the server can feed back the corresponding pending data according to a preset feedback strategy. For example, the pending data in the server may be divided in advance into personal data, active data and other data; the server first feeds back the personal data corresponding to the requesting client for parsing, then randomly draws a certain number of items (for example 40) from the active data, and, when the personal data and active data have all been parsed or no such data exists, randomly draws a certain number of items (for example 40) from the other data and feeds them back to the client for parsing. The preset feedback strategy includes but is not limited to the data category and data quantity to be fed back, and the data category may be distinguished along multiple dimensions.
According to one embodiment of the present invention, the device further comprises: a third receiving unit for receiving the running-state data uploaded by household appliances before the first receiving unit receives the data requests sent by the clients; and a storage unit for storing the running-state data as the pending data.
In the big data processing device according to an embodiment of the present invention, the server can receive the running-state data uploaded by a massive number of household appliances and merely store it as pending data for the clients to parse, without parsing it itself, so that the data of a massive number of household appliances is processed by a massive number of clients.
An embodiment according to a fifth aspect of the present invention further proposes a client comprising the big data processing device according to any one of the above embodiments.
An embodiment according to a sixth aspect of the present invention further proposes a server comprising the big data processing device according to any one of the above embodiments.
An embodiment according to a seventh aspect of the present invention further proposes a system comprising the client described in the above embodiment, the server described in the above embodiment and a household appliance, wherein the server is configured to receive and store the running-state data uploaded by the household appliance.
Additional aspects and advantages of the present invention will be set forth in part in the following description; in part they will become apparent from the following description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic flow diagram of a big data processing method according to an embodiment of the present invention;
Fig. 2 shows a schematic flow diagram of a big data processing method according to another embodiment of the present invention;
Fig. 3 shows a schematic block diagram of a big data processing device according to an embodiment of the present invention;
Fig. 4 shows a schematic block diagram of a big data processing device according to another embodiment of the present invention;
Fig. 5 shows a schematic block diagram of a client according to an embodiment of the present invention;
Fig. 6 shows a schematic block diagram of a server according to an embodiment of the present invention;
Fig. 7 shows a schematic block diagram of a system according to an embodiment of the present invention;
Fig. 8 shows a data processing flowchart according to an embodiment of the present invention;
Fig. 9 shows a flowchart of the interaction between a browser and the server according to an embodiment of the present invention;
Fig. 10 shows a flowchart of the server allocation program according to an embodiment of the present invention.
Detailed description of the embodiments
In order to understand the above objects, features and advantages of the present invention more clearly, the present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention may also be implemented in other ways than those described here, and therefore the protection scope of the present invention is not limited to the specific embodiments disclosed below.
Fig. 1 shows a schematic flow diagram of a big data processing method according to an embodiment of the present invention.
As shown in Fig. 1, the big data processing method according to an embodiment of the present invention is applicable to a client and comprises:
Step 102: a data request is sent to the server, so that the server feeds back the corresponding pending data according to the data request;
Step 104: the pending data is received and parsed to generate processed data;
Step 106: the processed data is returned to the server.
The client sends a data request to the server, receives the pending data fed back by the server, parses it to generate processed data, and returns the processed data to the server; the server stores the processed data as required and further performs data updates according to it. In this way the mass data stored in the server is processed by the clients and the server itself does not need to process it, so server hardware resources are saved and the concurrent processing capability of the server is improved, while the clients process the mass data stored in the server on demand, avoiding the waste of resources caused in the related art by the server processing all of the mass data itself.
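By way of non-limiting illustration, the following TypeScript sketch (TypeScript is chosen because the embodiment described later relies on a JS parsing thread) requests pending records from a hypothetical /pending endpoint, parses each record locally, and posts the results back to a hypothetical /results endpoint. The endpoint paths, the record shape and the placeholder parsing logic are assumptions made for illustration only and are not prescribed by this disclosure.

```typescript
interface PendingRecord {
  id: string;   // unique id assigned by the server when the raw record is stored
  raw: string;  // unparsed payload, e.g. a comma-separated hex serial-port instruction
}

interface ParsedRecord {
  id: string;
  fields: Record<string, number>;  // parsed key/value pairs
}

// Step 102: send a data request; the server feeds back the corresponding pending data.
async function requestPendingData(serverUrl: string): Promise<PendingRecord[]> {
  const res = await fetch(`${serverUrl}/pending`);
  return res.ok ? ((await res.json()) as PendingRecord[]) : [];
}

// Step 104: parse a pending record into processed data (placeholder logic only).
function parseRecord(rec: PendingRecord): ParsedRecord {
  const bytes = rec.raw.split(",").map((b) => parseInt(b.trim(), 16));
  return { id: rec.id, fields: { byteCount: bytes.length } };
}

// Step 106: return the processed data to the server.
async function returnProcessedData(serverUrl: string, parsed: ParsedRecord[]): Promise<void> {
  await fetch(`${serverUrl}/results`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(parsed),
  });
}

// One request/parse/return cycle covering steps 102-106.
async function processOnce(serverUrl: string): Promise<void> {
  const pending = await requestPendingData(serverUrl);
  if (pending.length > 0) {
    await returnProcessedData(serverUrl, pending.map(parseRecord));
  }
}
```

In a real client the parsing step would implement the full serial-instruction decoding described in the embodiment below rather than the placeholder shown here.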
The big data processing method according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the step of sending a data request to the server specifically comprises: sending the data request to the server when it is detected that the client has been started; and/or sending the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or sending the data request to the server when a query operation by the user is detected.
The client can send data requests to the server in real time as needed. Specifically, the client may be configured to send a data request to the server when the client is started, when the parsing thread installed on the client for parsing data is triggered, or when a query operation by the user is detected, for example a click on a query button in the client interface. The conditions under which the client sends data requests to the server may be set by the manufacturer of the client or customized by the user.
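The fragment below, which reuses the processOnce helper from the previous sketch, shows one possible way to wire the three trigger conditions in a browser-based client; the server address, the timer interval and the button element id are hypothetical.

```typescript
// Wiring the three triggers to processOnce() from the sketch above.
const SERVER_URL = "https://example.invalid/api";  // placeholder server address

// Trigger 1: the client has been started (page load in a browser-based client).
window.addEventListener("load", () => { void processOnce(SERVER_URL); });

// Trigger 2: the target parsing thread fires (approximated here by a periodic timer).
setInterval(() => { void processOnce(SERVER_URL); }, 60_000);

// Trigger 3: a query operation by the user is detected (hypothetical button id).
document.getElementById("query-button")?.addEventListener("click", () => {
  void processOnce(SERVER_URL);
});
```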
According to one embodiment of the present invention, before the pending data is parsed, the method further comprises: when a query operation by the user is detected, sending the data request to the server and receiving the pending data fed back by the server according to the data request; judging whether all data in the pending data has been parsed; when it is judged that all data in the pending data has been parsed, displaying all of the pending data for the user to view; and when it is judged that all or part of the pending data has not been parsed, performing the step of parsing the pending data.
When the user queries certain data through the client, that data stored in the server may already have been parsed by a client. By judging whether all of the pending data has been parsed, displaying all of the pending data when it has been parsed, and parsing the pending data only when all or part of it has not been parsed, repeated parsing of already-parsed data is avoided, the workload of the client is reduced, and the efficiency of presenting query results to the user is improved.
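Continuing the earlier client sketch, the query path could perform this check as follows; the optional parsed field on each record and the display helper are assumptions used only to illustrate the "parse only what is still unparsed" behaviour.

```typescript
// Query path: display records that already carry a parsed result, and parse only the rest.
interface QueryRecord extends PendingRecord {
  parsed?: ParsedRecord;  // present if some client has already parsed this record
}

function displayToUser(records: ParsedRecord[]): void {
  console.table(records.map((r) => r.fields));  // placeholder rendering
}

async function handleQueryResult(serverUrl: string, result: QueryRecord[]): Promise<void> {
  const alreadyParsed = result.filter((r) => r.parsed !== undefined).map((r) => r.parsed!);
  const unparsed = result.filter((r) => r.parsed === undefined);

  if (unparsed.length === 0) {
    displayToUser(alreadyParsed);                   // everything was parsed before: just show it
    return;
  }
  const newlyParsed = unparsed.map(parseRecord);    // parse only what is still unparsed
  displayToUser([...alreadyParsed, ...newlyParsed]);
  await returnProcessedData(serverUrl, newlyParsed);  // return the new results to the server
}
```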
Fig. 2 shows a schematic flow diagram of a big data processing method according to another embodiment of the present invention.
As shown in Fig. 2, the big data processing method according to another embodiment of the present invention is applicable to a server and comprises:
Step 202: the data requests sent by the clients are received;
Step 204: according to each data request, the corresponding pending data is fed back to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server;
Step 206: the processed data returned by each client is received;
Step 208: a data update is performed according to the processed data.
The server receives the data requests sent by the clients, feeds back the corresponding pending data to each client according to the data requests, receives the processed data returned by each client, and performs data updates according to the processed data. The mass data stored in the server is thereby distributed to the clients and processed by them on demand, so the server itself does not need to process the mass data; server hardware resources are saved, the concurrent processing capability of the server is improved, and the waste of resources caused in the related art by the server processing all of the mass data itself is avoided.
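As a non-limiting, framework-agnostic sketch of steps 202-208, the server side needs two handlers: one that allocates pending records to a requesting client and one that applies the returned results as a data update. The in-memory maps below merely stand in for the database, and allocatePending() embodies the feedback strategy sketched further below.

```typescript
// Server side of steps 202-208, with in-memory maps standing in for the database.
const pendingStore = new Map<string, PendingRecord>();  // id -> unparsed record
const parsedStore = new Map<string, ParsedRecord>();    // id -> parsed result

// Steps 202/204: receive a client's data request and feed back pending data;
// allocatePending() is defined in the feedback-strategy sketch below.
function handleDataRequest(clientId: string): PendingRecord[] {
  return allocatePending(clientId);
}

// Steps 206/208: receive the processed data returned by a client and perform the update.
function handleProcessedData(results: ParsedRecord[]): void {
  for (const r of results) {
    if (pendingStore.delete(r.id)) {
      parsedStore.set(r.id, r);  // data update: the raw record now has a parsed result
    }
  }
}
```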
The big data processing method according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the step of feeding back the corresponding pending data to each client according to the data request specifically comprises: feeding back the corresponding pending data to each client according to a preset feedback strategy, wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
When receiving a data request from a client, the server can feed back the corresponding pending data according to a preset feedback strategy. For example, the pending data in the server may be divided in advance into personal data, active data and other data. When the server receives a request from a client, it first feeds back the personal data corresponding to that client for parsing; it then randomly draws a certain number of items (for example 40) from the active data and feeds them back to the client for parsing; and when the personal data and active data have all been parsed or no such data exists, it randomly draws a certain number of items (for example 40) from the other data and feeds them back to the client for parsing. The preset feedback strategy includes but is not limited to the data category and data quantity to be fed back, and the data category may be distinguished along multiple dimensions.
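One way such a preset feedback strategy could look in code is sketched below, continuing the server sketch above: personal data first, then a random batch (40 items, matching the example) from the active data, then a random batch from the other data, otherwise an empty result. The category tags and helper names are assumptions; the Fisher-Yates shuffle merely stands in for the "hash method" random draw mentioned in the detailed embodiment.

```typescript
// Preset feedback strategy: personal data first, then up to 40 random items of
// active data, then up to 40 random items of other data, otherwise an empty result.
type Category = "personal" | "active" | "other";

interface CategorizedRecord extends PendingRecord {
  category: Category;
  ownerId?: string;  // set for personal data
}

const BATCH_SIZE = 40;

function allocatePending(clientId: string): PendingRecord[] {
  // Assume stored records carry category tags (see the ingest sketch below).
  const all = [...pendingStore.values()] as CategorizedRecord[];

  const personal = all.filter((r) => r.category === "personal" && r.ownerId === clientId);
  if (personal.length > 0) return personal;

  const active = all.filter((r) => r.category === "active");
  if (active.length > 0) return randomDraw(active, BATCH_SIZE);

  const other = all.filter((r) => r.category === "other");
  if (other.length > 0) return randomDraw(other, BATCH_SIZE);

  return [];  // nothing left to allocate: empty result
}

// Fisher-Yates shuffle followed by a prefix of length n.
function randomDraw<T>(items: T[], n: number): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a.slice(0, n);
}
```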
According to one embodiment of the present invention, before the data requests sent by the clients are received, the method further comprises: receiving the running-state data uploaded by household appliances; and storing the running-state data as the pending data.
The server can receive the running-state data uploaded by a massive number of household appliances and merely store it as pending data for the clients to parse, without parsing it itself, so that the data of a massive number of household appliances is processed by a massive number of clients.
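A sketch of this ingest step, again continuing the server sketch above: the server only stores the uploaded running-state frame together with a unique id and does no parsing. The id scheme, default category and owner field are illustrative assumptions.

```typescript
// Ingest path: store the uploaded running-state frame as pending data, attach a
// unique id (cf. S804 in the embodiment below) and do no parsing on the server.
let nextId = 0;

function storeRunningStateData(applianceId: string, rawFrame: string): void {
  const record: CategorizedRecord = {
    id: `${applianceId}-${nextId++}`,  // illustrative id scheme
    raw: rawFrame,
    category: "other",                 // classification into personal/active is done elsewhere
    ownerId: applianceId,
  };
  pendingStore.set(record.id, record);
}
```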
Fig. 3 shows a schematic block diagram of a big data processing device according to an embodiment of the present invention.
As shown in Fig. 3, the big data processing device 300 according to an embodiment of the present invention is applicable to a client and comprises: a first sending unit 302, a receiving unit 304, a parsing unit 306 and a second sending unit 308.
The first sending unit 302 is configured to send a data request to the server, so that the server feeds back the corresponding pending data according to the data request; the receiving unit 304 is configured to receive the pending data; the parsing unit 306 is configured to parse the pending data to generate processed data; and the second sending unit 308 is configured to return the processed data to the server.
The client sends a data request to the server, receives the pending data fed back by the server, parses it to generate processed data, and returns the processed data to the server; the server stores the processed data as required and further performs data updates according to it. In this way the mass data stored in the server is processed by the clients and the server itself does not need to process it, so server hardware resources are saved and the concurrent processing capability of the server is improved, while the clients process the mass data stored in the server on demand, avoiding the waste of resources caused in the related art by the server processing all of the mass data itself.
The big data processing device 300 according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the first sending unit 302 is specifically configured to: send the data request to the server when it is detected that the client has been started; and/or send the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or send the data request to the server when a query operation by the user is detected.
The client can send data requests to the server in real time as needed, for example when the client is started, when the parsing thread installed on the client for parsing data is triggered, or when a query operation by the user is detected, such as a click on a query button in the client interface. The conditions under which the client sends data requests to the server may be set by the manufacturer of the client or customized by the user.
According to one embodiment of the present invention, the device further comprises: a judging unit 310 for judging whether all data in the pending data has been parsed when the first sending unit 302 sends the data request to the server upon detecting a query operation by the user and the receiving unit 304 receives the pending data fed back by the server according to the data request; and a display unit 312 for displaying all of the pending data for the user to view when the judging unit 310 judges that all data in the pending data has been parsed; wherein the parsing unit 306 is specifically configured to parse the pending data when the judging unit 310 judges that all or part of the pending data has not been parsed.
When the user queries certain data through the client, that data stored in the server may already have been parsed by a client. By judging whether all of the pending data has been parsed, displaying all of the pending data when it has been parsed, and parsing the pending data only when all or part of it has not been parsed, repeated parsing of already-parsed data is avoided, the workload of the client is reduced, and the efficiency of presenting query results to the user is improved.
Fig. 4 shows a schematic block diagram of a big data processing device according to another embodiment of the present invention.
As shown in Fig. 4, the big data processing device 400 according to another embodiment of the present invention is applicable to a server and comprises: a first receiving unit 402, a sending unit 404, a second receiving unit 406 and an updating unit 408.
The first receiving unit 402 is configured to receive the data requests sent by the clients; the sending unit 404 is configured to feed back, according to each data request, the corresponding pending data to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server; the second receiving unit 406 is configured to receive the processed data returned by each client; and the updating unit 408 is configured to perform a data update according to the processed data.
The server receives the data requests sent by the clients, feeds back the corresponding pending data to each client according to the data requests, receives the processed data returned by each client, and performs data updates according to the processed data. The mass data stored in the server is thereby distributed to the clients and processed by them on demand, so the server itself does not need to process the mass data; server hardware resources are saved, the concurrent processing capability of the server is improved, and the waste of resources caused in the related art by the server processing all of the mass data itself is avoided.
The big data processing device 400 according to the above embodiment of the present invention may further have the following technical features:
According to one embodiment of the present invention, the sending unit 404 is specifically configured to: feed back the corresponding pending data to each client according to a preset feedback strategy, wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
When receiving a data request from a client, the server can feed back the corresponding pending data according to a preset feedback strategy, for example by dividing the pending data in advance into personal data, active data and other data, feeding back the personal data corresponding to the requesting client first, then randomly drawing a certain number of items (for example 40) from the active data, and, when the personal data and active data have all been parsed or no such data exists, randomly drawing a certain number of items (for example 40) from the other data and feeding them back to the client for parsing. The preset feedback strategy includes but is not limited to the data category and data quantity to be fed back, and the data category may be distinguished along multiple dimensions.
According to one embodiment of the present invention, the device further comprises: a third receiving unit 410 for receiving the running-state data uploaded by household appliances before the first receiving unit receives the data requests sent by the clients; and a storage unit 412 for storing the running-state data as the pending data.
The server can receive the running-state data uploaded by a massive number of household appliances and merely store it as pending data for the clients to parse, without parsing it itself, so that the data of a massive number of household appliances is processed by a massive number of clients.
Fig. 5 shows a schematic block diagram of a client according to an embodiment of the present invention.
As shown in Fig. 5, the client 500 according to an embodiment of the present invention comprises the big data processing device 300 shown in Fig. 3.
Fig. 6 shows a schematic block diagram of a server according to an embodiment of the present invention.
As shown in Fig. 6, the server 600 according to an embodiment of the present invention comprises the big data processing device 400 shown in Fig. 4.
Fig. 7 shows a schematic block diagram of a system according to an embodiment of the present invention.
As shown in Fig. 7, the system 700 according to an embodiment of the present invention comprises the client 500 shown in Fig. 5, the server 600 shown in Fig. 6 and a household appliance 702, wherein the server 600 is configured to receive and store the running-state data uploaded by the household appliance 702.
The technical solution of the present invention is further described below in conjunction with Fig. 8 to Fig. 10.
In this embodiment a refrigerator is taken as the household appliance. The refrigerator reports its running data periodically; the server receives the reported data and directly stores the serial-port message data without any processing. When the user opens the system through a browser, the user may choose to download a query accelerator program; the query accelerator starts with the operating system and, once started, opens a parsing thread such as a JS parsing thread. If the user does not download the query accelerator, the system starts a JS parsing thread through the browser instead. The JS parsing thread requests data messages from the server; after receiving the request, the server returns unprocessed message data to the JS parsing thread according to the allocation rules, and the JS parsing thread then returns the processed data to the server. If the user performs a query operation, the JS parsing thread stops its current work and instead parses the unparsed data in the current query result; after parsing is complete, the result is displayed to the user and also returned to the server.
Specifically, as shown in Fig. 8, the data processing system comprises a browser, a query server, a database, a reporting server and a refrigerator, where the query server, the database and the reporting server may be integrated into the same server. The data processing flow specifically comprises:
S802: the refrigerator reports its running data to the reporting server. For example, the data reported by the refrigerator is a hexadecimal serial-port instruction; a complete refrigerator running-state record looks like "aa,2f,ca,e5,00,00,43,00,00,04,01,01,03,10,05,00,00,00,00,00,28,28,28,28,9b,28,64e,6e,6e,6e,64,00,00,00,00,00,00,00,02,01,00,00,00,00,00,00,db". This instruction contains more than 100 items of information such as temperature, humidity and load switches. Assuming 500,000 refrigerators each report once every 5 minutes, the pressure on the server would be huge; therefore, the server-side program in this embodiment avoids performing the parsing computation (a generic sketch of such a frame parser is given after this flow).
S804: the reporting server stores the reported serial-port instruction directly in the database and assigns a unique id to the record.
S806: when the user opens the query system through a browser, the user may choose to download the query accelerator program.
S808: the query accelerator starts with the operating system and, once started, opens a JS parsing thread; if the user has not downloaded the query accelerator, the system starts a JS parsing thread through the browser instead.
S810: after starting, the JS parsing thread requests data from the query server.
S812: the query server queries the database.
S814: the JS parsing thread parses the received data.
S816: after parsing is complete, the parsed result is submitted to the query server for storage.
S818: the database is updated.
S820: the user queries data.
S822: the query condition is submitted.
S824: the database is queried and the query result is returned.
S826: if the query result contains unparsed data, the JS parsing thread stops its current work and preferentially parses the data the user needs.
S828: the parsed data is displayed to the user; if all of the data has already been parsed, it is displayed directly, that is, the query result is shown.
S830: all of the parsed results are returned to the query server in order to update the database.
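The byte layout of the serial-port frame shown in S802 is not disclosed here, so the sketch below only illustrates the generic shape a frame parser in the JS parsing thread could take: split the comma-separated hex bytes, validate the frame, and map byte offsets to named fields. The offsets and field names are invented for illustration and do not reflect the actual protocol.

```typescript
// Generic shape of a parser for the comma-separated hex frame from S802; the
// byte offsets and field names below are invented, not the real protocol layout.
function parseRefrigeratorFrame(raw: string): Record<string, number> | null {
  const bytes = raw.split(",").map((b) => parseInt(b.trim(), 16));
  if (bytes.length < 10 || bytes.some(Number.isNaN)) {
    return null;  // malformed frame
  }
  return {
    // Hypothetical positions -- the real frame carries 100+ items.
    refrigeratingChamberTemperature: bytes[6],
    freezingChamberTemperature: bytes[7],
    compressorGear: bytes[9],
  };
}
```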
As shown in Fig. 9, the interaction between the browser and the server specifically comprises:
1. After the user opens the query system, the user is prompted whether to download the query accelerator program. If the user chooses not to download it, the browser starts a JS parsing thread by default; this thread actively requests data from the server, the server allocation program returns data to be parsed, and after receiving the data the JS parsing thread performs the parsing work and returns the result to the server. An example of a parsed result is {"refrigerating chamber temperature": 2, "freezing chamber temperature": -15, "compressor gear": 2, "refrigerating chamber Fahrenheit gear": 3}; the parsed result currently contains more than 100 items and can be extended later (a typed sketch of this result object follows this list).
2. If the user has downloaded the query accelerator, it starts with the computer, requests data from the server, and performs the parsing work continuously, occupying only a small amount of computer resources so that the user experience is not affected.
3. If the user has submitted a query and the returned query result contains unparsed data, the JS parsing thread immediately stops its current parsing work and preferentially parses the unparsed data in the query result; once that parsing is complete, it resumes its previous work.
As shown in Fig. 10, the specific work of the server allocation program comprises:
1) When a client (i.e. the user side) requests data to parse, the personal data of that client is queried first in the data pool (i.e. the database); if personal data exists, the pending result data is returned to the client;
2) If no personal data exists for the client in step 1), 40 pending items are drawn at random from the "active data" by a hash method and returned to the client;
3) If there is still no data in step 2), 40 pending items are drawn at random from the "other data" by a hash method and returned to the client;
4) If there is still no data in step 3), an empty result is returned to the client.
The "active data" above refers to data classified according to the activity of user queries; for example, the data of users who have made queries within the last 30 days is classified as "active data".
The "other data" above refers to all data other than the "personal data" and the "active data".
In the above embodiment the client is implemented with HTML5 and JS technology, but the same client functionality can also be implemented with Python, Lua, C++, C# and so on. The query accelerator may start automatically with the operating system, be started manually, or follow a configured start-up rule. What is parsed is a hexadecimal serial-port instruction, but it may also be another form of machine instruction.
The server allocation program divides the data into "personal data", "active data" and "other data"; the data may also be classified along other dimensions, for example by date or by region.
Through the above embodiment, the data produced by a massive number of devices is processed by a massive number of clients, which saves server hardware resources, improves the concurrent processing capability of the server, and at the same time allows the client to obtain query results from the mass data in real time.
The technical solution of the present invention has been described above with reference to the accompanying drawings. The present invention proposes a new big data processing scheme in which the mass data stored in a server is processed by the clients, so that the server itself does not need to process the mass data; this saves server hardware resources and improves the concurrent processing capability of the server, while the clients process the mass data stored in the server on demand, avoiding the waste of resources caused in the related art by the server processing all of the mass data itself.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (12)

1. A big data processing method, applicable to a client (user side), characterized by comprising:
sending a data request to a server, so that the server feeds back corresponding pending data according to the data request;
receiving the pending data and parsing the pending data to generate processed data; and
returning the processed data to the server.
2. The big data processing method according to claim 1, characterized in that the step of sending a data request to the server specifically comprises:
sending the data request to the server when it is detected that the client has been started; and/or
sending the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or
sending the data request to the server when a query operation by the user is detected.
3. The big data processing method according to claim 1 or 2, characterized in that, before the pending data is parsed, the method further comprises:
when a query operation by the user is detected, sending the data request to the server and receiving the pending data fed back by the server according to the data request;
judging whether all data in the pending data has been parsed;
when it is judged that all data in the pending data has been parsed, displaying all data in the pending data for the user to view; and
when it is judged that all or part of the data in the pending data has not been parsed, performing the step of parsing the pending data.
4. A big data processing method, applicable to a server, characterized by comprising:
receiving the data requests sent by the clients;
feeding back, according to each data request, the corresponding pending data to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server;
receiving the processed data returned by each client; and
performing a data update according to the processed data.
5. The big data processing method according to claim 4, characterized in that the step of feeding back the corresponding pending data to each client according to the data request specifically comprises:
feeding back the corresponding pending data to each client according to a preset feedback strategy;
wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
6. The big data processing method according to claim 4 or 5, characterized in that, before the data requests sent by the clients are received, the method further comprises:
receiving the running-state data uploaded by a household appliance; and
storing the running-state data as the pending data.
7. A big data processing device, applicable to a client, characterized by comprising:
a first sending unit for sending a data request to a server, so that the server feeds back corresponding pending data according to the data request;
a receiving unit for receiving the pending data;
a parsing unit for parsing the pending data to generate processed data; and
a second sending unit for returning the processed data to the server.
8. The big data processing device according to claim 7, characterized in that the first sending unit is specifically configured to:
send the data request to the server when it is detected that the client has been started; and/or
send the data request to the server when it is detected that a target parsing thread in the client has been triggered; and/or
send the data request to the server when a query operation by the user is detected.
9. The big data processing device according to claim 7 or 8, characterized by further comprising:
a judging unit for judging whether all data in the pending data has been parsed when the first sending unit sends the data request to the server upon detecting a query operation by the user and the receiving unit receives the pending data fed back by the server according to the data request;
a display unit for displaying all data in the pending data for the user to view when the judging unit judges that all data in the pending data has been parsed;
wherein the parsing unit is specifically configured to parse the pending data when the judging unit judges that all or part of the data in the pending data has not been parsed.
10. A big data processing device, applicable to a server, characterized by comprising:
a first receiving unit for receiving the data requests sent by the clients;
a sending unit for feeding back, according to each data request, the corresponding pending data to each client, so that each client parses the received pending data to obtain processed data and returns the processed data to the server;
a second receiving unit for receiving the processed data returned by each client; and
an updating unit for performing a data update according to the processed data.
11. The big data processing device according to claim 10, characterized in that the sending unit is specifically configured to:
feed back the corresponding pending data to each client according to a preset feedback strategy;
wherein the preset feedback strategy comprises the data category to be fed back and/or the data quantity to be fed back.
12. The big data processing device according to claim 10 or 11, characterized by further comprising:
a third receiving unit for receiving the running-state data uploaded by a household appliance before the first receiving unit receives the data requests sent by the clients; and
a storage unit for storing the running-state data as the pending data.
CN201511008580.XA 2015-12-25 2015-12-25 Big data processing method and device Active CN105577804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511008580.XA CN105577804B (en) 2015-12-25 2015-12-25 Big data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511008580.XA CN105577804B (en) 2015-12-25 2015-12-25 Big data processing method and device

Publications (2)

Publication Number Publication Date
CN105577804A true CN105577804A (en) 2016-05-11
CN105577804B CN105577804B (en) 2019-07-09

Family

ID=55887448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511008580.XA Active CN105577804B (en) 2015-12-25 2015-12-25 Big data processing method and device

Country Status (1)

Country Link
CN (1) CN105577804B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070226226A1 (en) * 2006-03-23 2007-09-27 Elta Systems Ltd. Method and system for distributing processing of computerized tasks
CN101127784A (en) * 2007-09-29 2008-02-20 网秦无限(北京)科技有限公司 Method and system for quickly obtaining network information service at mobile terminal
CN101741653A (en) * 2008-11-21 2010-06-16 爱思开电讯投资(中国)有限公司 Client server, intelligent terminal, online game system and method
CN101754430A (en) * 2009-12-31 2010-06-23 魏新成 System and method for dial-up networking via telephone website
CN102238223A (en) * 2010-05-06 2011-11-09 清华大学 Networked personal data management method for mobile device
CN102354178A (en) * 2011-08-02 2012-02-15 常州节安得能源科技有限公司 Energy efficiency monitoring system
CN104092770A (en) * 2014-07-22 2014-10-08 中国电建集团华东勘测设计研究院有限公司 Inner-enterprise address book management method and system based on cloud computing
CN104717286A (en) * 2015-03-03 2015-06-17 百度在线网络技术(北京)有限公司 Data processing method, terminal, server and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108063746A (en) * 2016-11-08 2018-05-22 北京国双科技有限公司 Processing method, client, server and the system of data
CN108063746B (en) * 2016-11-08 2020-05-15 北京国双科技有限公司 Data processing method, client, server and system

Also Published As

Publication number Publication date
CN105577804B (en) 2019-07-09

Similar Documents

Publication Publication Date Title
US11755371B1 (en) Data intake and query system with distributed data acquisition, indexing and search
CN108848142B (en) Message pushing method and device, computer equipment and storage medium
CN107277029B (en) Remote procedure call method and device and computer equipment
US9009480B1 (en) Techniques for handshake-free encrypted communication using public key bootstrapping
CN110708247B (en) Message routing method, message routing device, computer equipment and storage medium
CN109766253B (en) Performance data sending method and device, computer equipment and storage medium
CN111694857B (en) Method, device, electronic equipment and computer readable medium for storing resource data
US20170257449A1 (en) Method for forwarding traffic in application on mobile intelligent terminal
EP3720094A1 (en) Information processing method, apparatus, device and system
EP3174267A1 (en) Interaction pattern for a mobile telecommunication device
CN111209310A (en) Service data processing method and device based on stream computing and computer equipment
CN113940037B (en) Resource subscription method, device, computer equipment and storage medium
US20140237351A1 (en) Application program control
CN105516086A (en) Service processing method and apparatus
CN113590433B (en) Data management method, data management system, and computer-readable storage medium
US20100088310A1 (en) Method And System For Automating Data Queries During Discontinuous Communications
CN104239125A (en) Object processing method, distributive file system and client device
CN105577804A (en) Big data processing method and processing device
US11343318B2 (en) Configurable internet of things communications system
CN107479985B (en) Remote procedure call method and device and computer equipment
CN112087335A (en) Flow experiment method, device and storage medium
CN118394279B (en) Data processing method, device, storage medium and computer program product based on interceptor
CN113221039A (en) Page display method and device, computer equipment and storage medium
JP5329589B2 (en) Transaction processing system and operation method of transaction processing system
CN110457614B (en) Data increment updating method and device for reducing data concurrency and computer equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180207

Address after: 230088 No. 198, Mingzhu Avenue, High-tech Zone, Anhui

Applicant after: Hefei Midea Intelligent Technology Co., Ltd.

Address before: 230601 Hefei economic and Technological Development Zone, Fairview Road, Anhui

Applicant before: Hefei Hualing Co., Ltd.

Applicant before: Midea Group Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant