CN118694764A - Cloud storage system and cloud storage management method - Google Patents
- Publication number
- CN118694764A CN118694764A CN202411177403.3A CN202411177403A CN118694764A CN 118694764 A CN118694764 A CN 118694764A CN 202411177403 A CN202411177403 A CN 202411177403A CN 118694764 A CN118694764 A CN 118694764A
- Authority
- CN
- China
- Prior art keywords
- pool
- target
- data
- node
- resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application discloses a cloud storage system and a cloud storage management method. The cloud storage system comprises a plurality of resource pools, a service management platform and a storage management platform. Each resource pool comprises a plurality of data nodes and is logically bound with front-end sensing devices; the output data of a front-end sensing device is stored in the resource pool logically bound with it, and no traffic interaction exists between different resource pools. The service management platform is communicatively connected with each resource pool and is at least used for responding to service requests. The storage management platform is communicatively connected with each resource pool and is at least used for performing load balancing and scheduling management within each resource pool based on a topology configuration file, where the topology configuration file defines each resource pool and the data nodes it contains. With this scheme, the load pressure on the core trunk network can be reduced as much as possible while lowering the construction difficulty and construction cost of the cloud storage system, and the convenience of system management is improved.
Description
Technical Field
The application relates to the technical field of distributed storage, in particular to a cloud storage system and a cloud storage management method.
Background
In order to effectively store the mass data produced by front-end sensing devices, a storage system is generally built in a centralized manner to store that data uniformly. However, centralized construction is relatively difficult and costly.
Based on this, the industry has adopted distributed multi-area construction, unifying the management entrance through a multi-domain mode. However, in a multi-domain scenario, each domain handles device access and data storage for its own front-end sensing devices, which easily forms data islands, makes cross-domain data sharing inconvenient, and incurs huge overhead when aggregating data from different domains, easily causing traffic pressure and large communication delay on the core trunk network. In addition, although there is a unified service portal, each domain still needs to be operated and managed separately, making workflows complex; truly global unified management, unified load and unified storage cannot be achieved. In view of this, how to reduce the load pressure on the core trunk network as much as possible and improve the convenience of system management, while reducing the construction difficulty and construction cost of the cloud storage system, is a problem to be solved.
Disclosure of Invention
The technical problem to be solved by the application is to provide a cloud storage system and a cloud storage management method that can reduce the load pressure on the core trunk network as much as possible and improve the convenience of system management, while reducing the construction difficulty and construction cost of the cloud storage system.
In order to solve the above problems, a first aspect of the present application provides a cloud storage system, including: the system comprises a plurality of resource pools, a service management platform and a storage management platform, wherein the resource pools comprise a plurality of data nodes, the resource pools are used for being logically bound with front-end sensing equipment, output data of the front-end sensing equipment is stored into the resource pools logically bound with the front-end sensing equipment, and no flow interaction exists among different resource pools; the service management platform is respectively in communication connection with each resource pool and is at least used for responding to service requests, and the service requests comprise at least one of in-pool access and in-pool forwarding; the storage management platform is respectively in communication connection with each resource pool and is at least used for carrying out load balancing and scheduling management in each resource pool based on a topology configuration file, wherein the topology configuration file is defined with each resource pool and data nodes contained in the resource pools.
In order to solve the above problems, a second aspect of the present application provides a cloud storage management method, including: obtaining a topology configuration file and establishing communication connection with each resource pool in a cloud storage system; the topology configuration file defines each resource pool and data nodes contained in the resource pools, the resource pools are used for being logically bound with front-end sensing equipment, output data of the front-end sensing equipment is stored in the resource pools logically bound with the front-end sensing equipment, no traffic interaction exists among different resource pools, the cloud storage system further comprises a service management platform which is in communication connection with each resource pool and is used for receiving and responding to service requests, and the service requests comprise at least one of in-pool access and in-pool forwarding; and respectively carrying out load balancing and scheduling management in each resource pool based on the topology configuration file.
In order to solve the above problems, a third aspect of the present application provides a cloud storage management method, including: establishing communication connection with each resource pool in the cloud storage system, wherein the cloud storage system further comprises a storage management platform which is communicatively connected with each resource pool and is used for performing load balancing and scheduling management within each resource pool based on a topology configuration file, and the topology configuration file defines each resource pool and the data nodes contained in the resource pools; and receiving and responding to a service request, wherein the service request includes at least one of intra-pool access and intra-pool forwarding.
According to the above scheme, the cloud storage system comprises a plurality of resource pools, a service management platform and a storage management platform. Each resource pool comprises a plurality of data nodes and is logically bound with front-end sensing devices; the output data of a front-end sensing device is stored in the resource pool logically bound with it, and no traffic interaction exists between different resource pools. The service management platform is communicatively connected with each resource pool and is at least used for responding to service requests, where a service request includes at least one of intra-pool access and intra-pool forwarding. The storage management platform is communicatively connected with each resource pool and is at least used for performing load balancing and scheduling management within each resource pool based on a topology configuration file that defines each resource pool and the data nodes it contains. On the one hand, compared with centralized construction, building a plurality of resource pools in a distributed manner reduces the construction difficulty and construction cost of the cloud storage system. On the other hand, services such as storage, access and forwarding are realized within each resource pool, and since there is no traffic interaction between resource pools, storage and service traffic is kept off the core trunk network as much as possible, reducing its load pressure. Furthermore, the plurality of resource pools are managed by the same service management platform and storage management platform, realizing unified management. Therefore, the load pressure on the core trunk network can be reduced as much as possible while lowering the construction difficulty and construction cost of the cloud storage system, and the convenience of system management is improved.
Drawings
FIG. 1 is a schematic diagram of a framework of one embodiment of a cloud storage system of the present application;
FIG. 2 is a schematic diagram of one embodiment of a topology configuration file;
FIG. 3 is a flow chart of an embodiment of a cloud storage management method according to the present application;
FIG. 4 is a flow chart of another embodiment of the cloud storage management method of the present application;
FIG. 5 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 6 is a schematic diagram of a frame of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic diagram of a cloud storage system according to an embodiment of the application. As shown in fig. 1, the cloud storage system 10 of the present application may include a plurality of resource pools 11, a service management platform 12, and a storage management platform 13, where each resource pool 11 includes a plurality of data nodes 111. The resource pools 11 are used for logically binding with front-end sensing devices 20; the output data of a front-end sensing device 20 is stored in the resource pool 11 logically bound with it, and no traffic interaction exists between different resource pools 11. The service management platform 12 is communicatively connected to each resource pool 11 and is at least configured to respond to service requests, where a service request includes at least one of intra-pool access and intra-pool forwarding. The storage management platform 13 is communicatively connected to each resource pool 11 and is at least used for performing load balancing and scheduling management within each resource pool 11 based on a topology configuration file, where the topology configuration file defines the resource pools 11 and the data nodes 111 they contain. That is, both load balancing and scheduling management may be done within the pool. It should be noted that, in a cloud architecture, one domain is a fully functional PaaS (Platform as a Service) cluster with capabilities such as front-end IoT (Internet of Things) device access, data storage, transmission, analysis and sharing, and it provides interfaces offering various data-based service capabilities to upper-layer service platforms. Further, as one possible example, the service management platform 12 may specifically be a SaaS (Software as a Service) management platform.
In one implementation scenario, the data nodes 111 within the same resource pool 11 may be located in the same geographic location, while data nodes 111 belonging to different resource pools 11 are located in different geographic locations. That is, the different resource pools 11 are built in a distributed manner. It should be noted that one resource pool 11 may be a data center, a batch of racks, or several cloud direct storage nodes; the hardware architecture of the resource pool 11 is not limited here. In this way, pool-based management of different hardware types (different configurations, models and operating systems) can be realized, supporting scenarios where hardware is homogeneous within a pool and heterogeneous across pools. This further supports multi-stage capacity expansion with heterogeneous hardware in the same system, helping to solve the industry problem of heterogeneous capacity expansion.
In a specific implementation scenario, the same geographic locations as described above may refer to the same machine room, as one possible example. That is, several data nodes 111 may be built in the same machine room as one resource pool 11.
In another specific implementation scenario, the same geographic locations previously described may refer to the same data center, as another possible example. That is, several data nodes 111 may be built in the same data center as one resource pool 11.
In yet another specific implementation scenario, as yet another possible example, the same geographic location as described above may refer to the same administrative area (e.g., district, city, etc.). That is, several data nodes 111 may be built in the same administrative area as one resource pool 11.
It should be noted that, as described above, the specific range of the geographic location may be set according to practical situations, for example, may be set as a machine room, a data center, an administrative area, etc., and the specific range of the geographic location is not limited herein.
In one implementation scenario, as a preferred example, the resource pool 11 may include a plurality of data nodes 111 so that load balancing can be implemented within the resource pool 11; the total number of data nodes 111 within a resource pool 11 is not limited here.
In one implementation scenario, the front-end sensing device 20 may be used to collect data about its environment, and the data formats may include, but are not limited to: video, images, structured data, etc. Taking a front-end sensing device 20 that collects structured data as an example, the front-end sensing device 20 may specifically include, but is not limited to: humidity sensors, temperature sensors, wind sensors, etc., which collect and form structured data containing environmental indicators (e.g., humidity, temperature, wind, etc.) at various times. Of course, the above are only a few possible examples from practical application; the front-end sensing device 20 may also include other kinds of devices, and its specific kind is not limited here.
In one implementation scenario, as previously described, the topology configuration file defines each resource pool 11 and the data nodes 111 it contains; that is, the topology configuration file reflects the topology relationship of the plurality of resource pools 11 in the cloud storage system 10. Referring to fig. 2 in combination, fig. 2 is a schematic diagram of an embodiment of a topology configuration file. As shown in fig. 2, the cloud storage system 10 defined by this topology configuration file includes two resource pools 11, where the unique identifier (i.e., pool_id) of one resource pool 11 is 148618787703220001 and that of the other is 148618787703220002. In addition, pool_type identifies the pool category of the resource pool 11. As one possible example, the pool category may reflect the type of data (e.g., video, image, structured data, etc.) that the resource pool 11 is responsible for accessing; of course, the pool category may also reflect other attributes, which are not limited here. pool_name identifies the name of the resource pool 11, and datanode_list reflects the data nodes contained in the resource pool 11. As a possible example, as shown in fig. 2, a data node may be represented by its IP address, i.e., datanode_list may specifically contain the IP address of each data node in the resource pool 11. It should be noted that the topology configuration file shown in fig. 2 is only one possible example; the specific form in which the topology configuration file reflects each resource pool and the data nodes it contains is not limited, and further examples are omitted here.
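As a concrete illustration, such a topology configuration file could be expressed as JSON. The sketch below is hypothetical: the field names (pool_id, pool_type, pool_name, datanode_list) come from the description above, but the exact file format, the sample values, and the `load_topology` helper are assumptions for illustration only.

```python
import json

# Hypothetical topology configuration modeled on the fields named in the
# description (pool_id, pool_type, pool_name, datanode_list); the concrete
# file format and values are illustrative assumptions, not the patent's format.
TOPOLOGY_JSON = """
{
  "pools": [
    {
      "pool_id": "148618787703220001",
      "pool_type": "video",
      "pool_name": "City A",
      "datanode_list": ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
    },
    {
      "pool_id": "148618787703220002",
      "pool_type": "image",
      "pool_name": "City D",
      "datanode_list": ["10.0.2.21", "10.0.2.22"]
    }
  ]
}
"""

def load_topology(text):
    """Parse the topology file into a pool_id -> pool-definition mapping."""
    config = json.loads(text)
    return {pool["pool_id"]: pool for pool in config["pools"]}

topology = load_topology(TOPOLOGY_JSON)
```

With the file parsed into a mapping keyed by pool_id, both the storage management platform (for in-pool load balancing) and the service management platform (for in-pool forwarding) can look up a pool's data nodes directly.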
In one implementation scenario, when the service request is an in-pool access, the service management platform 12 may specifically obtain, based on the physical location of the front-end sensing device 20, the relative distances between the front-end sensing device 20 and each resource pool 11, and select, based on these relative distances, one resource pool 11 to be logically bound with the front-end sensing device 20. In this way, by measuring the relative distances between the front-end sensing device 20 and each resource pool 11 and selecting a resource pool 11 accordingly, the front-end sensing device 20 can access a nearby resource pool 11 as far as possible, so that its output data can be stored nearby and distributed nearby.
In a specific implementation scenario, as a possible example, as mentioned above, the topology configuration file may include configuration information such as the name of each resource pool 11, so the relative distance between the front-end sensing device 20 and each resource pool 11 may be determined according to the physical location of the front-end sensing device 20 and the configuration information of each resource pool 11. Illustratively, taking the physical location of the front-end sensing device 20 as "City A, District B, Intersection C" as an example, if the resource pools 11 defined in the topology configuration file include a pool with pool_name "City A", a pool with pool_name "City D", and a pool with pool_name "City E", it can be determined that the resource pool 11 with pool_name "City A" has the shortest relative distance to the front-end sensing device 20. Of course, the above is only one possible way of determining the relative distance; other ways are not limited here.
In a specific implementation scenario, after obtaining the relative distances between the front-end aware device 20 and each of the resource pools 11, the resource pool 11 corresponding to the shortest relative distance may be selected and logically bound to the front-end aware device 20.
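The nearest-pool binding described above can be sketched as follows. The distance function here is a toy stand-in (matching the pool name against the device address, as in the illustrative example), since the source does not specify how relative distance is actually measured; all names are assumptions.

```python
def bind_nearest_pool(device_location, pools, distance_fn):
    """Logically bind a device to the pool at the smallest relative distance."""
    return min(pools, key=lambda pool: distance_fn(device_location, pool))

def name_match_distance(device_location, pool):
    """Toy metric: distance 0 if the pool name appears in the device's
    address string, 1 otherwise (an assumption for illustration)."""
    return 0 if pool["pool_name"] in device_location else 1

pools = [
    {"pool_id": "148618787703220001", "pool_name": "City A"},
    {"pool_id": "148618787703220002", "pool_name": "City D"},
]
bound = bind_nearest_pool("City A, District B, Intersection C", pools, name_match_distance)
```

After this selection, the device identifier would be logically bound to the chosen pool's pool_id, as the next paragraph describes.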
In a specific implementation scenario, after the front-end aware device 20 successfully accesses the resource pool 11, the device identifier that the front-end aware device 20 uses to uniquely identify itself may be logically bound to the resource pool 11. Illustratively, the device identification of the front-end aware device 20 may be logically bound to the pool_id of the resource pool 11.
In one implementation scenario, the storage management platform 13 is further configured to select, after the front-end sensing device 20 is logically bound with a resource pool 11, a data node 111 as a first target node (not shown) to which the front-end sensing device 20 preferentially transmits data, based on load balancing across the data nodes 111 in the pool. That is, the first target node is still a data node 111 in the resource pool 11 logically bound to the front-end sensing device 20, but it is not an arbitrary data node 111 in that pool; it is the data node 111 selected to preferentially receive the transmitted data. In this way, after the logical binding, the storage management platform 13 selects the data node 111 that preferentially receives data as the first target node for the front-end sensing device 20 based on load balancing, so that load balancing can be achieved within each resource pool 11 as far as possible through the same set of management platforms.
In a specific implementation scenario, with continued reference to fig. 1, a data node 111 may include a streaming media access service, and the streaming media access service in the first target node may be responsible for accessing the front-end sensing device 20. It should be noted that, in the present application, the numerous "services" such as the streaming media access service, streaming media forwarding service, streaming media storage service, image gateway service and distributed cloud storage service are essentially program modules implementing different functions. Illustratively, the streaming media access service defines how streaming media access is implemented, such as the related protocols, signaling, etc., without limitation; the other "services" are similar and are not exemplified here one by one. Further, as a possible example, the above streaming-media-related services may be PaaS streaming media services; for example, the streaming media access service may be a PaaS streaming media access service, the streaming media forwarding service a PaaS streaming media forwarding service, and the streaming media storage service a PaaS streaming media storage service.
In one specific implementation scenario, as previously described, the storage management platform 13 may be used for load balancing and scheduling management. Specifically, after the front-end sensing device 20 is logically bound with a resource pool 11, the storage management platform 13 may determine, based on the topology configuration file, each data node 111 in that resource pool 11 and obtain their load conditions, so as to select, for the front-end sensing device 20 and for the purpose of load balancing, the data node 111 that preferentially receives its data. Illustratively, suppose the resource pool 11 logically bound to the front-end sensing device 20 contains three data nodes 111, named "node 01", "node 02" and "node 03" for ease of distinction; since the loads of "node 02" and "node 03" are both higher than that of "node 01", "node 01" may be selected as the first target node to which the front-end sensing device 20 preferentially transmits data. Of course, the above is only one possible example of the practical application process; other possible cases are not limited here.
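A minimal sketch of this first-target-node selection, assuming load is reported as a single utilization figure per node (the load metric itself is not fixed by the source):

```python
def select_first_target_node(node_loads):
    """Pick the least-loaded data node in the bound pool as the node
    that preferentially receives the device's output data."""
    return min(node_loads, key=node_loads.get)

# Example mirroring the description: node 02 and node 03 are both more
# heavily loaded than node 01, so node 01 becomes the first target node.
loads = {"node01": 0.20, "node02": 0.55, "node03": 0.60}
first_target = select_first_target_node(loads)
```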
In one implementation scenario, the storage management platform 13 is further configured to select, when the current load of the first target node corresponding to the front-end sensing device 20 does not meet a preset condition, a data node 111 other than the first target node as a second target node (not illustrated) to which the front-end sensing device 20 transparently transmits the current output data, based on load balancing across the data nodes 111 in the resource pool 11 logically bound to the front-end sensing device 20. It should be noted that, as described above, the first target node is the data node 111 determined, after the front-end sensing device 20 is logically bound with the resource pool 11, to preferentially receive its transmitted data, and the first target node is located in the resource pool 11 logically bound with the front-end sensing device 20; reference may be made to the related description above, which is not repeated here. In this way, when the current load of the first target node does not meet the preset condition, the storage management platform 13 selects another data node 111 as the second target node for the current output data based on in-pool load balancing, so that load balancing within each resource pool 11 can be ensured as far as possible during in-pool storage.
In a specific implementation scenario, the preset condition may be that the current load of the first target node remains balanced with respect to the other data nodes 111 in the resource pool 11; in particular, the preset condition is not met when the current load of the first target node is relatively high within its resource pool 11. It should be noted that the specific degree of "relatively high" may be set according to the actual application requirements; for example, it may be set as at least 10% higher, at least 20% higher, etc., and the specific degree of "relatively high" is not limited here.
In a specific implementation scenario, for convenience of description, suppose again that the resource pool 11 logically bound to the front-end sensing device 20 includes three data nodes 111 and that "node 01" serves as the first target node. If, when the output data of the front-end sensing device 20 is being transmitted, the current load of "node 01" is found to be relatively high within the resource pool 11, the one of "node 02" and "node 03" with the lower load may be selected as the second target node, so as to achieve load balancing within the resource pool 11 as far as possible. Of course, the above is only one possible example of the practical application process; other cases are similar and are not exemplified here.
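The in-pool re-selection above can be sketched as follows. The 20% threshold for "relatively high" is only one of the illustrative values mentioned in the description, and the per-node load representation is an assumption:

```python
def select_target_for_write(node_loads, first_target, threshold=0.20):
    """Return the node that should receive the current output data.

    The first target node keeps receiving data unless its load is
    'relatively high': here, more than `threshold` above the lightest
    other node in the pool (an assumed criterion for illustration).
    When that happens, the lightest other node is chosen as the second
    target node."""
    others = {n: load for n, load in node_loads.items() if n != first_target}
    lightest = min(others, key=others.get)
    if node_loads[first_target] > others[lightest] + threshold:
        return lightest  # second target node takes over this write
    return first_target

# node 01 is overloaded relative to the pool, so node 02 is selected.
loads = {"node01": 0.85, "node02": 0.40, "node03": 0.55}
target = select_target_for_write(loads, "node01")
```

Note that only nodes inside the bound pool are candidates, consistent with the no-cross-pool-traffic design.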
In one implementation scenario, when the service request characterizes in-pool forwarding of a video playback request, the service management platform 12 may specifically forward the video playback request to a first target pool (not shown) based on a first request identifier carried in the video playback request, and the first target pool, in response to the video playback request, returns the target video corresponding to the request within the first target pool. It should be noted that the first request identifier may include a first device identifier, a first pool identifier, and a video capture period, where the first device identifier uniquely identifies the front-end sensing device 20, the first pool identifier uniquely identifies a resource pool 11, the first target pool is the resource pool 11 identified by the first pool identifier, and the resource pool 11 uniquely identified by the first pool identifier is logically bound to the front-end sensing device 20 uniquely identified by the first device identifier. In this way, when the service request is in-pool forwarding representing a video playback request, the service management platform 12 forwards the video playback request to the corresponding first target pool according to the first request identifier, and the first target pool returns the requested target video accordingly, thereby realizing in-pool forwarding of video data.
In a specific implementation scenario, the first request identifier includes the device identifier of the front-end sensing device 20 that captured the requested video, the pool identifier of the resource pool 11 logically bound to that front-end sensing device 20 (e.g., the pool_id described above), and the video capture period of the requested playback video (e.g., from hour D to hour E on day C of month B).
In a specific implementation scenario, referring to fig. 1, the data nodes 111 may at least include a streaming media forwarding service. The service management platform 12 may forward the video playback request to the streaming media forwarding service of each data node 111 in the first target pool, and the streaming media forwarding service may query and return the target video corresponding to the video playback request. For example, the streaming media forwarding service may determine, according to the first device identifier, whether output data of the corresponding front-end sensing device 20 exists on its node; if so, it further determines whether that data contains video data within the video capture period, and if so, the streaming media forwarding service may return it as the target video. In this way, since the data nodes 111 at least include a streaming media forwarding service to which the service management platform 12 forwards the video playback request, in-pool forwarding of video data can be realized through the streaming media forwarding service.
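The routing step performed by the service management platform can be sketched as follows: resolve the first target pool from the first pool identifier, then hand the request to the forwarding services on that pool's data nodes. The field names (device_id, pool_id, capture_period) and the dictionary-based request are illustrative assumptions; the source does not fix a wire format.

```python
def route_playback_request(request, topology):
    """Resolve the first target pool for a video playback request and
    return the data nodes whose streaming media forwarding services
    should receive it. Field names are illustrative; the source does
    not specify a concrete request format."""
    pool = topology.get(request["pool_id"])
    if pool is None:
        raise KeyError("unknown pool: " + request["pool_id"])
    # Each data node's streaming media forwarding service would then check
    # whether it holds video for request["device_id"] inside
    # request["capture_period"] and return any match as the target video.
    return pool["datanode_list"]

topology = {
    "148618787703220001": {
        "pool_id": "148618787703220001",
        "datanode_list": ["10.0.1.11", "10.0.1.12"],
    }
}
request = {
    "device_id": "cam-042",
    "pool_id": "148618787703220001",
    "capture_period": ("2024-08-01T08:00", "2024-08-01T09:00"),
}
nodes = route_playback_request(request, topology)
```

Because the bound pool is named directly in the request identifier, the platform never has to query other pools, which is what keeps playback traffic off the core trunk network.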
In one implementation scenario, when the service request characterizes in-pool forwarding of an image retrieval request, the service management platform 12 may specifically be configured to forward the image retrieval request to a second target pool (not shown) based on a second request identifier in the image retrieval request, where the second target pool returns a corresponding target image in response to the image retrieval request. The second request identifier includes at least a second device identifier, where the second device identifier is used to uniquely characterize the front-end sensing device 20, and the second target pool is the resource pool logically bound to the front-end sensing device 20 uniquely characterized by the second device identifier. In this manner, when the service request characterizes in-pool forwarding of an image retrieval request, the service management platform 12 forwards the request to the corresponding second target pool according to the second request identifier in the image retrieval request, and the second target pool returns the corresponding target image, so that in-pool forwarding of image data can be realized.
In a specific implementation scenario, the second request identifier may at least include a device identifier of the front-end sensing device 20 that captured the requested image. The device identifier may, for example, contain channel information, such as a representation in a "city-district-road" format. For example, the device identifier of the front-end sensing device 20 that captured the requested image may specifically be "the intersection of road C and road D in city B". Of course, the above is only one possible example of a device identifier of the front-end sensing device 20 and does not thereby limit its specific form. In addition, the device identifier may also be represented numerically, similar to the numbering shown in pool_name, and the specific encoding of the device identifier is not limited herein.
In a specific implementation scenario, referring to fig. 1, the data nodes 111 may at least include an image gateway service, and the service management platform 12 may forward the image retrieval request to the image gateway service of each data node 111 in the second target pool, where the image gateway service returns a result to the service management platform 12 after the target image is successfully downloaded in the second target pool. Illustratively, after the image gateway service successfully downloads the target image in the second target pool, it may specifically return the URL (Uniform Resource Locator) of the target image to the service management platform 12. In this manner, the image retrieval request is forwarded to the image gateway service of the second target pool, which returns a result to the service management platform 12 after the target image is successfully downloaded, so that in-pool forwarding of image data can be realized through the image gateway service.
In a specific implementation scenario, the storage management platform 13 may further include a mapping configuration file, where the mapping configuration file defines a mapping relationship of logical binding between the front-end sensing device 20 and the resource pool 11, and the storage management platform 13 may query the mapping configuration file based on the second device identifier to determine to obtain the second target pool. As a possible example, the storage management platform 13 may specifically include a PaaS management service and a cloud storage management service, and specifically may find the second target pool by the PaaS management service according to the second device identifier and the mapping configuration file, and then forward the image retrieval request to the image gateway service in the second target pool to respond, where the image gateway service returns to the service management platform 12 after the target image is successfully downloaded in the second target pool. By means of the method, the second target pool is found through the mapping configuration file, and accuracy of determining the second target pool can be improved.
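The lookup of the second target pool via the mapping configuration file can be sketched as follows, assuming a simple JSON layout for the file; the field names `bindings`, `device_id`, and `pool_id` are illustrative assumptions, as the application does not fix the file format.

```python
import json

# Hypothetical mapping configuration file: each entry records the logical
# binding between a front-end sensing device and a resource pool.
MAPPING_CONFIG = json.loads("""
{
  "bindings": [
    {"device_id": "cam-01", "pool_id": "pool-a"},
    {"device_id": "cam-02", "pool_id": "pool-b"}
  ]
}
""")

def find_second_target_pool(config, second_device_id):
    """Return the pool logically bound to the device, or None if unbound."""
    for binding in config["bindings"]:
        if binding["device_id"] == second_device_id:
            return binding["pool_id"]
    return None
```

In this sketch the PaaS management service would call `find_second_target_pool` with the second device identifier and then forward the image retrieval request to the image gateway service in the returned pool.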
It should be noted that, as described above, the whole streaming media service flow, from device access, through the streaming media storage service calling the underlying cloud storage interface for storage, to the reading of data such as video data and image data, is carried by the data nodes 111 inside the resource pool 11. That is, everything is completed within the pool and no traffic interaction exists between pools, which can significantly reduce the bandwidth pressure of the streaming media service on the core trunk network.
In an implementation scenario, please continue to refer to fig. 1, the data node 111 may further include a streaming media storage service. When the streaming media storage service calls the cloud storage interface to write data, the storage management platform 13 may select the data node 111 where the cloud storage interface sending the data write request is located as a third target node (not shown), and apply for and allocate storage space in the resource pool 11 where the third target node is located; the cloud storage client then stores each fragment to be written, obtained by slicing the data to be written, into the respective storage spaces. In this manner, data writing initiated by the streaming media storage service through the cloud storage interface is completed within the resource pool 11, thereby realizing persistent storage of the data generated by the front-end sensing device 20.
In a specific implementation scenario, as a possible example, the cloud storage interface may specifically be an SDK interface exposed by the cloud storage client. Note that the SDK interface may define, for example, protocols related to data writing and protocols related to data querying; the specific contents of the SDK interface are not limited herein.
In one particular implementation scenario, as one possible example, the data write request may include a node address, and the node address may specifically be an IP address of the third target node.
In a specific implementation scenario, as described above, the storage management platform 13 may include a PaaS management service and a cloud storage management service; the cloud storage management service may then load and identify the topology configuration file, and perform uniform load and space allocation for the data nodes 111 in the resource pool 11. When the streaming media storage service calls the cloud storage interface to write data, the cloud storage management service can, according to the IP address of the data node 111 where the cloud storage interface issuing the data write request is located, preferentially apply for and allocate storage space in the resource pool 11 where that data node 111 is located; the cloud storage client then stores the slices of the data to be written onto the data nodes 111 within the pool respectively. In this way, sliced storage of service data within the pool is realized, the storage traffic does not cross pools, and the bandwidth pressure of the core trunk network is reduced as much as possible.
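The slicing of the data to be written into fragments placed on in-pool nodes can be sketched as follows. This is a minimal illustration assuming round-robin placement across the node IP addresses of the pool; the function name and the placement policy are assumptions, not the application's actual allocation algorithm.

```python
def slice_and_place(data: bytes, node_ips: list, fragment_size: int):
    """Slice the data to be written into fragments and assign each fragment
    a storage space on an in-pool data node (round-robin), so that storage
    traffic stays inside the resource pool."""
    fragments = [data[i:i + fragment_size]
                 for i in range(0, len(data), fragment_size)]
    return [(node_ips[i % len(node_ips)], frag)
            for i, frag in enumerate(fragments)]
```

Because every `(node_ip, fragment)` pair targets a node inside the same resource pool, no write ever crosses the pool boundary onto the core trunk network.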
In one implementation scenario, the storage management platform 13 is further configured to, in response to a failure of any data node 111, select the failed data node 111 as a fourth target node (not shown), and perform in-pool disaster recovery, based on a fifth target node in the resource pool 11 where the fourth target node is located, for each front-end sensing device 20 that preferentially transmits data to the fourth target node, where the fifth target node is a currently normal data node 111. That is, when a node failure occurs in the resource pool 11, the original logical binding between the front-end sensing devices 20 and the resource pool 11 is not changed, and the other normal data nodes 111 in the resource pool 11 share the streaming media service pressure of the failed node, thereby performing in-pool disaster recovery. In this manner, when a node failure occurs, the storage management platform 13 realizes disaster recovery within the resource pool 11 where the failed node is located, so that traffic interaction between pools can be avoided as much as possible, and the bandwidth pressure of the core trunk network can be reduced. It should be noted that, unlike the in-pool disaster recovery of the present application, in the multi-domain mode, if a node failure occurs, the service traffic is transmitted across domains, which increases the pressure on the core trunk network.
In a specific implementation scenario, taking a resource pool 11 containing three data nodes 111 as an example, after the storage management platform 13 finds that "node 01" in the resource pool 11 fails, it may take "node 01" as the fourth target node, and select a fifth target node from the currently normal "node 02" and "node 03" in the same resource pool 11, so that each front-end sensing device 20 that transparently transmits data to the fourth target node (i.e., "node 01") is changed to transmit data to the fifth target node instead. It should be noted that the specific number of fifth target nodes is not limited in the present application. For example, still taking the foregoing example as an illustration, if there are two front-end sensing devices 20 that preferentially transmit data to the fourth target node, and the currently normal "node 02" and "node 03" in the resource pool 11 carry comparable loads, then "node 02" may be selected to perform in-pool disaster recovery for one front-end sensing device 20, and "node 03" may be selected to perform in-pool disaster recovery for the other; or, if there are two front-end sensing devices 20 that preferentially transmit data to the fourth target node, and the load of the currently normal "node 02" in the resource pool 11 is significantly higher than that of "node 03", then "node 03" may be selected to perform in-pool disaster recovery for both front-end sensing devices 20. Of course, the above examples are just a few possible cases of in-pool disaster recovery in practical applications; other situations may be handled similarly and are not enumerated here.
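The reassignment of affected front-end sensing devices to currently normal in-pool nodes can be sketched as follows, assuming a greedy least-loaded policy consistent with the two examples above; the load values and the policy itself are illustrative assumptions, not the application's actual scheduler.

```python
def reassign_devices(devices, normal_node_loads):
    """Map each front-end sensing device that preferred the failed node to
    the currently least-loaded normal node in the same resource pool."""
    loads = dict(normal_node_loads)   # node -> current load, updated greedily
    assignment = {}
    for dev in devices:
        target = min(loads, key=loads.get)
        assignment[dev] = target
        loads[target] += 1            # account for the newly added streaming load
    return assignment
```

With comparable loads the two devices split across "node02" and "node03"; with "node02" heavily loaded, both devices land on "node03", matching the two cases described above.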
In a specific implementation scenario, after the fault is recovered, in-pool migration of the streaming media service may also be triggered; because the migration is limited to the interior of the resource pool 11, no cross-pool transmission of service traffic occurs, which helps reduce the bandwidth pressure of the core trunk network as much as possible.
In a specific implementation scenario, the storage management platform 13 may determine the resource pool 11 in which the fourth target node is located based on the topology configuration file. Referring to fig. 2 in combination, for example, the storage management platform 13 may first obtain the IP address of the fourth target node, for example 192.168.2.10, and then determine, according to the topology configuration file shown in fig. 2, that the resource pool 11 where the fourth target node is located is the resource pool 11 with "pool_name" being "BeijingPool". Of course, the above examples are only one possible example of a practical application process, and other cases can be similarly considered, and are not exemplified here.
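A possible shape for the topology configuration file of fig. 2, together with the IP-to-pool lookup described above, can be sketched as follows; the list-of-dicts layout and the `node_ips` field are assumptions, with only the `pool_name` vocabulary taken from the text.

```python
# Hypothetical topology configuration: each resource pool lists its
# pool_name and the IP addresses of its data nodes.
TOPOLOGY = [
    {"pool_name": "BeijingPool",
     "node_ips": ["192.168.2.10", "192.168.2.11", "192.168.2.12"]},
    {"pool_name": "ShanghaiPool",
     "node_ips": ["192.168.3.10", "192.168.3.11", "192.168.3.12"]},
]

def pool_of_node(topology, node_ip):
    """Return the pool_name of the resource pool containing node_ip."""
    for pool in topology:
        if node_ip in pool["node_ips"]:
            return pool["pool_name"]
    return None
```

Given the fourth target node's IP 192.168.2.10 from the example above, the lookup resolves to the pool whose pool_name is "BeijingPool".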
In one implementation scenario, the storage management platform 13 is further configured to, in response to a failure of any data node 111, select the failed data node 111 as a fourth target node (not shown), select, among the currently normal data nodes 111 in the resource pool 11 where the fourth target node is located, a data node 111 as a sixth target node (not shown) based on the current load, and perform a data recovery task for the fourth target node on the sixth target node. In this manner, the storage management platform 13 selects, based on the current load, the data node 111 that performs the data recovery task for the failed fourth target node, so that load balancing can be achieved as much as possible on the premise that data recovery is completed within the resource pool 11. It should be noted that, unlike the in-pool recovery of the present application, if a node failure occurs in the multi-domain mode, global recovery is adopted, causing cross-domain transmission of traffic and thus greatly increasing the bandwidth pressure of the core trunk network.
In a specific implementation scenario, similar to the foregoing in-pool disaster recovery, the storage management platform 13 may determine, based on the topology configuration file, the resource pool 11 in which the fourth target node is located; reference may be made to the foregoing related description, which is not repeated herein.
In a specific implementation scenario, when an in-pool fault occurs, the resource pool 11 where the fault node (i.e., the fourth target node) is located may be found according to the IP address of the fault node and the topology configuration file as shown in fig. 2; a target recovery node (i.e., the sixth target node) is then preferentially selected within that resource pool 11, and a recovery algorithm such as EC (Erasure Coding) is adopted to perform the data recovery task, so as to realize recovery of the abnormal data. Since in-pool recovery does not require cross-pool transmission, the bandwidth pressure of the core trunk network can be reduced as much as possible.
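As a minimal illustration of why an EC scheme lets the sixth target node rebuild lost data from in-pool survivors, the following sketch uses a single XOR parity fragment, which is the simplest erasure code; it is not the production coding scheme assumed by the application.

```python
def xor_parity(fragments):
    """Compute a parity fragment as the byte-wise XOR of equal-length
    data fragments."""
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

def recover_lost(surviving_fragments, parity):
    """Rebuild the single missing fragment from the survivors plus the
    parity fragment: XOR-ing everything that remains yields the lost data."""
    return xor_parity(surviving_fragments + [parity])
```

With this single-parity code, losing any one fragment (the failed node's share) is recoverable from data held entirely within the pool, so the recovery traffic never leaves the resource pool.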
In the above solution, the cloud storage system 10 includes a plurality of resource pools 11, a service management platform 12, and a storage management platform 13. Each resource pool 11 includes a plurality of data nodes 111 and is logically bound to front-end sensing devices 20; output data of a front-end sensing device 20 is stored in the resource pool 11 logically bound to it, and no traffic interaction exists between different resource pools 11. The service management platform 12 is in communication connection with each resource pool 11 and is at least used for responding to service requests, where a service request includes at least one of in-pool access and in-pool forwarding. The storage management platform 13 is in communication connection with each resource pool 11 and is at least used for performing load balancing and scheduling management within each resource pool 11 based on a topology configuration file, which defines the data nodes 111 contained in each resource pool 11. On one hand, because the plurality of resource pools are constructed in a distributed manner, the construction difficulty and construction cost are reduced compared with centralized construction; on another hand, because services such as storage, access, and forwarding are completed inside the resource pools 11 and no traffic interaction exists between pools, service traffic is kept off the core trunk network as much as possible, reducing its load pressure; furthermore, because the plurality of resource pools are jointly managed by a single set of management platforms (i.e., the service management platform 12 and the storage management platform 13), unified management can be realized compared with the multi-domain mode.
Therefore, the load pressure of the core trunk network can be reduced as much as possible and the convenience of system management can be improved on the premise of reducing the construction difficulty and the construction cost of the cloud storage system 10.
Referring to fig. 3, fig. 3 is a flow chart illustrating an embodiment of a cloud storage management method according to the present application. It should be noted that, in the embodiment of the present disclosure, the flow steps may be performed by a storage management platform in the cloud storage system. In addition, the embodiment of the present disclosure focuses on the flow links executed by the storage management platform; for the flow links in which the storage management platform participates and other related links, reference may be made to the related description in the embodiments of the cloud storage system, which is not repeated herein. Specifically, embodiments of the present disclosure may include the following steps:
Step S31: and obtaining a topology configuration file, and establishing communication connection with each resource pool in the cloud storage system.
In the embodiment of the disclosure, a topology configuration file is defined with each resource pool and data nodes contained in the resource pools, the resource pools are used for being logically bound with front-end sensing equipment, output data of the front-end sensing equipment is stored in the resource pools logically bound with the front-end sensing equipment, no traffic interaction exists among different resource pools, the cloud storage system further comprises a service management platform which is in communication connection with each resource pool and is used for receiving and responding to service requests, and the service requests comprise at least one of in-pool access and in-pool forwarding. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
Step S32: and respectively carrying out load balancing and scheduling management in each resource pool based on the topology configuration file.
In one implementation scenario, in response to the front-end sensing device and a resource pool having performed logical binding, the resource pool logically bound to the front-end sensing device may be selected as a first target pool; then, based on the topology configuration file and the load balancing of each data node in the first target pool, a data node is selected in the first target pool as the first target node to which the front-end sensing device preferentially transmits data. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In one implementation scenario, in response to the current load of the first target node not meeting the preset condition, a data node other than the first target node may be selected in the first target pool, based on the topology configuration file and the load balancing of each data node in the first target pool, as a second target node to which the front-end sensing device transparently transmits the current output data. It should be noted that the first target pool is the resource pool logically bound to the front-end sensing device, and the first target node is the data node in the first target pool to which the front-end sensing device preferentially transmits data. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
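The fallback from the first target node to a second target node can be sketched as follows, assuming the preset condition is a simple load ceiling; the threshold form and the function name are illustrative assumptions, as the application does not fix the condition.

```python
def select_second_target_node(node_loads, first_target, max_load):
    """If the first target node's current load exceeds the assumed preset
    ceiling, pick the least-loaded other in-pool node as the second target
    node; otherwise keep transmitting to the first target node."""
    if node_loads[first_target] <= max_load:
        return first_target
    others = {n: l for n, l in node_loads.items() if n != first_target}
    return min(others, key=others.get)
```

The device's logical binding to the pool is unchanged; only the preferred in-pool node shifts when the load condition is violated.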
In one implementation scenario, when the service request characterizes in-pool forwarding of an image retrieval request, the mapping configuration file may also be obtained. It should be noted that the mapping configuration file defines the mapping relationship of the logical binding between front-end sensing devices and resource pools. On this basis, the mapping configuration file is queried based on the image retrieval request to determine the resource pool logically bound to the target sensing device as a second target pool; the service management platform forwards the image retrieval request to the second target pool, and the second target pool returns the corresponding target image in response to the image retrieval request. The target sensing device is the front-end sensing device whose captured image is requested by the image retrieval request. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In an implementation scenario, the data node may further include a streaming media storage service. In response to the streaming media storage service calling the cloud storage interface to write data, the data node where the cloud storage interface sending the data write request is located is selected as a third target node; storage space is then applied for and allocated, based on the topology configuration file, in the resource pool where the third target node is located, so that each fragment to be written, obtained by the cloud storage client slicing the data to be written, is stored into the respective storage spaces. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In a specific implementation scenario, the cloud storage interface is an SDK interface exposed by the cloud storage client.
In one particular implementation, the data write request includes a node address, which is the IP address of the third target node.
In one implementation scenario, in response to any data node failing, the failed data node may be selected as a fourth target node, and, based on the topology configuration file, a currently normal data node is selected as a fifth target node in the resource pool where the fourth target node is located. Then, based on the fifth target node, in-pool disaster recovery is performed for each front-end sensing device that preferentially transmits data to the fourth target node. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In one implementation scenario, in response to a failure of any data node, the failed data node may be selected as a fourth target node, a resource pool in which the fourth target node is located is determined based on a topology configuration file, and then, in each data node currently normal in the determined resource pool, the data node is selected as a sixth target node based on a current load, and in the sixth target node, a data recovery task is executed for the fourth target node. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
According to the above scheme, a topology configuration file is obtained and a communication connection is established with each resource pool in the cloud storage system. The topology configuration file defines each resource pool and the data nodes it contains; each resource pool is logically bound to front-end sensing devices, output data of a front-end sensing device is stored in the resource pool logically bound to it, and no traffic interaction exists between different resource pools. The cloud storage system further includes a service management platform in communication connection with each resource pool for receiving and responding to service requests, where a service request includes at least one of in-pool access and in-pool forwarding, and load balancing and scheduling management are performed in each resource pool based on the topology configuration file. On one hand, because the plurality of resource pools are constructed in a distributed manner, the construction difficulty and construction cost of the cloud storage system are reduced compared with centralized construction; on another hand, because services such as storage, access, and forwarding are completed inside the resource pools and no traffic interaction exists between pools, service traffic is kept off the core trunk network as much as possible, reducing its load pressure; furthermore, because the plurality of resource pools are jointly managed by a single set of management platforms (i.e., the service management platform and the storage management platform), unified management can be realized compared with the multi-domain mode.
Therefore, the load pressure of the core trunk network can be reduced as much as possible on the premise of reducing the construction difficulty and the construction cost of the cloud storage system, and the convenience of system management is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating a cloud storage management method according to another embodiment of the present application. It should be noted that, in the embodiment of the present disclosure, the flow steps are performed by a service management platform in the cloud storage system. In addition, the embodiment of the present disclosure focuses on the flow links executed by the service management platform; for the flow links in which the service management platform participates and other related links, reference may be made to the related description in the embodiments of the cloud storage system, which is not repeated herein. Specifically, embodiments of the present disclosure may include the following steps:
step S41: and establishing communication connection with each resource pool in the cloud storage system.
In the embodiment of the disclosure, the resource pool comprises a plurality of data nodes, the resource pool is used for being logically bound with the front-end sensing equipment, output data of the front-end sensing equipment is stored in the resource pool logically bound with the front-end sensing equipment, no traffic interaction exists among different resource pools, and the cloud storage system further comprises a storage management platform which is in communication connection with each resource pool and is used for carrying out load balancing and scheduling management in each resource pool based on a topology configuration file, and the topology configuration file is defined with each resource pool and the data nodes contained in the resource pool. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
Step S42: and receiving and responding to the service request.
In an embodiment of the present disclosure, the service request includes at least one of in-pool access and in-pool forwarding. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In one implementation scenario, in response to the service request being in-pool access, the relative distance between the front-end sensing device and each resource pool may be obtained based on the physical location of the front-end sensing device, and then one resource pool may be selected, based on the relative distance between the front-end sensing device and each resource pool, to be logically bound to the front-end sensing device. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
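The distance-based binding of a front-end sensing device to a resource pool can be sketched as follows, assuming planar coordinates and Euclidean distance; the text only specifies that the choice is based on relative distance, so the coordinate form and metric are assumptions.

```python
import math

def bind_nearest_pool(device_xy, pool_locations):
    """Return the pool name whose location is nearest to the device,
    i.e. the pool with the smallest relative distance."""
    return min(pool_locations,
               key=lambda name: math.dist(device_xy, pool_locations[name]))
```

A device near the first pool's location binds there; a device near the second pool's location binds to that pool instead, keeping its traffic in-pool.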
In one implementation scenario, in response to a service request being an in-pool forwarding characterizing a video playback request, forwarding the video playback request to a first target pool based on a first request identification in the video playback request, to return, by the first target pool, a target video corresponding to the video playback request in the first target pool in response to the video playback request. It should be noted that, the first request identifier includes a first device identifier, a first pool identifier and a video acquisition period, the first device identifier is used for uniquely characterizing the front-end sensing device, the first pool identifier is used for uniquely characterizing the resource pool, the first target pool is a resource pool uniquely characterized by the first pool identifier, and the resource pool uniquely characterized by the first pool identifier is logically bound with the front-end sensing device uniquely characterized by the first device identifier. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
In one implementation scenario, in response to a business request being an in-pool forwarding characterizing an image retrieval request, the image retrieval request is forwarded to a second target pool based on a second request identification in the image retrieval request, so that a corresponding target image is returned by the second target pool in response to the image retrieval request. It should be noted that, the second request identifier includes at least a second device identifier, where the second device identifier is used to uniquely characterize the front-end sensing device, and the second target pool is a resource pool logically bound to the front-end sensing device uniquely characterized by the second device identifier. The foregoing embodiments of the cloud storage system may be referred to specifically, and will not be described herein.
According to the above scheme, a communication connection is established with each resource pool in the cloud storage system. Each resource pool includes a plurality of data nodes and is logically bound to front-end sensing devices; output data of a front-end sensing device is stored in the resource pool logically bound to it, and no traffic interaction exists between different resource pools. The cloud storage system further includes a storage management platform in communication connection with each resource pool for performing load balancing and scheduling management in each resource pool based on a topology configuration file, which defines each resource pool and the data nodes it contains. Service requests are then received and responded to, where a service request includes at least one of in-pool access and in-pool forwarding. On one hand, because the plurality of resource pools are constructed in a distributed manner, the construction difficulty and construction cost of the cloud storage system are reduced compared with centralized construction; on another hand, because services such as storage, access, and forwarding are completed inside the resource pools and no traffic interaction exists between pools, service traffic is kept off the core trunk network as much as possible, reducing its load pressure; furthermore, because the plurality of resource pools are jointly managed by a single set of management platforms (i.e., the service management platform and the storage management platform), unified management can be realized compared with the multi-domain mode. Therefore, the load pressure of the core trunk network can be reduced as much as possible on the premise of reducing the construction difficulty and construction cost of the cloud storage system, and the convenience of system management is improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of an electronic device. The electronic device 50 at least includes a communication circuit 51, a memory 52 and a processor 53, wherein the communication circuit 51 is coupled to the memory 52 and the processor 53, respectively, the memory 52 stores program instructions, and the processor 53 is configured to execute the program instructions to implement the steps in the foregoing embodiments of the cloud storage management method. Specifically, the electronic device 50 may include, but is not limited to, a server or the like, without limitation.
In particular, the processor 53 may also be referred to as a CPU (Central Processing Unit). The processor 53 may be an integrated circuit chip with signal processing capabilities. The processor 53 may also be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 53 may be jointly implemented by a plurality of integrated circuit chips.
In the above scheme, the electronic device 50 implements the steps in any of the cloud storage management method embodiments. On one hand, since the plurality of resource pools are built in a distributed manner, the construction difficulty and cost of the cloud storage system are reduced compared with centralized construction; on another hand, since services such as storage, access and forwarding are completed within the resource pools and no traffic interaction exists between resource pools, such traffic avoids passing through the core backbone network as much as possible, reducing its load pressure; on yet another hand, since the plurality of resource pools are managed jointly by one set of management platforms (including the service management platform and the storage management platform), unified management can be achieved, in contrast to a multi-domain mode. Therefore, the load pressure of the core backbone network can be reduced as much as possible while lowering the construction difficulty and cost of the cloud storage system, and the convenience of system management is improved.
Referring to fig. 6, fig. 6 is a schematic block diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 60 stores program instructions 61 executable by a processor; the program instructions 61, when executed, implement the steps in any of the cloud storage management method embodiments described above.
In the above scheme, the computer-readable storage medium 60 implements the steps in any of the cloud storage management method embodiments. On one hand, since the plurality of resource pools are built in a distributed manner, the construction difficulty and cost of the cloud storage system are reduced compared with centralized construction; on another hand, since services such as storage, access and forwarding are completed within the resource pools and no traffic interaction exists between resource pools, such traffic avoids passing through the core backbone network as much as possible, reducing its load pressure; on yet another hand, since the plurality of resource pools are managed jointly by one set of management platforms (including the service management platform and the storage management platform), unified management can be achieved, in contrast to a multi-domain mode. Therefore, the load pressure of the core backbone network can be reduced as much as possible while lowering the construction difficulty and cost of the cloud storage system, and the convenience of system management is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via certain interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs the individual of the personal-information processing rules and obtains the individual's consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains the individual's separate consent before processing such information and additionally satisfies the requirement of "explicit consent". For example, a clear and conspicuous sign is set up at a personal-information collection device such as a camera to inform that the personal-information collection range has been entered and that personal information will be collected; if an individual voluntarily enters the collection range, consent to collection is deemed given. Alternatively, on a device that processes personal information, where the processing rules are communicated by conspicuous signs or notices, personal authorization is obtained through pop-up messages, by asking the individual to upload personal information, or the like. The personal-information processing rules may include information such as the personal-information processor, the purpose and manner of processing, and the types of personal information processed.
Claims (18)
1. A cloud storage system, comprising:
a plurality of resource pools, wherein each resource pool comprises a plurality of data nodes, the resource pools are configured to be logically bound with front-end sensing devices, output data of a front-end sensing device is stored in the resource pool logically bound with the front-end sensing device, and no traffic interaction exists between different resource pools;
a service management platform, in communication connection with each resource pool respectively, configured at least to respond to service requests, the service requests comprising at least one of in-pool access and in-pool forwarding;
and a storage management platform, in communication connection with each resource pool respectively, configured at least to perform load balancing and scheduling management within each resource pool based on a topology configuration file, the topology configuration file defining each resource pool and the data nodes contained therein.
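As a purely illustrative sketch (not part of the claims), the topology configuration file of claim 1 could be modeled as a mapping from pool identifiers to member data nodes, with the storage management platform balancing load strictly within one pool. All identifiers, load figures, and the file layout below are hypothetical:

```python
# Hypothetical topology configuration: pool id -> data nodes in that pool.
# The patent does not specify a concrete file format.
TOPOLOGY = {
    "pool-east": ["node-e1", "node-e2", "node-e3"],
    "pool-west": ["node-w1", "node-w2"],
}

# Illustrative per-node load figures (0.0 = idle, 1.0 = saturated).
NODE_LOAD = {"node-e1": 0.72, "node-e2": 0.31, "node-e3": 0.55,
             "node-w1": 0.10, "node-w2": 0.90}

def pick_node(pool_id: str) -> str:
    """Select the least-loaded data node, looking only inside one pool:
    scheduling never crosses pool boundaries, so no inter-pool traffic."""
    nodes = TOPOLOGY[pool_id]
    return min(nodes, key=lambda n: NODE_LOAD[n])

print(pick_node("pool-east"))  # node-e2, the least-loaded node in that pool
```

Because each scheduling decision consults only the nodes listed under one pool, storage traffic stays inside that pool, consistent with the "no traffic interaction between resource pools" limitation.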
2. The system of claim 1, wherein the service management platform is specifically configured to, in response to the service request being an in-pool access, obtain the relative distance between the front-end sensing device and each resource pool based on the physical location of the front-end sensing device, and select one resource pool to be logically bound with the front-end sensing device based on the relative distances between the front-end sensing device and the resource pools.
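A minimal sketch of the nearest-pool binding described in claim 2, under the assumption that "relative distance" is a geometric distance between a device's physical location and each pool's site; the coordinates and names are invented for illustration:

```python
import math

# Hypothetical pool site coordinates; the claim only requires that binding
# be selected from the relative distances, not any particular metric.
POOL_LOCATION = {"pool-east": (120.2, 30.3), "pool-west": (103.8, 36.0)}

def bind_pool(device_xy: tuple[float, float]) -> str:
    """Logically bind a front-end sensing device to the nearest resource pool."""
    return min(POOL_LOCATION,
               key=lambda p: math.dist(device_xy, POOL_LOCATION[p]))

print(bind_pool((119.9, 30.0)))  # pool-east
```

Once bound, all of the device's output data lands in that pool, so ingest traffic stays close to the device.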
3. The system of claim 1, wherein the storage management platform is further configured to, after the front-end sensing device is logically bound with the resource pool, select a data node, based on load balancing among the data nodes in the resource pool, as a first target node to which the front-end sensing device preferentially and transparently transmits the output data.
4. The system of claim 1, wherein the storage management platform is further configured to, when the current load of a first target node corresponding to the front-end sensing device does not meet a preset condition, select a data node other than the first target node, based on load balancing among the data nodes in the resource pool logically bound with the front-end sensing device, as a second target node to which the front-end sensing device transparently transmits the current output data;
wherein the first target node is the data node, determined after the front-end sensing device is logically bound with the resource pool, to which the output data is preferentially and transparently transmitted, and is located in the resource pool logically bound with the front-end sensing device.
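The preferential-target-with-failover behavior of claims 3-4 can be sketched roughly as follows; the load threshold, node names, and load values are assumptions for illustration only:

```python
# Hypothetical failover: write to the first target node unless its load
# breaches a threshold, then fall back to the least-loaded peer in the
# SAME pool (the second target node). Threshold is illustrative.
LOAD_LIMIT = 0.8

def choose_write_target(first_target: str, pool_nodes: list[str],
                        load: dict[str, float]) -> str:
    if load[first_target] < LOAD_LIMIT:
        return first_target  # preferential passthrough target
    peers = [n for n in pool_nodes if n != first_target]
    return min(peers, key=lambda n: load[n])  # second target node

load = {"n1": 0.95, "n2": 0.40, "n3": 0.60}
print(choose_write_target("n1", ["n1", "n2", "n3"], load))  # n2
```

Note that the fallback search is restricted to `pool_nodes`, i.e. the pool already bound to the device, so overload handling never redirects traffic into another pool.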
5. The system of claim 1, wherein when the service request is an in-pool forwarding characterizing a video playback request, the service management platform is specifically configured to forward the video playback request to a first target pool based on a first request identifier in the video playback request, and the first target pool, in response to the video playback request, returns the target video corresponding to the video playback request in the first target pool;
wherein the first request identifier comprises a first device identifier, a first pool identifier and a video acquisition period, the first device identifier uniquely characterizing the front-end sensing device, the first pool identifier uniquely characterizing the resource pool, the first target pool being the resource pool uniquely characterized by the first pool identifier, and the resource pool uniquely characterized by the first pool identifier being logically bound with the front-end sensing device uniquely characterized by the first device identifier.
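To make claim 5 concrete, one could imagine the first request identifier serialized as a delimited string carrying the device identifier, pool identifier, and acquisition period. The wire format below is entirely hypothetical; the patent does not fix one:

```python
# Hypothetical format: "<device-id>|<pool-id>|<start>~<end>".
def parse_first_request_id(request_id: str) -> dict:
    """Split a playback request identifier into its three claimed fields.
    The target pool comes straight from the identifier, so the service
    management platform can forward without any cross-pool lookup."""
    device_id, pool_id, period = request_id.split("|")
    start, end = period.split("~")
    return {"device": device_id, "target_pool": pool_id,
            "period": (start, end)}

rid = parse_first_request_id(
    "cam-042|pool-east|2024-08-01T00:00~2024-08-01T06:00")
print(rid["target_pool"])  # pool-east: playback is served entirely in-pool
```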
6. The system of claim 1, wherein when the service request is an in-pool forwarding characterizing an image retrieval request, the service management platform is specifically configured to forward the image retrieval request to a second target pool based on a second request identifier in the image retrieval request, and the second target pool returns the corresponding target image in response to the image retrieval request;
the second request identifier at least comprises a second device identifier, the second device identifier is used for uniquely characterizing the front-end sensing device, and the second target pool is a resource pool logically bound with the front-end sensing device uniquely characterized by the second device identifier.
7. The system of claim 1, wherein the data node further comprises a streaming media storage service; when the streaming media storage service calls a cloud storage interface to write data, the storage management platform selects the data node where the cloud storage interface sending the data writing request is located as a third target node, and applies for and allocates storage spaces in the resource pool where the third target node is located, and the cloud storage client stores, in the respective storage spaces, the fragments to be written obtained by slicing the data to be written.
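The slicing step of claim 7 amounts to chunking the data to be written so that one fragment goes to each allocated storage space. A minimal sketch, with a purely illustrative fragment size:

```python
# Sketch of the client-side slicing in claim 7: cut the data to be written
# into fragments, one per allocated storage space. Fragment size is
# illustrative; a real system would use much larger, aligned chunks.
def slice_data(data: bytes, fragment_size: int) -> list[bytes]:
    return [data[i:i + fragment_size]
            for i in range(0, len(data), fragment_size)]

fragments = slice_data(b"abcdefghij", 4)
print(fragments)  # [b'abcd', b'efgh', b'ij']
```

Since the storage spaces are all allocated in the pool where the third target node lives, every fragment of one write stays inside a single resource pool.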
8. The system of claim 1, wherein the storage management platform is further configured to, in response to any data node failing, take the failed data node as a fourth target node and perform at least one of the following:
performing, based on a fifth target node in the resource pool where the fourth target node is located, in-pool disaster recovery for each front-end sensing device that preferentially and transparently transmits its output data to the fourth target node, the fifth target node being a currently normal data node;
and selecting, based on the current load among the currently normal data nodes in the resource pool where the fourth target node is located, a data node as a sixth target node, and executing at the sixth target node a data recovery task for the fourth target node.
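The two failure actions of claim 8 (re-pointing affected devices to a healthy peer, and choosing a low-load peer to rebuild the failed node's data) can be sketched together; node names, load values, and the takeover policy are assumptions:

```python
# Sketch of claim 8's recovery actions when a data node fails:
# (a) re-point affected devices to a healthy in-pool peer (in-pool
#     disaster recovery via the "fifth target node"),
# (b) pick the least-loaded healthy peer (the "sixth target node")
#     to run the data recovery task for the failed node.
def handle_node_failure(failed: str, pool_nodes: list[str],
                        load: dict[str, float],
                        bindings: dict[str, str]) -> tuple[dict, str]:
    healthy = [n for n in pool_nodes if n != failed]
    takeover = healthy[0]                      # fifth target node (any normal node)
    rebound = {dev: takeover if node == failed else node
               for dev, node in bindings.items()}
    recovery_node = min(healthy, key=lambda n: load[n])  # sixth target node
    return rebound, recovery_node

bindings = {"cam-1": "n1", "cam-2": "n2"}
new_bindings, recovery = handle_node_failure(
    "n1", ["n1", "n2", "n3"], {"n2": 0.7, "n3": 0.2}, bindings)
print(new_bindings["cam-1"], recovery)  # n2 n3
```

Both actions look only at `pool_nodes`, the failed node's own pool, so recovery traffic never crosses the core backbone network.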
9. A cloud storage management method, comprising:
obtaining a topology configuration file and establishing communication connections with each resource pool in a cloud storage system; wherein the topology configuration file defines each resource pool and the data nodes contained therein, the resource pools are configured to be logically bound with front-end sensing devices, output data of a front-end sensing device is stored in the resource pool logically bound with the front-end sensing device, no traffic interaction exists between different resource pools, and the cloud storage system further comprises a service management platform in communication connection with each resource pool and configured to receive and respond to service requests, the service requests comprising at least one of in-pool access and in-pool forwarding;
And respectively carrying out load balancing and scheduling management in each resource pool based on the topology configuration file.
10. The method according to claim 9, wherein the method further comprises:
in response to the front-end sensing device and the resource pool having executed the logical binding, selecting the resource pool logically bound with the front-end sensing device as a first target pool;
and selecting a data node in the first target pool, based on the topology configuration file and load balancing among the data nodes in the first target pool, as a first target node to which the front-end sensing device preferentially and transparently transmits the output data.
11. The method according to claim 9, wherein the method further comprises:
in response to the current load of a first target node not meeting a preset condition, selecting a data node other than the first target node in a first target pool, based on the topology configuration file and load balancing among the data nodes in the first target pool, as a second target node to which the front-end sensing device transparently transmits the current output data;
wherein the first target pool is the resource pool logically bound with the front-end sensing device, and the first target node is the data node in the first target pool to which the output data is preferentially and transparently transmitted.
12. The method of claim 9, wherein when the service request is an in-pool forwarding characterizing an image retrieval request, the method further comprises:
obtaining a mapping configuration file; wherein the mapping configuration file defines the mapping relationship of the logical binding between front-end sensing devices and resource pools;
querying the mapping configuration file based on the image retrieval request to determine the resource pool logically bound with a target sensing device as a second target pool, so that the service management platform forwards the image retrieval request to the second target pool and the second target pool returns the corresponding target image in response to the image retrieval request; wherein the target sensing device is the front-end sensing device from which the image retrieval request requests to retrieve an image.
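The mapping configuration file of claim 12 is, in essence, a lookup from device identifier to bound pool. A minimal sketch, with invented identifiers:

```python
# Hypothetical mapping configuration: device id -> logically bound pool id.
MAPPING = {"cam-042": "pool-east", "cam-107": "pool-west"}

def route_image_retrieval(device_id: str) -> str:
    """Resolve the second target pool for an image retrieval request from
    the device's logical binding; the service management platform then
    forwards the request to that pool, which answers from its own storage."""
    return MAPPING[device_id]

print(route_image_retrieval("cam-107"))  # pool-west
```

Unlike the playback path of claim 5, the retrieval request here carries only a device identifier, so the pool must be resolved via the mapping file rather than read directly from the request.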
13. The method of claim 9, wherein the data node further comprises a streaming media storage service, the method further comprising:
in response to the streaming media storage service calling a cloud storage interface to write data, selecting the data node where the cloud storage interface sending the data writing request is located as a third target node;
and applying for and allocating storage spaces in the resource pool where the third target node is located based on the topology configuration file, so that the fragments to be written, obtained by the cloud storage client slicing the data to be written, are respectively stored in the respective storage spaces.
14. The method according to claim 9, wherein the method further comprises:
In response to any of the data nodes failing, selecting the failed data node as a fourth target node and performing at least one of:
selecting a currently normal data node as a fifth target node in the resource pool where the fourth target node is located based on the topology configuration file, and performing, based on the fifth target node, in-pool disaster recovery for each front-end sensing device that preferentially and transparently transmits its output data to the fourth target node;
and determining the resource pool where the fourth target node is located based on the topology configuration file, selecting a data node as a sixth target node based on the current load among the currently normal data nodes in the determined resource pool, and executing at the sixth target node a data recovery task for the fourth target node.
15. A cloud storage management method, comprising:
establishing communication connections with each resource pool in a cloud storage system; wherein the cloud storage system comprises a plurality of resource pools, each resource pool comprising a plurality of data nodes, the resource pools are configured to be logically bound with front-end sensing devices, output data of a front-end sensing device is stored in the resource pool logically bound with the front-end sensing device, no traffic interaction exists between different resource pools, and the cloud storage system further comprises a storage management platform in communication connection with each resource pool and configured to perform load balancing and scheduling management within each resource pool based on a topology configuration file, the topology configuration file defining each resource pool and the data nodes contained therein;
Receiving and responding to the service request; wherein the service request includes at least one of in-pool access, in-pool forwarding.
16. The method of claim 15, wherein the method further comprises:
in response to the service request being an in-pool access, obtaining the relative distance between the front-end sensing device and each resource pool based on the physical location of the front-end sensing device;
and selecting one resource pool to be logically bound with the front-end sensing device based on the relative distances between the front-end sensing device and the resource pools.
17. The method of claim 15, wherein the method further comprises:
in response to the service request being an in-pool forwarding characterizing a video playback request, forwarding the video playback request to a first target pool based on a first request identifier in the video playback request, so that the first target pool, in response to the video playback request, returns the target video corresponding to the video playback request in the first target pool;
wherein the first request identifier comprises a first device identifier, a first pool identifier and a video acquisition period, the first device identifier uniquely characterizing the front-end sensing device, the first pool identifier uniquely characterizing the resource pool, the first target pool being the resource pool uniquely characterized by the first pool identifier, and the resource pool uniquely characterized by the first pool identifier being logically bound with the front-end sensing device uniquely characterized by the first device identifier.
18. The method of claim 15, wherein the method further comprises:
in response to the service request being an in-pool forwarding characterizing an image retrieval request, forwarding the image retrieval request to a second target pool based on a second request identifier in the image retrieval request, so that the second target pool returns the corresponding target image in response to the image retrieval request;
the second request identifier at least comprises a second device identifier, the second device identifier is used for uniquely characterizing the front-end sensing device, and the second target pool is a resource pool logically bound with the front-end sensing device uniquely characterized by the second device identifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411177403.3A CN118694764A (en) | 2024-08-26 | 2024-08-26 | Cloud storage system and cloud storage management method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411177403.3A CN118694764A (en) | 2024-08-26 | 2024-08-26 | Cloud storage system and cloud storage management method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118694764A true CN118694764A (en) | 2024-09-24 |
Family
ID=92775007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411177403.3A Pending CN118694764A (en) | 2024-08-26 | 2024-08-26 | Cloud storage system and cloud storage management method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118694764A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201116830D0 (en) * | 2010-10-18 | 2011-11-09 | Avaya Inc | Resource allocation using shared resource pools |
CN107800694A (en) * | 2017-10-16 | 2018-03-13 | 浙江大华技术股份有限公司 | A kind of front end equipment access method, device, server and storage medium |
CN107967117A (en) * | 2016-10-20 | 2018-04-27 | 杭州海康威视数字技术股份有限公司 | A kind of data storage, reading, method for cleaning, device and cloud storage system |
CN110798362A (en) * | 2019-12-03 | 2020-02-14 | 河南水利与环境职业学院 | Multi-data center online management system and management method based on Internet of things |
CN111404978A (en) * | 2019-09-06 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Data storage method and cloud storage system |
CN111767139A (en) * | 2020-06-19 | 2020-10-13 | 四川九洲电器集团有限责任公司 | Cross-region multi-data-center resource cloud service modeling method and system |
CN114760313A (en) * | 2020-12-29 | 2022-07-15 | 中国联合网络通信集团有限公司 | Service scheduling method and service scheduling device |
WO2024082861A1 (en) * | 2022-10-20 | 2024-04-25 | 天翼数字生活科技有限公司 | Cloud storage scheduling system applied to video monitoring |
CN117997734A (en) * | 2022-10-31 | 2024-05-07 | 华为云计算技术有限公司 | Management method and system for multi-resource pool network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11159411B2 (en) | Distributed testing service | |
US10275232B1 (en) | Architecture for incremental deployment | |
US10637916B2 (en) | Method and device for storage resource allocation for video cloud storage | |
US10425502B2 (en) | System and method for acquiring, processing and updating global information | |
WO2017107018A1 (en) | Method, device, and system for discovering the relationship of applied topology | |
CN111966289B (en) | Partition optimization method and system based on Kafka cluster | |
US20100235509A1 (en) | Method, Equipment and System for Resource Acquisition | |
KR20160044471A (en) | Method and system of dispatching requests in a content delivery network | |
CN101287011A (en) | Method, system and device for responding service request from user in content distributing network | |
CN112256495A (en) | Data transmission method and device, computer equipment and storage medium | |
US20140337471A1 (en) | Migration assist system and migration assist method | |
US20180248772A1 (en) | Managing intelligent microservices in a data streaming ecosystem | |
US20170153909A1 (en) | Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine | |
CN113301079B (en) | Data acquisition method, system, computing device and storage medium | |
CN112491719A (en) | Network node selection method, equipment and storage medium | |
US9544371B1 (en) | Method to discover multiple paths to disk devices cluster wide | |
CN114466031B (en) | CDN system node configuration method, device, equipment and storage medium | |
CN112073212A (en) | Parameter configuration method, device, terminal equipment and storage medium | |
CN112632124B (en) | Multimedia information acquisition method, device, system, storage medium and electronic device | |
CN112102063B (en) | Data request method, device, equipment, platform and computer storage medium | |
CN112532666B (en) | Reverse proxy method, device, storage medium and equipment | |
CN111107039A (en) | Communication method, device and system based on TCP connection | |
CN118694764A (en) | Cloud storage system and cloud storage management method | |
CN110347656B (en) | Method and device for managing requests in file storage system | |
CN112019604A (en) | Edge data transmission method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||