
CN113596158A - Scene-based algorithm configuration method and device - Google Patents


Info

Publication number
CN113596158A
Authority
CN
China
Prior art keywords
algorithm
edge domain
data
cloud center
domain node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110867970.1A
Other languages
Chinese (zh)
Other versions
CN113596158B (en)
Inventor
林洋 (Lin Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202110867970.1A priority Critical patent/CN113596158B/en
Publication of CN113596158A publication Critical patent/CN113596158A/en
Application granted granted Critical
Publication of CN113596158B publication Critical patent/CN113596158B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of this application provide a scene-based algorithm configuration method and device in the field of Internet of Things technology. The method configures an algorithm package on an edge domain node according to a user's real-time scene-analysis requirements, so that data received from a terminal device is analyzed with the configured package, improving the user experience. In the specific scheme, a cloud center receives sampled data sent by a first edge domain node and determines multiple algorithm packages corresponding to the data type of the sampled data, where the data type reflects the scene in which the terminal device acquired the data. The cloud center executes each of the multiple algorithm packages on the sampled data to obtain a result for each package, then determines a target algorithm package from among them according to those results; the target package is used to analyze the data collected by the terminal device. The method and device are used in the algorithm configuration process of the Internet of Things.

Description

Scene-based algorithm configuration method and device
Technical Field
Embodiments of this application relate to the field of Internet of Things technology, and in particular to a scene-based algorithm configuration method and device.
Background
As a high-level integration and comprehensive application of new-generation information technology, the Internet of Things is highly pervasive, strongly drives other industries, and delivers broad benefits; it is the next driver of information-industry development after computers, the Internet, and mobile communication networks. Its application and development help production, daily life, and social governance become more intelligent, refined, and networked, greatly raising the level of social governance and public services.
Today, in areas such as healthcare, transportation, environmental protection, and security in urban construction, the Internet of Things must accelerate intelligent development through artificial-intelligence-based technologies, and intelligent analysis of video and pictures is an indispensable component. In existing video-analysis-based monitoring, scene information (including the analysis target, the data volume of the target, and parameters affecting the video devices) is first acquired to determine the number and type of video devices; a sample video sequence of the analysis target and device-application additional information of the video devices are then acquired; and an optimal algorithm for the scene is determined from the device count and type, the sample video sequence, and the additional information, after which video analysis proceeds with that algorithm.
It can be seen that this video-analysis method determines the optimal algorithm statically: the algorithm is fixed by predetermined parameters (the number and type of video devices, the sample video sequence, and the device-application additional information) and is not updated once determined. A statically determined optimal algorithm therefore cannot meet a user's real-time scene-analysis requirements. Moreover, the method relies too heavily on information other than the content to be analyzed (the sample video sequence), namely the device count and type and the device-application additional information, which complicates its implementation.
Disclosure of Invention
Embodiments of this application provide a scene-based algorithm configuration method and device that can configure an algorithm package on an edge domain node according to a user's real-time scene-analysis requirements, so that data received from a terminal device is analyzed with the configured package. This improves the user experience and avoids the implementation complexity caused by having the number and type of video devices, and device-application additional information (information distinct from the data collected by the terminal device), participate in the analysis.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a scene-based algorithm configuration method is provided, comprising the following steps: a cloud center receives sampled data sent by a first edge domain node and determines multiple algorithm packages corresponding to the data type of the sampled data, where the data type reflects the scene in which the terminal device acquired the data. The cloud center executes each of the multiple algorithm packages on the sampled data to obtain a result for each package, and then determines a target algorithm package from among them according to those results; the target package is used to analyze the data collected by the terminal device.
With this scene-based configuration method, an edge domain node reports sampled data acquired from a terminal device, and the cloud center compares multiple algorithm packages against that data and decides on an optimal package, the target algorithm package. Because the data type of the sampled data reflects the scene in which the terminal device acquired it, a change of scene changes the data type the cloud center determines, which in turn changes the candidate algorithm packages and, ultimately, the target package selected from them. The target package decided by the cloud center thus varies with the scene in which the terminal device collects data, satisfying the user's real-time analysis requirements across different scenes and improving the user experience. In addition, when deciding the optimal package, the cloud center no longer depends on information beyond the sampled data (the number and types of video devices and their device-application additional information, as in the prior art), so the implementation complexity is low.
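As a rough illustration, the cloud-center flow of the first aspect might be sketched as follows. The data-type classifier, package registry, executor, and all names are hypothetical stand-ins invented for this sketch, not part of the patent:

```python
# Illustrative sketch of the first-aspect flow; the data-type classifier,
# package registry, and package executor below are hypothetical stand-ins.

# Assumed registry mapping a data type (scene) to its candidate algorithm packages.
PACKAGE_REGISTRY = {
    "street_traffic": ["vehicle_count_v1", "vehicle_count_v2"],
    "indoor_retail": ["person_flow_v1"],
}

def classify_data_type(sampled_data: bytes) -> str:
    """Placeholder: infer the scene/data type from the sampled data."""
    return "street_traffic"

def run_package(package: str, sampled_data: bytes) -> float:
    """Placeholder: execute one algorithm package and return its result score."""
    return {"vehicle_count_v1": 0.80, "vehicle_count_v2": 0.85}.get(package, 0.0)

def decide_target_package(sampled_data: bytes) -> str:
    # Determine candidates from the data type, run each, keep the best result.
    data_type = classify_data_type(sampled_data)
    candidates = PACKAGE_REGISTRY[data_type]
    results = {pkg: run_package(pkg, sampled_data) for pkg in candidates}
    return max(results, key=results.get)

print(decide_target_package(b"frame-bytes"))
```

Because the candidate set is keyed by the inferred data type, a scene change flows through to a different candidate set and possibly a different target package, which is the core of the scheme.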
In one possible design, the sampled data is obtained by extracting video frames from video shot by the terminal device, or is picture data captured by the terminal device, and may be acquired in real time or periodically. The terminal device can be understood as a camera connected to the first edge domain node; the camera reports the current scene data within its field of view (extracted video frames or snapshot data), so that the cloud center can decide on the optimal algorithm for the first edge domain node to use when processing that data, meeting the user's real-time scene-analysis requirements and improving the user experience.
In one possible design, the cloud center executing each of the multiple algorithm packages on the sampled data to obtain a result for each package includes: the cloud center determines multiple evaluation indexes in the algorithm evaluation system corresponding to the data type of the sampled data; with reference to those indexes, it analyzes the sampled data with each algorithm package to obtain the value of each evaluation index for each package; it then computes an algorithm evaluation value for each package from those index values and takes that evaluation value as the package's algorithm result.
Determining the target algorithm package from the results then includes: the cloud center selects, as the target package, the algorithm package with the highest evaluation value among the candidates.
In other words, for a given data type of sampled data, the optimal algorithm among the candidates of that type can be determined through algorithm comparison, that is, algorithm evaluation, satisfying the user's real-time scene-analysis requirements and improving the user experience.
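The patent does not specify concrete evaluation indexes or how their values combine into one evaluation value. One hedged way to sketch the selection is a weighted score over assumed indexes; the index names, weights, and scores below are illustrative assumptions, not from the source:

```python
# Hypothetical sketch of the cloud center's evaluation-based selection.
# Index names, weights, and per-package scores are illustrative assumptions.

def evaluate_package(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-index scores into a single algorithm evaluation value."""
    return sum(weights[idx] * scores[idx] for idx in weights)

def select_target_package(results: dict[str, dict[str, float]],
                          weights: dict[str, float]) -> str:
    """Return the package with the highest algorithm evaluation value."""
    return max(results, key=lambda pkg: evaluate_package(results[pkg], weights))

# Example: two candidate packages scored on two assumed evaluation indexes.
weights = {"accuracy": 0.7, "speed": 0.3}
results = {
    "pkg_face_v1": {"accuracy": 0.92, "speed": 0.60},
    "pkg_face_v2": {"accuracy": 0.88, "speed": 0.95},
}
target = select_target_package(results, weights)
print(target)
```

A real evaluation system would tie the index set and weights to the data type, so that, for example, a low-light scene weights robustness indexes differently from a daytime traffic scene.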
In one possible design, the method further includes: the cloud center sends the target algorithm package and an algorithm analysis task to the first edge domain node, where the task instructs the first edge domain node to analyze the data received from the terminal device with the target package. Because the target package is determined from the data type of the sampled data, that is, from the scene, the first edge domain node, when executing the analysis task with it, fits the terminal device's current scene more closely, making the analysis result more practical and meeting the user's real-time scene-analysis requirements.
In one possible design, the method further includes: the cloud center determines the residual computing power of each of the multiple edge domain nodes it manages, the first edge domain node among them; it selects, as a second edge domain node, the node with the highest residual computing power; and it sends the target algorithm package and an algorithm analysis task to that second node, the task instructing it to analyze, with the target package, the terminal-device data received from the first edge domain node. In other words, after determining the target package from the sampled data reported by the first edge domain node, the cloud center need not send it directly back to that node; it may first pick the edge domain node with the most residual computing power among the nodes it governs and dispatch the package and task there, so that the computing power of the edge domain nodes is used reasonably. The second edge domain node may also be the same node as the first.
In one possible design, determining the residual computing power of each of the multiple managed edge domain nodes includes: the cloud center computes each node's residual computing power from its hardware resource performance, its data access volume, and the number of terminal devices connected to it. Hardware resource performance can be understood as the remaining CPU and GPU capacity of the node; the data access volume reflects the total number of algorithms the node is executing in real time; and the number of connected terminal devices is the number of devices that have established communication connections with the node.
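The patent names these three inputs but not how they combine. One hedged sketch of scoring nodes and picking the second edge domain node follows; the normalization and weights are invented for illustration:

```python
# Hypothetical sketch: score each edge domain node's residual computing power
# from the three factors named in the text, then pick the strongest node.
# The penalty coefficients below are invented assumptions.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_cpu_gpu: float      # hardware resource performance, normalized to 0..1
    running_algorithms: int  # reflects the node's data access volume
    connected_devices: int   # terminal devices connected to this node

def residual_power(node: EdgeNode) -> float:
    # Higher free compute raises the score; load and device count lower it.
    load_penalty = 0.02 * node.running_algorithms + 0.01 * node.connected_devices
    return max(node.free_cpu_gpu - load_penalty, 0.0)

def pick_second_node(nodes: list[EdgeNode]) -> EdgeNode:
    """Return the managed node with the highest residual computing power."""
    return max(nodes, key=residual_power)

nodes = [
    EdgeNode("edge-1", free_cpu_gpu=0.40, running_algorithms=8, connected_devices=20),
    EdgeNode("edge-2", free_cpu_gpu=0.75, running_algorithms=3, connected_devices=10),
]
print(pick_second_node(nodes).name)
```

Note that the first edge domain node is itself among the candidates, consistent with the design point that the second node may turn out to be the same node as the first.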
In a second aspect, a method for configuring a scene-based algorithm is provided, including:
the edge domain node sends sampled data to the cloud center; the edge domain node receives a first target algorithm package sent by the cloud center, where the first target package is determined from the analysis results the cloud center obtains by executing each of multiple algorithm packages on the sampled data; the multiple packages are determined by the cloud center according to the data type of the sampled data, which reflects the scene in which the terminal device acquired it; and the edge domain node loads the first target package so as to analyze the data received from the terminal device with it.
Thus, from the edge domain node's perspective, it analyzes the data received from the terminal device with a first target algorithm package that the cloud center determines and that changes with the scene of the sampled data; the first target package can differ as the algorithm comparison results differ, meeting the user's real-time scene-analysis requirements and improving the user experience.
In one possible design, the sampling data is data obtained by performing video frame extraction on video data shot by the terminal device, or the sampling data is picture data shot by the terminal device. The advantageous effects of this design can be seen in the description of the first aspect.
In one possible design, the edge domain node loading the first target algorithm package so as to analyze the data received from the terminal device with it includes: the edge domain node determines whether the first target package is already loaded locally; and, if it determines the package is not loaded locally, it loads it and analyzes the data received from the terminal device with it. That is, the optimal algorithm (the first target package) determined by the cloud center may already be the algorithm the edge domain node has loaded and is running, in which case the node need not load it again.
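A minimal sketch of that load-if-absent check follows; the runtime class and its package cache are hypothetical, standing in for whatever loading mechanism the edge node actually uses:

```python
# Hypothetical edge-node package cache with a load-if-absent check.

class EdgeNodeRuntime:
    def __init__(self) -> None:
        self.loaded_packages: set[str] = set()

    def ensure_loaded(self, package: str) -> bool:
        """Load the package only if absent; return True if a load happened."""
        if package in self.loaded_packages:
            # Already loaded and running this algorithm; skip reloading.
            return False
        self.loaded_packages.add(package)  # stand-in for the real load step
        return True

runtime = EdgeNodeRuntime()
first = runtime.ensure_loaded("pkg_face_v2")   # first delivery of the target
second = runtime.ensure_loaded("pkg_face_v2")  # same target delivered again
print(first, second)
```

The boolean return makes the no-op case observable, which is useful if the node reports back to the cloud center whether a reload actually occurred.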
In one possible design, the edge domain node receives a second target algorithm package and an algorithm analysis task sent by the cloud center, where the second target package was determined by the cloud center from sampled data sent by another edge domain node, and the task includes information identifying the terminal device to be analyzed; the edge domain node acquires, from the other edge domain node, the data collected by the terminal device identified in the task; and it loads the second target package so as to analyze the data received from the other edge node with it. That is, after one edge domain node reports data collected from its terminal device, the cloud center may not send the target package back to that node directly, but instead determine the edge domain node best placed to execute it and dispatch the package there. This balances computing power across edge domain nodes, allows a single node to host algorithm packages for multiple scenes, and to some extent realizes mixed-scene computing-power sharing across a cluster of edge domain nodes.
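A hedged sketch of how an edge domain node might handle such a cross-node task follows; the task fields, the peer-fetch call, and all names are invented for illustration and are not defined by the patent:

```python
# Hypothetical handling of a cross-node algorithm analysis task.
# The task dictionary's fields and the peer data fetch are invented stand-ins.

def fetch_from_peer(peer_node: str, device_id: str) -> bytes:
    """Placeholder for pulling the terminal device's data from another edge node."""
    return b"data-from-" + device_id.encode()

def handle_analysis_task(task: dict, loaded: set[str]) -> bytes:
    # The task carries the second target package, the peer node that collected
    # the data, and the terminal device whose data is to be analyzed.
    package = task["target_package"]
    if package not in loaded:
        loaded.add(package)  # stand-in for loading the second target package
    data = fetch_from_peer(task["peer_node"], task["device_id"])
    return data  # a real node would now run the loaded package over this data

loaded: set[str] = set()
result = handle_analysis_task(
    {"target_package": "pkg_mixed_v1", "peer_node": "edge-1", "device_id": "cam42"},
    loaded,
)
print(result)
```

The same load-if-absent guard applies here, so a node accumulating tasks for several scenes ends up hosting a mix of packages, which is the cluster-level computing-power sharing the design describes.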
In a third aspect, a cloud center is provided, comprising: a communication module configured to receive sampled data sent by a first edge domain node; and a processing module configured to determine multiple algorithm packages corresponding to the data type of the sampled data, where the data type reflects the scene in which the terminal device acquired the data. The processing module is further configured to execute each of the multiple algorithm packages on the sampled data to obtain a result for each package, and to determine, from those results, a target algorithm package used to analyze the data collected by the terminal device.
The advantageous effects of the third aspect can be seen from the description of the first aspect.
In one possible design, the sampled data is obtained by extracting video frames from video shot by the terminal device, or is picture data captured by the terminal device.
In one possible design, the processing module is specifically configured to: determine multiple evaluation indexes in the algorithm evaluation system corresponding to the data type of the sampled data; execute each of the multiple algorithm packages on the sampled data with reference to those indexes and obtain the value of each index for each package; compute an algorithm evaluation value for each package from those index values and take it as the package's algorithm result; and select, as the target package, the package with the highest evaluation value.
In one possible design, the communication module is further configured to: send the target algorithm package and an algorithm analysis task to the first edge domain node, where the task instructs the first edge domain node to analyze the data received from the terminal device with the target package.
In one possible design, the processing module is further configured to: determine the residual computing power of each of the multiple edge domain nodes managed by the cloud center, the first edge domain node among them; and select, as a second edge domain node, the node with the highest residual computing power. The communication module is further configured to: send the target algorithm package and an algorithm analysis task to the second edge domain node, where the task instructs the second edge domain node to analyze, with the target package, the terminal-device data received from the first edge domain node.
In one possible design, the processing module is specifically configured to: determine the residual computing power of each edge domain node from the hardware resource performance, the data access volume, and the number of connected terminal devices of each of the multiple edge domain nodes.
In a fourth aspect, an edge domain node is provided, comprising: a communication module configured to send sampled data to the cloud center, and further configured to receive a first target algorithm package sent by the cloud center, where the first target package is determined from the results the cloud center obtains by executing each of multiple algorithm packages on the sampled data, the multiple packages being determined by the cloud center according to the data type of the sampled data, which reflects the scene in which the terminal device acquired it; and a processing module configured to load the first target package so as to analyze the data received from the terminal device with it.
The advantageous effects of the fourth aspect can be seen in the description of the second aspect.
In one possible design, the sampled data is obtained by extracting video frames from video shot by the terminal device, or is picture data captured by the terminal device.
In one possible design, the processing module is specifically configured to: determine whether the first target algorithm package is already loaded locally on the edge domain node; and, if it is not, load the first target package so as to analyze the data received from the terminal device with it.
In one possible design, the communication module is further configured to receive a second target algorithm package and an algorithm analysis task sent by the cloud center, where the second target package is determined by the cloud center from sampled data sent by another edge domain node and the task includes information identifying the terminal device to be analyzed; the processing module is further configured to acquire, from the other edge domain node, the data collected by the terminal device identified in the task, and to load the second target package so as to analyze the data received from the other edge node with it.
In a fifth aspect, a cloud center is provided, comprising a memory, a processor, and a transceiver, the memory being coupled to the processor. The memory stores computer program code comprising computer instructions; the transceiver receives and transmits data. When the processor executes the computer instructions, the cloud center performs any of the scene-based algorithm configuration methods provided by the first aspect or its possible designs.
In a sixth aspect, an edge domain node is provided, comprising a memory, a processor, and a transceiver, the memory being coupled to the processor. The memory stores computer program code comprising computer instructions; the transceiver receives and transmits data. When the processor executes the computer instructions, the edge domain node performs any of the scene-based algorithm configuration methods provided by the second aspect or its possible designs.
In a seventh aspect, the present application provides a chip system, where the chip system is applied to a cloud center. The system-on-chip includes one or more interface circuits, and one or more processors. The interface circuit and the processor are interconnected through a line; the interface circuit is to receive a signal from a memory of the cloud center and to send the signal to the processor, the signal including computer instructions stored in the memory. When the processor executes the computer instructions, the cloud center performs a scenario-based algorithm configuration method as provided by the first aspect or its respective possible design.
In an eighth aspect, the present application provides a chip system, which is applied to an edge domain node. The system-on-chip includes one or more interface circuits, and one or more processors. The interface circuit and the processor are interconnected through a line; the interface circuit is configured to receive a signal from a memory of the edge domain node and to send the signal to the processor, the signal comprising computer instructions stored in the memory. When the processor executes the computer instructions, the edge domain nodes perform a scenario-based algorithm configuration method as provided by the second aspect or its respective possible design.
In a ninth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform a method as provided by the first aspect or a corresponding possible design thereof.
In a tenth aspect, the present application provides a computer-readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform a method as provided by the second aspect or its respective possible design.
In an eleventh aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform a method as provided by the first aspect or its corresponding possible design.
In a twelfth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the method as provided by the second aspect or its corresponding possible design.
It is understood that any of the cloud center, edge domain node, chip system, computer-readable storage medium, or computer program product provided above can be applied to the corresponding method provided above; for the beneficial effects achievable, refer to the beneficial effects of the corresponding method, which are not repeated here.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a scene-based algorithm configuration method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for selecting edge domain nodes according to an embodiment of the present application;
fig. 4 is a software framework schematic diagram of a cloud center and a software framework schematic diagram of each node in an edge domain according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for managing a terminal device by a cloud center according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for managing a terminal device by a node in an edge domain according to an embodiment of the present application;
fig. 7 is a schematic diagram of a possible composition of a server in a cloud center according to an embodiment of the present application;
fig. 8 is a schematic diagram of a possible composition of a server in a cloud center and a server in an edge domain according to an embodiment of the present application;
fig. 9 is a schematic diagram of a possible composition of an edge domain node according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The method and the device provided in the present application can be applied to a scene in which a cloud center in the Internet of things manages edge domain nodes, for example, a scene in which the edge domain nodes analyze video data or picture data.
In a scene in which a cloud center manages edge domain nodes that analyze video data or picture data, a static method for determining the optimal algorithm cannot meet a user's real-time scene analysis requirements when an edge domain node performs video analysis. To address this, the present application provides a scene-based algorithm configuration method that allows an edge domain node to support integrated extension of algorithms. The edge domain node can send sampled data to the cloud center, and the cloud center can determine multiple algorithm packages corresponding to the data type of the sampled data, where the data type of the sampled data is used to reflect the scene in which the terminal device acquired the sampled data. The cloud center executes each of the multiple algorithm packages on the sampled data and obtains a target algorithm package by comparing the algorithm results of executing the multiple algorithm packages. The target algorithm package can be understood as the optimal algorithm package, among the multiple algorithm packages, obtained through the comparison. Because the data type of the sampled data reflects the scene in which the terminal device acquired it, when that scene changes, the data type determined by the cloud center changes accordingly, the multiple algorithm packages determined by the cloud center change with the data type, and the target algorithm package finally determined by the cloud center from the multiple algorithm packages changes as well.
Therefore, in the present application, the target algorithm package decided by the cloud center may differ depending on the scene in which the terminal device acquires data. The target algorithm package decided by the cloud center can meet the user's real-time scene analysis requirements and improve user experience. In addition, in the present application, when the cloud center decides the target algorithm package, it does not need to rely on information other than the sampled data, such as the number and types of video devices or the devices' application-specific additional information, to participate in the decision, so the implementation complexity is low.
It should be noted that the algorithms referred to in the present application, such as multiple algorithm packages, each algorithm package, and the target algorithm package, may be applied to not only the calculation of media data but also the calculation of non-media data. That is, the data referred to in the present application may be media data or non-media data. When media data is used, the sampling data in the present application can be understood as media sampling data; when non-media data, the sample data of the present application may be understood as non-media sample data.
For example, the media data or media sample data may be video data or picture data acquired by a camera.
The non-media sample data may be, for example, intelligent structured data, or sensing-type data from devices other than cameras. Examples include smoke sensor data, door magnetic switch sensing data, water level sensor data, temperature and humidity sensor data, natural gas sensor data, noise sensor data, wind speed sensor data, ambient light sensor data, sunlight sensor data, infrared light sensor data, and ultraviolet light sensor data.
The scene-based algorithm configuration method provided in the present application can be applied to the network architecture shown in fig. 1, which in turn can be applied to the Internet of things. The network architecture may include a cloud center, a plurality of edge domain nodes in an edge domain (e.g., edge domain node 1, edge domain node 2, …, edge domain node n in fig. 1), and a plurality of terminal devices (terminal device 1, terminal device 2, …, terminal device m) managed by each edge domain node. m and n are integers greater than 1.
In the application, the cloud center, which may be referred to as "cloud" for short, may be a public cloud center facing an internet scene, or may be a private cloud center facing an intranet/private network; the cloud center focuses on service data fusion and big data multidimensional analysis application. It should be noted that the function of the cloud center may be implemented by one device, or may be implemented by multiple devices together, which is not limited in this embodiment of the present application. It should be understood that, in the embodiment of the present application, the steps performed by the cloud center are specifically performed by one or more devices in the cloud center.
The edge domain, which may be referred to as the "edge" for short, can be understood as a logical location relative to the cloud center, and a communication network may lie between the edge domain and the cloud center. The edge domain focuses on sensing data aggregation, storage, processing, intelligent application, and the like. In the present application, the edge domain may include multiple edge domain nodes, and each edge domain node may be configured to perform algorithm analysis on data sent by the multiple terminal devices it manages.
The terminal device focuses on acquisition of multi-dimensional sensing data and front-end intelligent processing. For example, the terminal device may be a camera, a sensing device, or a smart device, such as an enhanced camera, a smart camera, an Internet of things sensing device (e.g., a Global Positioning System (GPS) device, a wireless fidelity (Wi-Fi) probe, an access-control product, a smart helmet, an environment monitoring terminal, or a checkpoint mobile terminal), an industrial smart device, a robot, or a drone. A camera, for example, can monitor the environment, people, vehicles, and objects within its viewing angle range, and transmit the video data acquired through monitoring to an edge domain node for algorithm analysis.
That is, in this network architecture, data can be sent from the terminal devices to the edge domain, realizing "edge aggregation into the domain"; data can then flow from the edge domain to the cloud center, realizing "data entering the cloud". The edge domain and the cloud center can be divided into multiple levels and types, and the format and content of the data aggregated by the edge domain and the data transmitted to the cloud center may differ depending on the application.
With the network architecture, the method for configuring the algorithm based on the scenario provided by the present application may refer to fig. 2, and the method includes:
201. and the terminal equipment acquires the sampled data and sends the sampled data to the first edge domain node. Namely, the first edge domain node receives the sampled data sent by the terminal device.
In some embodiments, the sampled data may be obtained by sampling data acquired in real time by the terminal device, and the terminal device sends the sampled data to the first edge domain node.
In some embodiments, the sampled data may also be obtained by sending, by the terminal device, data acquired in real time to the first edge domain node and sampling, by the first edge domain node, the data received in real time.
For example, when the terminal device is a camera, the sample data may be understood as media sample data, for example, the sample data is data obtained by performing video frame extraction on video data shot by the terminal device, or the sample data is picture data that is captured by the camera in real time.
When the sampling data is video data, the sampling data can be obtained by performing video frame extraction on the video data obtained by real-time shooting by a camera, and then the camera sends the sampling data to the first edge domain node; or, the sampling data may be obtained by sending video data obtained by real-time shooting by the camera to the first edge domain node and performing video frame extraction on the received video data by the first edge domain node.
It can be understood that the sampled data may be obtained periodically, for example on a timer, by either the terminal device or the first edge domain node.
It is to be understood that when the terminal device is a camera, the video data or picture data may be data corresponding to a scene within a range of viewing angles captured by the camera. For example, objects in the scene include pedestrians, vehicles, animals, buildings, signs, flowers, trees, and the like.
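The periodic frame extraction described in step 201 can be illustrated with a minimal sketch that treats a video as an already-decoded sequence of frames. The function name and the interval handling are assumptions; in a real system the frames would be decoded from a stream and transported over the network.

```python
def sample_frames(frames, interval):
    """Return every `interval`-th frame as sampled data.

    `frames` is any sequence of decoded frames; decoding and
    network transport are omitted in this sketch.
    """
    if interval <= 0:
        raise ValueError("interval must be positive")
    # Keep frames 0, interval, 2*interval, ... as the sampled data.
    return frames[::interval]
```

Whether the terminal device or the first edge domain node runs this step only changes where the sampling happens, not the sampling itself.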
202. And the first edge domain node sends the sampling data to the cloud center. Namely, the cloud center receives the sampling data sent by the first edge domain node.
When the first edge domain node receives the sampled data, on a timer or otherwise periodically, it can send the received sampled data to the cloud center for analysis.
203. The cloud center determines a plurality of algorithm packages corresponding to the data types of the sampling data, and the data types of the sampling data are used for reflecting scenes when the terminal equipment acquires the sampling data.
When the cloud center receives the sampled data, the data type of the sampled data may be determined first.
In some embodiments, suppose the terminal device is a camera located at a school gate that can capture, within its viewing angle range, the scene around the road at the gate. Taking the sampled data as picture data as an example, the cloud center may determine the number of people in the picture data through a face detection algorithm. During school arrival and dismissal periods, when the number of students and parents at the gate is large, for example when it reaches a certain threshold, the cloud center may consider that people are gathering in the scene corresponding to the picture data, and may therefore determine that the type of the picture data is the people gathering type. That is, the people gathering type reflects that people were gathering in the camera's scene when the camera acquired the picture data.
When the cloud center determines that the data type of the picture data is the people gathering type, the multiple algorithm packages corresponding to the people gathering type can be, for example, multiple people strenuous movement detection algorithms, and each people strenuous movement detection algorithm can be used for detecting whether a person chasing situation or a person alarming situation possibly exists or not.
Alternatively, the plurality of algorithm packages corresponding to the person gathering type may be, for example, a plurality of gait detection algorithms. The gait detection algorithm is a non-contact biometric identification technique. For example, pedestrian analysis can be performed by using a consensus algorithm in combination with prior knowledge such as a human body model, a motion model, motion constraint and the like. Then, the angle change trajectory of the main joints of the human body is obtained from the analysis result. These traces are normalized by structure and time and used as dynamic features for identification. For example wanted persons can be identified from the population.
It should be appreciated that the cloud center holds multiple person strenuous-movement detection algorithms, or multiple gait detection algorithms, because different algorithm developers design different algorithms.
In addition, the multiple algorithm packages corresponding to the people gathering type may also be face picture recognition and comparison algorithms, for example to identify special persons from the crowd; or they may be multiple face detection algorithms, for example to detect the positions of the people in the crowd in the scene.
In some embodiments, continuing with the example of the camera capturing the scene around the school gate road: when, outside school arrival and dismissal periods, the cloud center detects few people at the school gate but many passing vehicles, the cloud center may consider that there are many vehicles in the scene corresponding to the picture data. Accordingly, the cloud center may determine that the type of the picture data is the vehicle aggregation type. That is, the vehicle aggregation type reflects that vehicles were gathering in the camera's scene when the camera acquired the picture data.
When the cloud center determines that the data type of the image data is the vehicle aggregation type, the multiple algorithm packages corresponding to the vehicle aggregation type may be multiple solid-line lane change detection algorithms, for example, and each solid-line lane change detection algorithm may be used to detect whether a vehicle crosses a solid-line lane change condition, so as to obtain a driving condition that does not comply with traffic rules.
Similarly, the cloud center has multiple solid-line lane change detection algorithms because different algorithm developers design different algorithms.
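The scene-type decision in the two examples above amounts to comparing detected counts against thresholds. A minimal sketch follows; the threshold values and the function name are assumptions, since the text only says "a certain threshold":

```python
def classify_scene(face_count, vehicle_count,
                   person_threshold=20, vehicle_threshold=15):
    """Map detected counts to a data type reflecting the scene.

    The thresholds are illustrative placeholders.
    """
    if face_count >= person_threshold:
        return "people gathering"
    if vehicle_count >= vehicle_threshold:
        return "vehicle aggregation"
    return "other"
```

The cloud center would then look up the set of candidate algorithm packages registered for the returned type.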
When the cloud center analyzes the sampled data, it may be assumed that the service configurations of the nodes are similar and their computing power is similar. Similar service configurations can be understood as, for example, each edge domain node being a server whose hardware parameters, such as the central processing unit (CPU) and graphics processing unit (GPU), are the same; similar computing power can be understood as the CPU and GPU computing power being comparable.
204. And the cloud center executes each algorithm packet in the multiple algorithm packets on the sampled data to obtain an algorithm result corresponding to each algorithm packet.
In some embodiments, the cloud center may evaluate each algorithm package of a plurality of algorithm packages (algorithm sets) determined by the cloud center through an algorithm evaluation system corresponding to a data type of the sampled data to determine an algorithm package with a highest evaluation value among the plurality of algorithm packages.
In some embodiments, specifically, the cloud center may first determine a plurality of evaluation indexes in an algorithm evaluation system corresponding to the data type of the sampled data, and calculate the sampled data through each algorithm package of the plurality of algorithm packages with reference to the plurality of evaluation indexes to obtain values of the plurality of evaluation indexes corresponding to each algorithm package. And the cloud center determines the algorithm evaluation value corresponding to each algorithm package according to the values of the evaluation indexes corresponding to each algorithm package, and takes the algorithm evaluation value corresponding to each algorithm package as the algorithm result corresponding to each algorithm package.
For example, assume the data type of the sampled data is the people gathering type. After the cloud center determines through step 203 that the multiple algorithm packages corresponding to this type are multiple face detection algorithm packages, the cloud center may first determine multiple evaluation indexes in the algorithm evaluation system corresponding to this type. It can be understood that when the algorithm evaluation system is a face detection algorithm evaluation system, the goal of face detection is to find the positions of all faces in a picture; the output of the algorithm is the coordinates of each face's circumscribed rectangle in the picture and may also include information such as pose and inclination angle. In this case, the evaluation indexes used to evaluate the quality of a face detection algorithm may include the detection rate, the false alarm rate, the processing speed in frames per second (FPS), and the Intersection-over-Union (IoU).
Wherein, the detection rate can be understood as: the number of detected faces/the number of all faces in the picture, the number of detected faces can be understood as the number of correctly detected faces, and the number of all faces in the picture can be understood as the total number of faces to be detected.
The false alarm rate is understood as: number of false positives/number of all non-face scanning windows in the picture.
FPS can be understood as the number of pictures the face detection algorithm can process per second; that is, FPS is used to evaluate the speed of the face detection algorithm.
IoU can be understood as the overlap between the predicted bounding box of a face and the ground-truth box, i.e., the ratio of their intersection to their union. The optimal case is complete overlap, i.e., a ratio of 1.
Therefore, with reference to these four evaluation indexes, the calculation of the sample data may be performed by each of the plurality of face detection algorithm packages (face detection algorithm sets), and the values of the four evaluation indexes corresponding to each of the face detection algorithm packages may be obtained. And the cloud center determines an algorithm evaluation value corresponding to each face detection algorithm package according to the values of the four evaluation indexes corresponding to each face detection algorithm package, and takes the algorithm evaluation value corresponding to each face detection algorithm package as an algorithm result corresponding to each algorithm package.
Determining the algorithm evaluation value corresponding to each face detection algorithm package according to the values of its four evaluation indexes can be understood as follows: with reference to the scoring standard corresponding to each of the four evaluation indexes, each index of each face detection algorithm is scored to obtain a comprehensive score for that algorithm, and this comprehensive score is taken as the algorithm evaluation value of the corresponding face detection algorithm package.
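The four indexes and the comprehensive scoring described above can be illustrated with a hedged sketch. The IoU computation follows the standard definition; the weights, the FPS normalization against 30 frames per second, and the function names are assumptions, since the text does not fix a scoring standard.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def evaluate(detected, total_faces, false_alarms, windows, fps, mean_iou,
             weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four indexes into one evaluation value (weights assumed)."""
    detection_rate = detected / total_faces          # correctly detected / all faces
    false_alarm_rate = false_alarms / windows        # false positives / non-face windows
    fps_score = min(fps / 30.0, 1.0)                 # normalize speed, 30 FPS assumed full marks
    w1, w2, w3, w4 = weights
    return (w1 * detection_rate + w2 * (1 - false_alarm_rate)
            + w3 * fps_score + w4 * mean_iou)
```

Higher detection rate and IoU raise the score; a higher false alarm rate lowers it, which matches the direction of each index described above.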
For another example, when the multiple algorithm packages are multiple face image recognition and comparison algorithms, the evaluation indexes corresponding to these packages may include the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). FRR can be understood as the proportion of false rejections among all genuine (same-person) matching cases: if two samples are from the same person but are mistaken by the system as being from different people, this is a false rejection. FAR can be understood as the proportion of false acceptances among all impostor (different-person) matching cases: if two samples are from different people but are mistaken by the system as being from the same person, this is a false acceptance. Similar to the face detection algorithm packages, the implementation of analyzing the picture data with multiple face recognition algorithm packages to obtain the analysis result corresponding to each package may refer to the implementation for the face detection algorithm packages.
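The FRR and FAR definitions above can be computed directly from labeled comparison trials. A minimal sketch follows; the input representation (a list of (same_person, accepted) pairs) is an assumption for illustration.

```python
def frr_far(results):
    """Compute (FRR, FAR) from comparison trials.

    `results` is a list of (same_person, accepted) booleans, one
    entry per matching trial.
    """
    genuine = [r for r in results if r[0]]       # same-person trials
    impostor = [r for r in results if not r[0]]  # different-person trials
    # FRR: rejected same-person pairs / all same-person pairs
    frr = sum(1 for same, acc in genuine if not acc) / len(genuine)
    # FAR: accepted different-person pairs / all different-person pairs
    far = sum(1 for same, acc in impostor if acc) / len(impostor)
    return frr, far
```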
205. And the cloud center determines a target algorithm package from the multiple algorithm packages according to the algorithm result corresponding to each algorithm package.
It can be understood that the target algorithm package is used by the edge domain node to analyze the data collected by the terminal device. For example, when the target algorithm package is a person strenuous-movement detection algorithm, it can be used to analyze the people in the video captured by the camera to determine whether strenuous movement is occurring.
In some embodiments, the cloud center determining the target algorithm package from the plurality of algorithm packages according to the algorithm result corresponding to each algorithm package may include:
and the cloud center determines the algorithm packet with the highest algorithm evaluation value in the multiple algorithm packets as a target algorithm packet according to the algorithm evaluation value corresponding to each algorithm packet.
For example, when a comprehensive score is obtained for each face detection algorithm according to the scoring standard for face detection algorithm packages, the face detection algorithm with the highest comprehensive score may be taken as the target face detection algorithm. Compared with the other algorithms in the face detection algorithm package set, the algorithm with the highest comprehensive score is better suited to analyzing the picture data captured in the current scene; that is, the target face detection algorithm detects faces in the current scene more accurately, which better supports the edge domain node in executing the face detection analysis task in the camera's current shooting scene.
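The decision in step 205 reduces to picking the package with the highest evaluation value; a minimal sketch (the package identifiers and values are illustrative):

```python
def select_target_package(evaluations):
    """Return the identifier of the algorithm package with the highest
    evaluation value.

    `evaluations` maps package identifier -> algorithm evaluation value.
    """
    return max(evaluations, key=evaluations.get)
```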
206. And the cloud center sends the target algorithm packet and the algorithm analysis task to the first edge domain node. Namely, the first edge domain node receives a target algorithm packet and an algorithm analysis task sent by the cloud center.
The target algorithm package may be understood as an algorithm framework used by the first edge domain node to execute the algorithm analysis task. The algorithm analysis task may be understood as a command instructing the first edge domain node to execute the target algorithm package, that is, instructing the first edge domain node to analyze data received from the terminal device through the target algorithm package.
For example, the cloud center may send an algorithm integration instruction to the first edge domain node, where the algorithm integration instruction includes a target algorithm package (e.g., a target face detection algorithm package) or an algorithm analysis task, and the target algorithm package or the algorithm analysis task may include an identifier of the target algorithm package, where the identifier may include, for example, a version number or a sequence number of the target algorithm package. Alternatively, the identification of the target algorithm package is indicated in a field outside of the target algorithm package and the algorithm analysis task.
207. And the first edge domain node loads the target algorithm packet so as to analyze the data received from the terminal equipment through the target algorithm packet.
In some embodiments, it is possible that the target algorithm package determined by the cloud center through the algorithm comparison is already the algorithm package currently running by the first edge domain node, and then the first edge domain node does not need to load the target algorithm package again at this time.
Illustratively, when the first edge domain node receives the algorithm integration instruction, it determines whether it has already locally loaded the target algorithm package; if not, it loads the target algorithm package so as to analyze the data received from the terminal device through the target algorithm package. For example, the first edge domain node may determine, according to fields such as the version number or sequence number of the target algorithm package, whether the target algorithm package has already been loaded and is running locally; if so, the first edge domain node takes no further action. If it determines that the package is not loaded and running, the first edge domain node may load the received target algorithm package to analyze the data received from the terminal device using it.
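The load-if-absent check by version number or sequence number described above can be sketched as follows. The class and method names are assumptions, and the actual loading of an algorithm framework is reduced to bookkeeping on a set of version identifiers.

```python
class EdgeNode:
    """Minimal sketch of an edge domain node's load-if-absent check."""

    def __init__(self):
        self.loaded_versions = set()  # versions of packages already loaded and running

    def handle_integration(self, package_version):
        """Process an algorithm integration instruction for one package."""
        if package_version in self.loaded_versions:
            return "already-running"  # no action needed per the text
        # Stands in for downloading and loading the algorithm package.
        self.loaded_versions.add(package_version)
        return "loaded"
```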
It can be understood that, in the present application, the data received from the terminal device is the real-time scene data that the first edge domain node receives from the terminal device. The sampled data is obtained by the terminal device or the first edge domain node periodically sampling the real-time scene data, for example the data obtained by video frame extraction or captured picture data.
For example, the algorithm analysis task may be understood as that the first edge domain node performs algorithm analysis on video data captured by the camera by using the face image recognition comparison algorithm package to determine a face within a view angle range of the camera, and the like. The face image recognition comparison algorithm package can be applied to aspects of security monitoring, testimony comparison, human-computer interaction, social contact, entertainment and the like.
Therefore, with the scene-based algorithm configuration method described above, the edge domain node can report the sampled data acquired from the terminal device, and the cloud center performs algorithm comparison of multiple algorithm packages on the sampled data and decides the optimal algorithm package, namely the target algorithm package. Because the data type of the sampled data reflects the scene in which the terminal device acquired it, when that scene changes, the data type determined by the cloud center changes accordingly, the multiple algorithm packages determined by the cloud center change with the data type, and the target algorithm package finally determined by the cloud center from the multiple algorithm packages changes as well. Therefore, in the present application, the target algorithm package decided by the cloud center can differ according to the scene in which the terminal device acquires the sampled data, thereby meeting the user's real-time requirements for analyzing different scenes and improving user experience. In addition, when the cloud center decides the optimal algorithm package, it does not need to rely on the number and types of video devices, or the devices' application-specific additional information, as in the prior art, so the implementation complexity is low.
In addition, it can be understood that, among the multiple edge domain nodes in the edge domain managed by the cloud center, the algorithm analysis tasks executed by each edge domain node may differ, so the residual computing power of each edge domain node may also differ. Therefore, taking this difference into account, before performing step 206 the cloud center may estimate the residual computing power of each edge domain node, determine the edge domain node with the most residual computing power, and issue the target algorithm package to that node. The edge domain node with the most residual computing power may be the first edge domain node, or it may be another edge domain node other than the first edge domain node.
Therefore, as shown in fig. 3, in order to select the edge domain node, after step 205, step 206 may not be performed, but step 208 may be performed.
208. The cloud center determines residual computing power respectively corresponding to a plurality of edge domain nodes managed by the cloud center, wherein the edge domain nodes comprise a first edge domain node.
In some embodiments, the cloud center may determine the remaining computation power corresponding to each edge domain node according to the hardware resource performance, the data access amount, and the number of terminal devices accessed corresponding to each node of the plurality of edge domain nodes.
Illustratively, for any edge domain node, its hardware resource performance Vserver can be understood as the residual computing power of the CPU and GPU of the edge domain node. It can be understood that if the CPU and/or GPU of the edge domain node does not support running the target algorithm, the residual computing power Vcomputer of the edge domain node can be directly determined to be 0;
data access volume VdataThe method can be used for reflecting the real-time algorithm execution total number of the edge domain nodes, or reflecting the real-time algorithm execution situation. I.e. the data access volume V of the edge domain nodedataThe larger the size, the more computationally intensive the edge domain node is.
The number of terminal devices accessed by the edge domain node, Vdevice: in general, the more terminal devices the edge domain node is effectively serving in real time, the fewer residual computing resources the edge domain node has.
Combining these three indexes, the residual computing power Vcomputer of the edge domain node can be expressed as:

Vcomputer = f(Vserver × βserver + Vdata × βdata + Vdevice × βdevice)

where f() represents the function for calculating the residual computing power Vcomputer, βserver represents the weight coefficient of the hardware resource performance Vserver, βdata represents the weight coefficient of the data access volume Vdata, and βdevice represents the weight coefficient of the number of terminal devices Vdevice accessed by the edge domain node.
It can be understood that the larger the value of Vcomputer, the smaller the residual computing power of the edge domain node.
209. And the cloud center determines a second edge domain node according to the residual computing power respectively corresponding to the edge domain nodes, wherein the second edge domain node is the edge domain node with the highest residual computing power in the edge domain nodes.
For example, the cloud center may sort the residual computing power values corresponding to the edge domain nodes and take the edge domain node with the smallest V_computer value as the second edge domain node, i.e., the second edge domain node is considered to have the largest residual computing power. In this case, the second edge domain node may be the first edge domain node, or may be another edge domain node other than the first edge domain node.
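Putting steps 208 and 209 together, the load-score computation and node selection can be sketched as follows. Here f() is taken as the identity and the β weight values are invented for illustration; the text fixes only that a larger V_computer means less residual computing power, and that a node whose CPU/GPU cannot run the target algorithm has residual computing power 0.

```python
# Hypothetical sketch of steps 208-209: score each edge domain node and pick
# the one with the SMALLEST V_computer, i.e. the most residual computing power.

def load_score(v_server, v_data, v_device,
               b_server=0.5, b_data=0.3, b_device=0.2):
    """V_computer = f(V_server*b_server + V_data*b_data + V_device*b_device),
    with f() taken as the identity for illustration."""
    return v_server * b_server + v_data * b_data + v_device * b_device

def pick_second_edge_node(nodes):
    """`nodes` maps a node name to its (V_server, V_data, V_device) metrics,
    or to None if the node's CPU/GPU cannot run the target algorithm at all
    (such a node is excluded outright)."""
    scored = {name: load_score(*m) for name, m in nodes.items() if m is not None}
    return min(scored, key=scored.get)   # smallest score = most residual power

nodes = {
    "A": (0.9, 0.8, 0.7),   # heavily loaded first edge domain node
    "B": (0.2, 0.1, 0.3),   # mostly idle
    "C": None,              # cannot run the target algorithm
}
print(pick_second_edge_node(nodes))   # B
```

As the text notes, the selected node may turn out to be the first edge domain node itself, in which case step 206 simply continues.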
If the cloud center determines that the second edge domain node is the first edge domain node, after step 209, step 206 may be continued. If the cloud center determines that the second edge domain node is not the first edge domain node, i.e., not an edge domain node that sent the sampled data to the cloud center, then step 210 may continue.
210. And the cloud center sends the target algorithm packet and the algorithm analysis task to the second edge domain node. Namely, the second edge domain node receives the target algorithm packet and the algorithm analysis task sent by the cloud center.
In step 210, the algorithmic analysis task may include address information of the first edge domain node.
Similar to step 206, when the second edge domain node receives the target algorithm package and the algorithm analysis task, it may also first determine whether it has locally loaded the algorithm framework for running the target algorithm package; the determination manner may refer to step 206.
211. And the second edge domain node loads the target algorithm packet so as to analyze the data received from the terminal equipment through the target algorithm packet.
The specific implementation of step 211 can be seen in the implementation manner of the first edge domain node in step 207.
It can be understood that, assuming the first edge domain node (or the second edge domain node) currently stores a first target algorithm package and its corresponding algorithm analysis task, it may also receive a second target algorithm package and a corresponding algorithm analysis task sent by the cloud center, where the second target algorithm package is determined by the cloud center based on sampled data sent by another edge domain node, and the algorithm analysis task includes information about the terminal device to be analyzed. In this case, the first edge domain node (or the second edge domain node) may obtain, from the other edge domain node, the data acquired by the terminal device corresponding to that terminal device information, and then load the second target algorithm package so as to analyze the data received from the other edge domain node through it.
Therefore, when the cloud center sends the target algorithm package to the edge domain nodes, the residual computing power of each edge domain node can be compared firstly, the edge domain node with the most residual computing power is selected to execute the algorithm analysis task of the target algorithm package, and the reasonable use of the computing power of the edge domain nodes can be achieved.
In addition, in the Internet of Things, when a new terminal device accesses an edge domain node, an adapter corresponding to the new terminal device can be created in the edge domain node; or, when a terminal device communicating with the edge domain node changes, the adapter corresponding to that terminal device can be updated in the edge domain node. The adapter here can be understood as a device access driver for connecting the terminal device to the edge domain node. However, this only provides access adaptation for the terminal device; it does not provide management of terminal device access.
Therefore, in the application, a software module for managing the access of the terminal device to the edge domain node may be added to the cloud center, and a software module for managing the access of the terminal device to the edge domain node may be added to each edge domain node in the edge domain. The user can add the terminal device in the cloud center and execute the operation of binding the edge domain node by the terminal device. Or, the user may also add the terminal device to the edge domain node and perform an operation of binding the edge domain node by the terminal device, so as to implement management when the cloud center or the edge domain node accesses the terminal device.
As shown in fig. 4, in the present application, a software framework of a cloud center may include an edge management module, a device model repository, a device access driver repository, an algorithm repository, and an edge application repository.
The software framework of each edge domain node in the edge domain may include an Internet of Things edge service module, which may include an edge proxy and an extensible Internet of Things edge service framework; the Internet of Things edge service framework may include a device management framework, an application integration framework, a device access framework, and an algorithm integration framework.
The modules and warehouses described above may each be understood as a software module.
The edge management module may be configured to manage each edge domain node in the edge domain. For example, an edge agent in an edge domain node may register with the edge management module; when the registration succeeds, the edge management module may deliver models from the equipment model warehouse, or device access drivers from the equipment access drive warehouse, to the registered edge domain node.
The equipment model warehouse can be used for storing basic information of the terminal equipment, the operation interfaces that can be provided for the terminal equipment, alarm functions, and the like. It can be understood as a digital model of the terminal equipment.
The equipment access drive warehouse can be used for supporting the access of terminal equipment to edge domain nodes.
The algorithm warehouse can be used for storing various algorithm packages, which can be applied to the edge domain nodes.
The edge application repository can be understood as a repository of applications for deployment in edge domain nodes. For example, an application may be a monitoring system, an access control system, and the like. An application can serve as a data display window on the edge domain node; terminal device access information, the algorithm integration framework, and other content can be displayed in the application window.
The edge proxy in the edge domain may be configured to support cloud-edge communication, for example, the edge proxy may register with an edge management module in the cloud center, and after the registration is successful, the edge management module may communicate with the edge proxy.
The device management framework corresponds to the equipment model warehouse in the cloud center and can be used for loading the device models in the equipment model warehouse to generate device controls, device configuration, device add/delete forms, and other device management interfaces.
The application integration framework corresponds to the edge application repository in the cloud center and can be used for integrating the applications in the edge application repository and loading and running them.
The device access framework corresponds to the equipment access drive warehouse in the cloud center and can be used for loading and running device access drivers.
The algorithm integration framework corresponds to the algorithm warehouse in the cloud center and can be used for integrating the algorithm framework and loading algorithms for operation, that is, executing algorithm analysis tasks. For example, an algorithm analysis task may be understood as the face detection described above.
It should be noted that, with the software framework of the cloud center and the edge domain nodes provided in the present application: step 202 may be implemented as the edge proxy in the first edge domain node sending the sampled data to the edge management module of the cloud center; step 203 may be implemented as the edge management module in the cloud center determining, from the algorithm warehouse, a plurality of algorithm packages corresponding to the type of the sampled data; step 204 may be implemented as the edge management module in the cloud center triggering the algorithm analysis process of the plurality of algorithm packages; step 205 may be implemented as the edge management module in the cloud center determining the target algorithm package according to the analysis result corresponding to each algorithm package; step 206 may be implemented as the edge management module in the cloud center sending the target algorithm package and the algorithm analysis task to the first edge domain node; and step 207 may be implemented as the Internet of Things edge service module in the first edge domain node triggering the algorithm integration framework to integrate the target algorithm package and performing, through the target algorithm package, the algorithm analysis task on the data type corresponding to the sampled data.
The steps 208 to 210 can be executed by an edge management module in the cloud center, and the step 211 can be executed by an algorithm integration framework in the edge domain node.
The above software framework of the cloud center and the edge domain nodes is applicable to the following two scenes in which terminal equipment accesses the edge domain.
Scene 1: assume that the edge proxies in edge domain nodes A, B, and C of the edge domain have registered with the edge management module of the cloud center, so that the edge management module determines that edge domain nodes A, B, and C exist. Assume that the user determines that terminal device D is closer to edge domain node A, or that the computing power of edge domain node A is stronger. When the user adds terminal device D in the edge management module of the cloud center and performs the operation of binding terminal device D to edge domain node A, as shown in fig. 5, the process of the cloud center managing terminal device D may include:
501. and the cloud center sends the basic information of the terminal device D to the edge domain node A. Namely, the edge domain node a receives the basic information of the terminal device D sent by the cloud center.
An edge management module in the cloud center may send basic information of the terminal device D to an edge proxy in the edge domain node a.
Illustratively, the basic information may include the Internet Protocol (IP) address and port of the terminal device D, a user name, a user password, a device name, the protocol type used for communication, and the like.
The user adds the terminal device D in the edge management module of the cloud center, which can be understood as adding the basic information of the terminal device D in the edge management module.
502. The edge domain node a determines whether the edge domain node a has a capability of accessing and managing the device type of the terminal device D, if it is determined that the capability exists, step 508 is performed, and if it is determined that the capability does not exist, step 503 is performed.
After receiving the basic information of the terminal device D, the edge agent of the edge domain node A may synchronize the basic information to the Internet of Things edge service module of the edge domain node A, and the Internet of Things edge service module may identify, according to the basic information, whether the edge domain node A has the capability to access and manage the device type of the terminal device D. For example, when the Internet of Things edge service module determines that the device access framework includes the device access driver of the terminal device D and that the device management framework includes the device model of the terminal device D, it determines that the edge domain node A supports accessing and managing the terminal device D.
Illustratively, the terminal device D is a camera, and the edge domain node a may determine whether there is a capability to access and manage camera type devices.
503. And the edge domain node A requests the cloud center to download the equipment model and the equipment access drive of the terminal equipment D. Namely, the cloud center receives a request for downloading the device model of the terminal device D and a request for accessing the device driver, which are sent by the edge domain node a.
When the Internet of Things edge service module in the edge domain node A determines that accessing and managing the terminal device D is not supported, the Internet of Things edge service module may instruct the edge agent to send a request message to the edge management module of the cloud center, where the request message is used to request downloading of the device model and the device access driver of the terminal device D.
Wherein the request message may include basic information of the terminal device D.
504. The cloud center determines whether the device model of the terminal device D exists locally, if so, step 505 is executed, and if not, a response without a matching model is returned to the edge domain node a.
When the edge management module in the cloud center receives the request message, whether the equipment model corresponding to the basic information exists in the equipment model warehouse of the cloud center can be determined according to the basic information.
When the device model corresponding to the basic information does not exist in the equipment model warehouse, the edge service module of the cloud center, which has a problem-feedback function, can be used for problem feedback. For example, the edge service module may submit the basic information of the terminal device D and a device model definition application for the terminal device D to another server capable of handling problem feedback, so as to obtain the device model of the terminal device D for use when a node subsequently requests to obtain the device model.
505. The cloud center determines whether a device access driver of the terminal device D exists locally, if so, step 506 is executed, and if not, a response of no matching driver is returned to the edge domain node a.
The edge management module of the cloud center may determine whether a device access driver corresponding to the basic information exists in a device access driver repository in the cloud center according to the basic information of the terminal device D.
When the device access driver corresponding to the basic information does not exist in the equipment access drive warehouse, the edge service module of the cloud center can likewise obtain the device access driver of the terminal device D through problem feedback, for use when a node subsequently requests to obtain the device access driver.
506. And the cloud center sends the download link of the equipment model of the terminal equipment D and the download link of the equipment access drive to the edge domain node A. Namely, the node a receives the download link of the device model of the terminal device D and the download link of the device access driver sent by the cloud center.
In some embodiments, the edge management module of the cloud center may also directly obtain the device model of the terminal device D from the device model repository and send the device model to the edge agent of the edge domain node a, and directly obtain the device access driver of the terminal device D from the device access driver repository and send the device access driver to the edge agent of the edge domain node a.
The reason why, in step 506, the cloud center sends the download links of the device model and the device access driver to the edge domain node A is that the storage space of the cloud center may be limited, and that interaction blocking or data loss could easily occur if the cloud center directly obtained the device model from the equipment model warehouse and the device access driver from the equipment access drive warehouse and returned them to the edge domain node A. Therefore, the equipment model warehouse of the cloud center may store only a link to the device model of the terminal device D, and the equipment access drive warehouse may store only a link to the device access driver of the terminal device D.
507. And the edge domain node A downloads the equipment model and the equipment access drive according to the download link of the equipment model and the download link of the equipment access drive, and loads the equipment model and the equipment access drive.
When the edge agent of the edge domain node A succeeds in downloading the device model of the terminal device D through the download link of the device model, the edge agent can perform the loading of the device model in the edge domain node A; and when it succeeds in downloading the device access driver of the terminal device D through the download link of the device access driver, it can perform the loading of the device access driver in the edge domain node A.
For example, the edge proxy may instruct the device management framework to load the device model to generate the device management form of the terminal device D, where the device management form may include add/delete items for the terminal device D, configuration items, event monitoring items, operation event items, operation attribute items, and the like. The edge proxy may also instruct the device access framework to complete the installation and deployment of the device access driver of the terminal device D on the edge domain node A side.
In addition, the user may also complete device information in the device management form of the terminal device D, for example, basic information, connection information (for example, an IP address, a port, a user name, a password, and a supported protocol type of the terminal device D), tag information, location information, and the like of the terminal device D may be added to the device management form.
508. The edge domain node A acquires the device connection information between the edge domain node A and the terminal device D, and executes the process of accessing the terminal device D to the edge domain node A.
The device access driver in the edge domain node A may obtain the device connection information of the terminal device D from the device management form and communicate with the terminal device D according to that information, so that the terminal device D accesses the edge domain node A.
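The exchange of steps 501-508 can be sketched as below. All class names, methods, and link strings are hypothetical; they only mirror the message flow described above (capability check, model/driver lookup, download-link response, load, access), not a real API.

```python
# Illustrative sketch of the cloud-edge provisioning flow of steps 501-508.

class CloudCenter:
    def __init__(self, models, drivers):
        self.models = models      # device type -> device model download link
        self.drivers = drivers    # device type -> access driver download link

    def resolve(self, device_type):
        """Steps 504-506: return download links, or None if no match."""
        if device_type not in self.models:
            return None           # "no matching model" response
        if device_type not in self.drivers:
            return None           # "no matching driver" response
        return self.models[device_type], self.drivers[device_type]

class EdgeNode:
    def __init__(self, name, cloud):
        self.name, self.cloud = name, cloud
        self.loaded = set()       # device types this node can access and manage

    def bind_device(self, device_type):
        """Steps 502-508: check local capability, otherwise fetch from cloud."""
        if device_type not in self.loaded:           # step 502
            links = self.cloud.resolve(device_type)  # steps 503-506
            if links is None:
                return False                         # no model/driver available
            self.loaded.add(device_type)             # step 507: download + load
        return True                                  # step 508: device connects

cloud = CloudCenter(models={"camera": "model://cam"},
                    drivers={"camera": "drv://cam"})
node_a = EdgeNode("A", cloud)
print(node_a.bind_device("camera"))   # True
print(node_a.bind_device("sensor"))   # False: no matching model
```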
Scene 2: assume that the user determines that the terminal device D is closer to the edge domain node B, or that the computing power of the edge domain node B is stronger. In this case, the user may also add the terminal device D in the Internet of Things edge service module of the edge domain node B, which binds the terminal device D to the edge domain node B. As shown in fig. 6, the process of the edge domain node B managing the terminal device D may include:
601. the user adds basic information of the terminal device D in the edge domain node B.
For example, the user may add basic information of the terminal device D in the edge proxy of the edge domain node B. The content included in the basic information can be seen in step 501.
602. The edge domain node B determines whether the edge domain node B has a capability to access and manage the device type of the terminal device D, and if it is determined that the capability exists, performs step 608, and if it is determined that the capability does not exist, performs step 603.
The manner in which the edge domain node B performs step 602 can be seen in step 502.
603. And the edge domain node B requests the cloud center to download the equipment model and the equipment access drive of the terminal equipment D. Namely, the cloud center receives a request of the edge domain node B for downloading the device model of the terminal device D and a request of the device access driver.
The manner in which the edge domain node B performs step 603 can be seen in step 503.
604. The cloud center determines whether the device model of the terminal device D exists locally, if yes, step 605 is executed, and if no, a response without a matching model is returned to the edge domain node B.
The manner in which the cloud center performs step 604 may be seen in step 504.
605. The cloud center determines whether a device access driver of the terminal device D exists locally, if so, step 606 is executed, and if not, a response of no matching driver is returned to the edge domain node B.
The manner in which the cloud center performs step 605 may be seen in step 505.
606. And the cloud center sends the download link of the equipment model of the terminal equipment D and the download link of the equipment access drive to the edge domain node B. That is, the edge domain node B receives the download link of the device model of the terminal device D and the download link of the device access driver sent by the cloud center.
The manner in which the cloud center performs step 606 may be seen in step 506.
607. And the edge domain node B downloads the equipment model and the equipment access drive according to the download link of the equipment model and the download link of the equipment access drive, and loads the equipment model and the equipment access drive.
The manner in which the edge domain node B performs step 607 can be seen in step 507.
608. The edge domain node B acquires the device connection information between the edge domain node B and the terminal device D, and executes the process of accessing the terminal device D to the edge domain node B.
The manner in which the edge domain node B performs step 608 can be seen in step 508.
It can be seen that in scene 1, the basic information of a terminal device to be accessed to an edge domain node can be added at the cloud center; the cloud center feeds back the device model resource and the device access driver resource to the edge domain node; and after the edge domain node finishes loading and running these resources, it can execute the process of accessing the terminal device, thereby realizing management of terminal device access at the cloud center. In scene 2, the basic information of a terminal device to be accessed to an edge domain node can be added at that node; the edge domain node requests the device model resource and the device access driver resource from the cloud center; and after loading and running them, the edge domain node can execute the process of accessing the terminal device, thereby realizing management of terminal device access at the edge domain node.
Similar to the scene of adding and binding a terminal device, the user may also, in the edge application repository of the cloud center, configure an application for an edge domain node and bind the application to that node. After the edge domain node receives the binding message, it can download the application from the edge application repository of the cloud center through the node agent; the node agent performs the installation and deployment of the application on the edge domain node side, and the deployed edge application runs in the edge domain node.
For scene 1 and scene 2, while the Internet of Things edge service module in the edge domain node A or the edge domain node B is running, once the edge domain node A or the edge domain node B needs to be completely unbound from the terminal device D, the node agent in that node may unload the device access driver corresponding to the terminal device D, thereby reducing the resource occupation of the edge domain node A or the edge domain node B.
For example, the node agent may scan the device access framework of the node (the edge domain node A or the edge domain node B) to determine whether the device access driver of the terminal device D is still communicating with the terminal device D; if it determines that the communication connection has been disconnected, the node agent may unload the device access driver of the terminal device D.
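A minimal sketch of that unload check, with hypothetical data structures standing in for the node agent and the device access framework:

```python
# The node agent scans the loaded device access drivers and unloads any whose
# device connection has been dropped, releasing the node's resources.

def unload_disconnected_drivers(drivers, is_connected):
    """`drivers` maps a device id to its loaded access driver; `is_connected`
    reports whether the node still has a live connection to that device.
    Returns the list of unloaded device ids."""
    unloaded = [dev for dev in list(drivers) if not is_connected(dev)]
    for dev in unloaded:
        del drivers[dev]          # unload: release the driver's resources
    return unloaded

drivers = {"D": "driver-D", "E": "driver-E"}
live = {"E"}                      # only device E is still connected
print(unload_disconnected_drivers(drivers, lambda dev: dev in live))  # ['D']
print(sorted(drivers))            # ['E']
```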
The cloud edge cooperative equipment access management mechanism can enable the use of the edge domain node resources in the edge domain to be more reasonable, namely the resources are used when needed and released in time when not needed.
In some embodiments, in scene 1, that is, when the user adds the terminal device D in the edge management module of the cloud center, if any edge domain node in the edge domain supports access of the terminal device D, the cloud center may also make a recommendation to the user, or even automatically determine for the user the edge domain node to which the terminal device D will be bound.
For example, the recommendation or decision made by the edge management module in the cloud center when selecting an edge domain node may depend mainly on the following three indexes:
1) The number of effective terminal devices accessed by the edge domain node in real time.
In some embodiments, after each terminal device accesses an edge domain node, the edge domain node may notify the edge management module of the cloud center. Meanwhile, the edge domain node can also report to the edge management module the number of online terminal devices N_onlineDevice and the number of non-online terminal devices N_otherDevice. In this way, the edge management module of the cloud center can determine, from the number of online terminal devices N_onlineDevice and the number of non-online terminal devices N_otherDevice of each edge domain node, the number of effective terminal devices N_device currently accessing the edge domain node.
For example, for each edge domain node, its number of effective terminal device accesses N_device can be expressed as:
N_device = N_onlineDevice × α_online + N_otherDevice × α_other
wherein α_online + α_other = 1, α_online represents the weight coefficient of the number of online terminal devices N_onlineDevice, and α_other represents the weight coefficient of the number of non-online terminal devices N_otherDevice. Here α_online is the larger weight value and α_other is the smaller weight value.
It can be understood that the smaller the number of effective terminal device accesses N_device, the greater the probability that the corresponding edge domain node is selected as the recommended edge domain node.
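Index 1) can be sketched as follows. The values 0.8/0.2 for α_online/α_other are invented; the text requires only that the two weights sum to 1 with α_online the larger.

```python
# Hypothetical sketch of the effective terminal device access count N_device.

def effective_device_count(n_online, n_other, a_online=0.8, a_other=0.2):
    """N_device = N_onlineDevice*a_online + N_otherDevice*a_other."""
    assert abs(a_online + a_other - 1.0) < 1e-9   # weights must sum to 1
    return n_online * a_online + n_other * a_other

# A smaller weighted count makes the node a stronger recommendation candidate.
print(effective_device_count(10, 5))   # 9.0
```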
2) The hardware resource usage rate of the edge domain node.
In some embodiments, the edge proxy in an edge domain node may report to the edge management module of the cloud center information about the hardware resource usage of the edge domain node. The information about hardware resource usage may include indexes such as the currently occupied bandwidth, CPU, memory, and hard disk of the edge domain node. The edge management module can calculate the hardware resource usage rate of the node from this information.
For example, for each edge domain node, its hardware resource usage rate V_server can be expressed as:
V_server = f(V_cpu × α_cpu + V_ram × α_ram + V_disk × α_disk + V_net × α_net)
wherein α_cpu + α_ram + α_disk + α_net = 1; V_cpu represents the CPU usage of the node, and α_cpu represents the weight coefficient of V_cpu; V_ram represents the remaining memory resources of the node, and α_ram represents the weight coefficient of V_ram; V_disk represents the remaining disk capacity of the node, and α_disk represents the weight coefficient of V_disk; V_net represents the bandwidth usage of the node, and α_net represents the weight coefficient of V_net.
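Index 2) has the same shape. The α weights below are invented (constrained only to sum to 1) and f() is again taken as the identity:

```python
# Hypothetical sketch of the hardware resource score V_server, blending the
# four hardware indexes (CPU, memory, disk, network) with fixed weights.

def hardware_usage(v_cpu, v_ram, v_disk, v_net,
                   a_cpu=0.4, a_ram=0.3, a_disk=0.1, a_net=0.2):
    assert abs(a_cpu + a_ram + a_disk + a_net - 1.0) < 1e-9
    return v_cpu * a_cpu + v_ram * a_ram + v_disk * a_disk + v_net * a_net

print(round(hardware_usage(0.5, 0.4, 0.2, 0.1), 2))   # 0.36
```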
3) Network distance of edge domain node and terminal equipment to be accessed.
The network distance measurement may reflect the relative distance between the edge domain node and the terminal device through the round-trip delay of the Transmission Control Protocol (TCP) three-way handshake performed when the edge domain node connects to the terminal device. It can be understood that a small relative distance indicates that the network communication quality between the edge domain node and the terminal device is good and stable.
For example, the network distance D_net between the edge domain node and the terminal device to be accessed can be expressed as:
D_net = (1/n) × Σ(i = 1..n) RTT_i
wherein RTT_i denotes the i-th round-trip delay, and n is 3.
It can be understood that the smaller the value of the network distance, the better and more stable the communication quality between the edge domain node and the terminal device.
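Index 3) reduces to averaging the three measured round-trip delays. The RTT values below are made up; a real edge proxy would time actual TCP connection handshakes.

```python
# Hypothetical sketch of the network distance: the mean of n = 3 round-trip
# delays measured when the edge domain node connects to the terminal device.

def network_distance(rtts_ms):
    """Smaller average RTT means a closer, more stable link."""
    return sum(rtts_ms) / len(rtts_ms)

print(network_distance([12.0, 15.0, 9.0]))   # 12.0
```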
In some embodiments, the edge management module in the cloud center may comprehensively calculate a ranking of the edge domain nodes governed by the cloud center from the values of the above three indexes, so as to determine the recommended edge domain node according to the ranking.
The algorithm used for the comprehensive calculation may take various forms, such as common methods like normalization, weighted averaging, and the least square method, or more complex methods such as Kalman filtering and extended Kalman filtering.
Therefore, when the cloud center determines the edge domain nodes recommended when the terminal equipment is accessed to the nodes by using the three indexes, the resources of each edge domain node can be more reasonably utilized, and the communication efficiency after the terminal equipment is accessed to the edge domain nodes is improved.
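One way the three indexes could be combined, using min-max normalization followed by a weighted average (two of the simple schemes the text names). The weights and sample values are invented, and all three indexes are oriented so that lower is better.

```python
# Hypothetical sketch of the combined ranking over the three per-node indexes.

def normalize(values):
    """Min-max normalize a list of raw metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def recommend(nodes, weights=(0.4, 0.4, 0.2)):
    """nodes: name -> (N_device, V_server, network_distance); lower is better."""
    names = list(nodes)
    cols = list(zip(*nodes.values()))                # one tuple per index
    norm = [normalize(col) for col in cols]
    score = {n: sum(w * m[i] for w, m in zip(weights, norm))
             for i, n in enumerate(names)}
    return min(score, key=score.get)                 # lowest combined load

nodes = {"A": (9.0, 0.8, 20.0), "B": (3.0, 0.3, 12.0), "C": (6.0, 0.6, 30.0)}
print(recommend(nodes))   # B
```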
It is to be understood that, in order to implement the above functions, the cloud center or the edge domain node includes corresponding hardware and/or software modules for performing the respective functions. In combination with the steps of the examples described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, functional modules may be divided for the cloud center or the edge domain node according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 7 shows a schematic diagram of a possible composition of the server in the cloud center involved in the above embodiment, and as shown in fig. 7, the server 70 may include: a determining unit 701, an analyzing unit 702, a transmitting unit 703 and a receiving unit 704.
The determining unit 701 may be configured to support the server 70 in performing step 203, step 205, step 208, step 209, step 504, step 505, step 604, step 605, etc. described above, and/or other processes for the techniques described herein.
Analysis unit 702 may be used to support server 70 performing steps 204, etc., described above, and/or other processes for the techniques described herein.
The sending unit 703 may be used to support the server 70 performing the above-described steps 206, 210, 501, 506, 606, etc., and/or other processes for the techniques described herein.
Receiving unit 704 may be used to support server 70 performing steps 202, 503, etc. described above, and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The server 70 provided in this embodiment is configured to execute the above-mentioned scene-based algorithm configuration method, so that the same effect as that of the above-mentioned implementation method can be achieved.
Where an integrated unit is employed, the server 70 may include a processing module, a storage module, and a communication module. The processing module may be configured to control and manage the actions of the server 70, and for example, may be configured to support the server 70 in executing the steps executed by the determining unit 701 and the analyzing unit 702. The storage module may be used to support the server 70 in storing program code, data, and the like. The communication module, which may include the sending unit 703 and the receiving unit 704 described above, may be used to support communication of the server 70 with other devices, for example, with the edge domain nodes.
In the present application, the communication module and the processing module included in the server 70 may be located in one device or distributed across different devices. If they are distributed across different devices, the server 70 may be understood as a server cluster.
The processing module may be a processor or a controller, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination of computing components, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In an embodiment, when the processing module is a processor, the storage module is a memory, and the communication module is a transceiver, the server 70 according to this embodiment may be a server 80 having a structure shown in fig. 8.
In the case of dividing the functional modules by corresponding functions, fig. 9 shows a schematic diagram of a possible composition of the edge domain node involved in the above embodiments. As shown in fig. 9, the edge domain node 90 may include: a receiving unit 901, a sending unit 902, an integrating unit 903, and a determining unit 904.
The receiving unit 901 may be used to support the edge domain node 90 in performing step 201, step 206, step 210, step 501, step 506, step 607, etc. described above, and/or other processes for the techniques described herein.
The sending unit 902 may be configured to support the edge domain node 90 to perform the above-described steps 202, 503, 603, etc., and/or other processes for the techniques described herein.
The integration unit 903 may be used to support the edge domain node 90 to perform the above-described steps 207, 211, 507, etc., and/or other processes for the techniques described herein.
The determining unit 904 may be used to support the edge domain node 90 to perform the above-described steps 502, 508, 602, 608, etc., and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The edge domain node 90 provided in this embodiment is configured to execute the above scene-based algorithm configuration method, so that the same effect as that of the above implementation method can be achieved.
Where an integrated unit is employed, the edge domain node 90 may include a processing module, a storage module, and a communication module. The processing module may be configured to control and manage the actions of the edge domain node 90, for example, may be configured to support the edge domain node 90 in executing the steps executed by the integrating unit 903 and the determining unit 904. The storage module may be used to support the edge domain node 90 in storing program code, data, and the like. The communication module, which may include the sending unit 902 and the receiving unit 901 described above, may be used to support the edge domain node 90 in communicating with other devices, for example, with the cloud center.
The processing module may be a processor or a controller, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination of computing components, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In an embodiment, when the processing module is a processor, the storage module is a memory, and the communication module is a transceiver, the edge domain node 90 according to this embodiment may be a server having the structure shown in fig. 8.
Embodiments of the present application further provide a computer storage medium storing computer instructions which, when run on an electronic device (such as the cloud center or an edge domain node), cause the electronic device to execute the above related method steps to implement the scene-based algorithm configuration method in the above embodiments.
Embodiments of the present application further provide a computer program product which, when running on a computer, causes the computer to execute the above related steps to implement the scene-based algorithm configuration method executed by the electronic device (cloud center or edge domain node) in the above embodiments.
Another embodiment of the present application provides a system, which may include the above cloud center, at least one edge domain node, and at least one terminal device, and may be configured to implement the scene-based algorithm configuration method.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A scene-based algorithm configuration method is characterized by comprising the following steps:
the cloud center receives sampling data sent by a first edge domain node;
the cloud center determines a plurality of algorithm packages corresponding to the data types of the sampling data, wherein the data types of the sampling data are used for reflecting scenes when the terminal equipment acquires the sampling data;
the cloud center executes each algorithm packet in the multiple algorithm packets on the sampling data to obtain an algorithm result corresponding to each algorithm packet;
and the cloud center determines a target algorithm package from the multiple algorithm packages according to the algorithm result corresponding to each algorithm package, wherein the target algorithm package is used for analyzing the data acquired by the terminal equipment.
2. The method of claim 1, wherein the cloud center executing each algorithm package of the plurality of algorithm packages on the sample data to obtain an algorithm result corresponding to each algorithm package comprises:
the cloud center determines a plurality of evaluation indexes in an algorithm evaluation system corresponding to the data type of the sampling data;
the cloud center performs analysis on the sampled data through each algorithm package of the plurality of algorithm packages with reference to the plurality of evaluation indexes, and obtains values of the plurality of evaluation indexes corresponding to each algorithm package;
the cloud center determines an algorithm evaluation value corresponding to each algorithm package according to the values of the evaluation indexes corresponding to each algorithm package, and takes the algorithm evaluation value corresponding to each algorithm package as an algorithm result corresponding to each algorithm package;
the cloud center determines a target algorithm package from the multiple algorithm packages according to the algorithm result corresponding to each algorithm package, and the method comprises the following steps:
and the cloud center determines the algorithm packet with the highest algorithm evaluation value in the multiple algorithm packets as the target algorithm packet according to the algorithm evaluation value corresponding to each algorithm packet.
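The selection described in claim 2 can be sketched in a few lines: the evaluation-index values of each algorithm package are reduced to one evaluation value, and the package with the highest value becomes the target. The index names and the weighted-sum reduction below are hypothetical assumptions; the claim does not fix a particular formula.

```python
# Hypothetical sketch of claim 2: reduce the evaluation-index values of
# each algorithm package to a single evaluation value (here a weighted
# sum -- the claim does not prescribe a formula), then pick the package
# with the highest evaluation value as the target algorithm package.

def evaluate(index_values, weights):
    # weighted sum over the evaluation indexes of one algorithm package
    return sum(weights[name] * value for name, value in index_values.items())

def select_target_package(results, weights):
    """results: {package_id: {index_name: value}}; returns the package_id
    whose evaluation value is highest."""
    scores = {pkg: evaluate(vals, weights) for pkg, vals in results.items()}
    return max(scores, key=scores.get)

weights = {"accuracy": 0.6, "latency_score": 0.4}   # hypothetical indexes
results = {
    "pkg_face_v1": {"accuracy": 0.90, "latency_score": 0.70},
    "pkg_face_v2": {"accuracy": 0.95, "latency_score": 0.60},
}
target = select_target_package(results, weights)    # "pkg_face_v1"
```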
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and the cloud center sends the target algorithm packet and an algorithm analysis task to the first edge domain node, wherein the algorithm analysis task is used for indicating the first edge domain node to analyze the data received from the terminal equipment through the target algorithm packet.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
the cloud center determines residual computing power respectively corresponding to a plurality of edge domain nodes managed by the cloud center, wherein the plurality of edge domain nodes comprise the first edge domain node;
the cloud center determines a second edge domain node according to the residual computing power respectively corresponding to the edge domain nodes, wherein the second edge domain node is the edge domain node with the highest residual computing power in the edge domain nodes;
and the cloud center sends the target algorithm packet and an algorithm analysis task to the second edge domain node, wherein the algorithm analysis task is used for indicating the second edge domain node to analyze the data, received from the first edge domain node, collected by the terminal equipment through the target algorithm packet.
5. The method of claim 4, wherein the cloud center determining the residual computation power corresponding to each of the plurality of edge domain nodes managed by the cloud center comprises:
and the cloud center determines the residual computing power corresponding to each edge domain node according to the hardware resource performance, the data access amount and the number of the accessed terminal devices corresponding to each edge domain node in the plurality of edge domain nodes.
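A minimal sketch of claim 5's residual-computing-power estimate follows. The capacity-minus-load model and its cost constants are assumptions for illustration only; the claim states only which three factors are considered.

```python
# Hypothetical sketch of claim 5: estimate each edge domain node's
# residual computing power from its hardware resource performance, data
# access amount, and number of accessed terminal devices, then pick the
# node with the highest residual power (the "second edge domain node").

def residual_power(hw_perf, data_access, n_terminals,
                   access_cost=0.5, terminal_cost=2.0):
    # assumed linear cost per unit of data access and per terminal; floor at 0
    return max(0.0, hw_perf - access_cost * data_access - terminal_cost * n_terminals)

def pick_second_edge_node(nodes):
    """nodes: {node_id: (hw_perf, data_access, n_terminals)}"""
    return max(nodes, key=lambda node_id: residual_power(*nodes[node_id]))

nodes = {
    "edge-A": (100, 60, 10),   # residual: 100 - 30 - 20 = 50
    "edge-B": (80, 20, 5),     # residual: 80 - 10 - 10 = 60
}
second_node = pick_second_edge_node(nodes)   # "edge-B"
```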
6. A scene-based algorithm configuration method is characterized by comprising the following steps:
the edge domain node sends sampling data to the cloud center;
the edge domain node receives a first target algorithm packet sent by the cloud center, wherein the first target algorithm packet is determined according to an algorithm result corresponding to each algorithm packet, the algorithm result being obtained by the cloud center executing each algorithm packet of multiple algorithm packets on the sampling data, and the multiple algorithm packets are determined by the cloud center according to the data type of the sampling data; the data type of the sampling data is used for reflecting a scene when the terminal equipment acquires the sampling data;
and the edge domain node loads the first target algorithm packet so as to analyze the data received from the terminal equipment through the first target algorithm packet.
7. The method of claim 6, wherein the edge domain node loading the first target algorithm package to analyze the data received from the end device with the first target algorithm package comprises:
the edge domain node determining whether the edge domain node has locally loaded the first target algorithm package;
if the edge domain node is determined not to be loaded with the first target algorithm package locally, the edge domain node loads the first target algorithm package so as to analyze the data received from the terminal equipment through the first target algorithm package.
8. The method of claim 6, further comprising:
the edge domain nodes receive a second target algorithm packet and an algorithm analysis task sent by the cloud center, the second target algorithm packet is determined by the cloud center based on sampling data sent by other edge domain nodes, and the algorithm analysis task comprises terminal equipment information to be analyzed;
the edge domain node acquires data acquired by the terminal equipment corresponding to the terminal equipment information to be analyzed from the other edge domain nodes;
and the edge domain node loads the second target algorithm packet so as to analyze the data received from other edge nodes through the second target algorithm packet.
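Claim 8 describes a delegated analysis: the node receives a package and a task naming terminal devices whose data was collected via another edge node, pulls that data, and analyzes it with the package. A sketch, with all function names and data shapes assumed:

```python
# Hypothetical sketch of claim 8: run a second target algorithm package
# on data fetched from another edge domain node, as directed by an
# analysis task that names the terminal devices to analyze.

def run_delegated_task(task, remote_nodes, algorithm):
    """task: {'source_node': node_id, 'devices': [device_id, ...]};
    remote_nodes: {node_id: {device_id: data}};
    algorithm: callable standing in for the second target algorithm package."""
    source = remote_nodes[task["source_node"]]
    return {device_id: algorithm(source[device_id])
            for device_id in task["devices"]}

remote_nodes = {"edge-1": {"cam-7": [1, 2, 3]}}
task = {"source_node": "edge-1", "devices": ["cam-7"]}
out = run_delegated_task(task, remote_nodes, algorithm=sum)   # {"cam-7": 6}
```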
9. A cloud center, comprising:
the communication module is used for receiving sampling data sent by a first edge domain node;
the processing module is used for determining a plurality of algorithm packages corresponding to the data types of the sampling data, and the data types of the sampling data are used for reflecting scenes when the terminal equipment acquires the sampling data;
the processing module is further configured to execute each algorithm packet in the multiple algorithm packets on the sample data to obtain an algorithm result corresponding to each algorithm packet;
the processing module is further configured to determine a target algorithm package from the multiple algorithm packages according to the algorithm result corresponding to each algorithm package, where the target algorithm package is used to analyze data acquired by the terminal device.
10. An edge domain node, the edge domain node comprising:
the communication module is used for sending the sampling data to the cloud center;
the communication module is further configured to receive a first target algorithm packet sent by the cloud center, wherein the first target algorithm packet is determined according to an algorithm result corresponding to each algorithm packet, the algorithm result being obtained by the cloud center executing each algorithm packet of multiple algorithm packets on the sampling data; the multiple algorithm packets are determined by the cloud center according to the data type of the sampling data; the data type of the sampling data is used for reflecting a scene when the terminal equipment acquires the sampling data;
and the processing module is used for loading the first target algorithm packet so as to analyze the data received from the terminal equipment through the first target algorithm packet.
11. An electronic device, comprising: a memory, a transceiver, and a processor; the memory, the transceiver, and the processor are coupled; the memory for storing computer program code, the computer program code comprising computer instructions; the transceiver is used for receiving data and transmitting data; the computer instructions, when executed by a processor, cause the electronic device to perform the method of any of claims 1-8.
12. A computer-readable storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-8.
CN202110867970.1A 2021-07-29 2021-07-29 Scene-based algorithm configuration method and device Active CN113596158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110867970.1A CN113596158B (en) 2021-07-29 2021-07-29 Scene-based algorithm configuration method and device

Publications (2)

Publication Number Publication Date
CN113596158A true CN113596158A (en) 2021-11-02
CN113596158B CN113596158B (en) 2024-11-01

Family

ID=78252166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110867970.1A Active CN113596158B (en) 2021-07-29 2021-07-29 Scene-based algorithm configuration method and device

Country Status (1)

Country Link
CN (1) CN113596158B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708643A (en) * 2022-06-02 2022-07-05 杭州智诺科技股份有限公司 Computing power improving method for edge video analysis device and edge video analysis device
CN115794201A (en) * 2022-11-03 2023-03-14 中国石油天然气集团有限公司 Method and device for determining drilling conditions, server and readable storage medium
CN116578323A (en) * 2023-07-14 2023-08-11 湖南睿图智能科技有限公司 Deep learning algorithm iteration method based on Yun Bian cooperation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131379A (en) * 2019-11-08 2020-05-08 西安电子科技大学 Distributed flow acquisition system and edge calculation method
CN111405241A (en) * 2020-02-21 2020-07-10 中国电子技术标准化研究院 Edge calculation method and system for video monitoring
CN111464650A (en) * 2020-04-03 2020-07-28 北京绪水互联科技有限公司 Data analysis method, equipment, system and storage medium
WO2021003692A1 (en) * 2019-07-10 2021-01-14 深圳市大疆创新科技有限公司 Algorithm configuration method, device, system, and movable platform
US20210096911A1 (en) * 2020-08-17 2021-04-01 Essence Information Technology Co., Ltd Fine granularity real-time supervision system based on edge computing



Also Published As

Publication number Publication date
CN113596158B (en) 2024-11-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant