CN113114950B - IoT camera control method and control system - Google Patents
- Publication number
- Publication number: CN113114950B (application CN202110588798.6A)
- Authority
- CN
- China
- Prior art keywords
- moving
- camera
- targets
- module
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/65—Control of camera operation in relation to power supply
- H04N23/651—Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
- Burglar Alarm Systems (AREA)
Abstract
The invention belongs to the technical field of camera control and provides an IoT camera control method and an IoT camera control system. The method comprises the following steps: acquiring a monitoring picture of a monitoring area; judging the number of moving targets in the monitoring picture; when there is one moving target, controlling a first camera to rotate to monitor it; when there are two moving targets, controlling the first camera to rotate to monitor one of them; and judging whether the other moving target is in an edge area of the first camera's monitoring range. The system comprises: a monitoring picture acquisition module, a moving target quantity judging module, a first control module, an edge area judging module, a second control module, a moving direction judging module, a third control module, a moving speed judging module and a fourth control module. The method and system achieve simultaneous tracking of two moving targets while reducing energy consumption.
Description
Technical Field
The invention relates to the technical field of camera control, in particular to an IoT camera control method and an IoT camera control system.
Background
An IoT camera, also called an Internet of Things camera, forms a complete ecosystem together with other Internet of Things devices; it serves as the eye of the whole system and performs video monitoring, video acquisition and target tracking.
During use, an IoT camera can track only one moving target. When two moving targets appear in the monitoring picture, another camera must be started to track the second target. Tracking with two cameras simultaneously guarantees the tracking effect, but the energy consumption is very high. A way to guarantee the tracking effect while reducing energy consumption is therefore needed.
Disclosure of Invention
Aiming at the above defects in the prior art, the IoT camera control method and control system provided by the invention achieve simultaneous tracking of two moving targets while reducing energy consumption.
To solve the above technical problems, the invention provides the following technical solution:
an IoT camera control method, comprising:
acquiring a monitoring picture in a monitoring area;
judging the number of moving targets in the monitoring picture;
under the condition that the number of the moving targets is one, controlling the first camera to rotate to monitor the moving targets;
under the condition that the number of the moving targets is two, controlling the first camera to rotate to monitor one of the moving targets;
judging whether the other moving target is in the edge area of the monitoring range of the first camera;
under the condition that the other moving target is in the edge area of the first camera's monitoring range, judging the moving directions of the two moving targets;
under the condition that the moving directions of the two moving targets are opposite, controlling an auxiliary camera to monitor the other moving target;
under the condition that the moving directions of the two moving targets are the same, judging the movement speeds of the two moving targets; and
under the condition that the movement speed of the moving target in the edge area of the first camera's monitoring range is less than that of the moving target already monitored by the first camera, controlling the auxiliary camera to monitor the moving target in the edge area of the first camera's monitoring range.
Further, the determining the number of moving objects in the monitoring picture includes:
intercepting a video image in a monitoring picture;
performing feature extraction on the video image through an HOG (histogram of oriented gradients) feature extraction algorithm to determine the number of the moving targets.
Further, the process of determining the moving directions of the two moving targets includes:
intercepting video images at the current moment and the previous moment in a monitoring picture;
respectively extracting the characteristics of the video images at the current moment and the previous moment through an HOG characteristic extraction algorithm to obtain the position coordinates of the two moving targets in the video images at the current moment and the video images at the previous moment;
and judging the motion direction of the current moment according to the change of the position coordinates of the corresponding moving target in the current moment video image and the previous moment video image.
Further, in judging the movement speeds of the two moving targets, the movement speed of each moving target is calculated according to the change of its position coordinates between the current-moment video image and the previous-moment video image.
The present invention also provides an IoT camera control system, including:
the monitoring picture acquisition module is used for acquiring a monitoring picture in a monitoring area;
the moving target quantity judging module is used for judging the quantity of the moving targets in the monitoring picture;
the first control module is used for controlling the first camera to rotate to monitor the moving target under the condition that the number of the moving targets is one;
the edge area judging module is used for judging whether another moving target is in the edge area of the monitoring range of the first camera;
the second control module is used for controlling the first camera to rotate to monitor one of the moving targets under the condition that the number of the moving targets is two;
the moving direction judging module is used for judging the moving directions of the two moving targets under the condition that the other moving target is in the edge area of the monitoring range of the first camera;
the third control module is used for controlling the auxiliary camera to monitor the other moving target under the condition that the moving directions of the two moving targets are opposite;
the motion speed judging module is used for judging the motion speeds of the two moving targets under the condition that the moving directions of the two moving targets are the same;
and the fourth control module is used for controlling the auxiliary camera to monitor the moving target in the edge area of the monitoring range of the first camera under the condition that the moving speed of the moving target in the edge area of the monitoring range of the first camera is less than the moving speed of the moving target which is monitored by the first camera.
Further, comprising:
the video image intercepting module is used for intercepting a video image in the monitoring picture;
the moving target quantity determining module is used for performing feature extraction on the video image through an HOG feature extraction algorithm to determine the quantity of moving targets.
further, it includes:
the current moment and previous moment video image intercepting module is used for intercepting the video images at the current moment and the previous moment in the monitoring picture;
the device comprises a current moment and previous moment video image position coordinate determination module, a HOG feature extraction algorithm and a motion vector calculation module, wherein the current moment and previous moment video image position coordinate determination module is used for respectively carrying out feature extraction on the current moment and previous moment video images through the HOG feature extraction algorithm to obtain position coordinates of two moving targets in the current moment video images and previous moment video images;
and the motion direction judging module is used for judging the motion direction of the current moment according to the change of the position coordinates of the corresponding moving target in the current moment video image and the previous moment video image.
Further, in judging the movement speeds of the two moving targets, the movement speed of each moving target is calculated according to the change of its position coordinates between the current-moment video image and the previous-moment video image.
According to the technical solution above, the invention has the following beneficial effect: when there are two moving targets in the monitoring picture and one of them is in the edge area of the first camera's monitoring range, the working state of the auxiliary camera is determined by judging the moving directions and movement speeds of the two targets, which achieves simultaneous tracking of two moving targets while reducing energy consumption.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings used in the detailed description or the prior art description will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
Referring to fig. 1, the IoT camera control method provided in this embodiment includes:
and acquiring a monitoring picture in the monitoring area.
And judging the number of the moving targets in the monitoring picture.
And under the condition that the number of the moving targets is one, controlling the first camera to rotate to monitor the moving targets.
And under the condition that the number of the moving targets is two, controlling the first camera to rotate so as to monitor one of the moving targets.
Whether the other moving target is in an edge area of the first camera's monitoring range is then judged. The edge area can be preset as follows: an outer edge line, a closed curve, is drawn along the outermost edge of the monitoring range; an inner edge line, also a closed curve, is drawn inside the monitoring range at an equal distance from the outer edge line; the area between the inner edge line and the outer edge line is the edge area. The distance between the two lines can be set according to actual needs.
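For a rectangular camera frame, the inner/outer edge-line construction above reduces to a band test along the frame border. The following is a minimal sketch under that rectangular assumption — the patent allows arbitrary closed curves, and the margin width is configurable:

```python
def in_edge_area(x: float, y: float, frame_w: int, frame_h: int, margin: float) -> bool:
    """True if (x, y) lies between the outer edge line (the frame border)
    and the inner edge line drawn `margin` pixels inside it."""
    inside_outer = 0 <= x < frame_w and 0 <= y < frame_h
    inside_inner = (margin <= x < frame_w - margin
                    and margin <= y < frame_h - margin)
    return inside_outer and not inside_inner
```

A point near the left border of a 640x480 frame with a 20-pixel margin would test positive, while a centered point would not.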
And under the condition that the other moving target is in the edge area of the monitoring range of the first camera, judging the moving directions of the two moving targets.
When the moving directions of the two moving targets are opposite, the auxiliary camera is controlled to monitor the other moving target. Opposite moving directions mean the two targets move farther and farther apart, so the first camera cannot monitor both.
When the moving directions of the two moving objects are the same, the moving speeds of the two moving objects are determined.
When the movement speed of the moving target in the edge area of the first camera's monitoring range is less than that of the moving target already monitored by the first camera, the auxiliary camera is controlled to monitor the target in the edge area. In this case the distance between the two moving targets grows larger and larger, and the first camera alone cannot monitor both. Throughout this process, the working state of the auxiliary camera is determined by judging the moving directions and movement speeds of the two moving targets, which achieves simultaneous tracking of two moving targets while reducing energy consumption.
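The overall decision logic of this process can be sketched as a single function. This is an illustrative reconstruction, not the claimed implementation: the `Target` type and its field names are assumptions, and the plain speed comparison stands in for the velocity-component refinement described later in this embodiment.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Target:
    position: tuple   # (x, y) in the current video image
    velocity: tuple   # (vx, vy) estimated from two consecutive frames
    in_edge_area: bool  # inside the edge area of the first camera's range

def auxiliary_camera_needed(tracked: Target, other: Target) -> bool:
    """Decide whether the auxiliary camera must be switched on.

    Mirrors the described steps: the auxiliary camera is activated only
    when the second target is in the edge area AND either (a) the two
    targets move in opposite directions, or (b) they move in the same
    direction but the edge-area target is slower than the tracked one.
    """
    if not other.in_edge_area:
        return False  # the first camera can still cover both targets
    dot = (tracked.velocity[0] * other.velocity[0]
           + tracked.velocity[1] * other.velocity[1])
    if dot < 0:       # included angle above 90 degrees: opposite directions
        return True
    return hypot(*other.velocity) < hypot(*tracked.velocity)
```

Keeping the auxiliary camera off in every other case is what yields the energy saving over running both cameras continuously.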
In this embodiment, the determining the number of moving objects in the monitoring screen includes:
and intercepting the video image in the monitoring picture.
Features are extracted from the video image through an HOG feature extraction algorithm to determine the number of moving targets. When there is one moving target, monitoring is performed by the first camera alone; when there are two, the working mode of the auxiliary camera is determined according to the displacement, moving direction and movement speed of the other moving target. This guarantees the monitoring range while saving energy compared with keeping both cameras in the working state at all times.
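As a rough illustration of the HOG step, the descriptor for a single image cell is an orientation histogram of gradient magnitudes. The sketch below is a simplification under stated assumptions: the patent does not specify cell size, bin count or the downstream detector, so block normalization and the actual target counting are omitted, and the 9 unsigned bins follow the common Dalal-Triggs default.

```python
import numpy as np

def hog_cell_histogram(cell: np.ndarray, n_bins: int = 9) -> np.ndarray:
    """Unsigned (0-180 degree) orientation histogram for one grayscale cell."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]   # central differences, x direction
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]   # central differences, y direction
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for m, a in zip(magnitude.ravel(), angle.ravel()):
        hist[int(a // bin_width) % n_bins] += m  # hard assignment, no interpolation
    return hist
```

A production system would more likely use a library implementation (for example OpenCV's `HOGDescriptor` or scikit-image's `feature.hog`) rather than this hand-rolled version.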
In this embodiment, the process of determining the moving directions of the two moving targets includes:
and intercepting video images at the current moment and the previous moment in the monitoring picture.
And respectively carrying out feature extraction on the video images at the current moment and the previous moment through an HOG feature extraction algorithm to obtain the position coordinates of the two moving targets in the video images at the current moment and the video images at the previous moment.
The motion direction at the current moment is judged according to the change of the position coordinates of each moving target between the previous-moment and current-moment video images: the position coordinate in the previous-moment video image is taken as the starting point and the position coordinate in the current-moment video image as the end point, forming a coordinate vector that represents the motion direction of that target. When the included angle between the coordinate vectors of the two moving targets is less than 90 degrees, their motion directions are judged to be the same.
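The same-direction criterion above (included angle between the two coordinate vectors below 90 degrees) is equivalent to the dot product of the vectors being positive, so the angle itself never has to be computed. A small sketch; the tuple representation of coordinates is an assumption:

```python
import math

def motion_vector(prev_pos: tuple, curr_pos: tuple) -> tuple:
    """Coordinate vector from the previous-moment position to the current one."""
    return (curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])

def same_direction(v1: tuple, v2: tuple) -> bool:
    """Included angle < 90 degrees  <=>  dot product > 0."""
    return v1[0] * v2[0] + v1[1] * v2[1] > 0

def included_angle_deg(v1: tuple, v2: tuple) -> float:
    """Explicit angle, for comparison against the 90-degree threshold."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```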
In this embodiment, the movement speed of each moving target is calculated according to the change of its position coordinates between the current-moment and previous-moment video images. Because a moving target may move in any direction, when determining whether the movement speed of the target in the edge area of the first camera's monitoring range is less than that of the target already monitored by the first camera, the component of the edge-area target's velocity along the motion direction of the already-monitored target is calculated first and then compared with the already-monitored target's movement speed.
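The component comparison described here amounts to a scalar projection of the edge-area target's velocity onto the direction of the already-monitored target's motion. A minimal sketch with assumed variable names:

```python
import math

def speed_component_along(v_edge: tuple, v_tracked: tuple) -> float:
    """Scalar projection of v_edge onto the direction of v_tracked."""
    n = math.hypot(*v_tracked)
    if n == 0:
        return 0.0
    return (v_edge[0] * v_tracked[0] + v_edge[1] * v_tracked[1]) / n

def edge_target_is_slower(v_edge: tuple, v_tracked: tuple) -> bool:
    """Compare the projected component with the tracked target's own speed."""
    return speed_component_along(v_edge, v_tracked) < math.hypot(*v_tracked)
```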
Referring to fig. 2, an IoT camera control system includes a monitoring image obtaining module, a moving target number determining module, a first control module, an edge area determining module, a second control module, a moving direction determining module, a third control module, a moving speed determining module, and a fourth control module.
The monitoring picture acquisition module is used for acquiring a monitoring picture in the monitoring area.
The moving target quantity judging module is used for judging the quantity of the moving targets in the monitoring picture.
The first control module is used for controlling the first camera to rotate to monitor the moving target under the condition that the number of the moving targets is one.
The edge area judging module is used for judging whether another moving target is located in an edge area of the first camera's monitoring range. The edge area can be preset as follows: an outer edge line, a closed curve, is drawn along the outermost edge of the monitoring range; an inner edge line is drawn inside the monitoring range at an equal distance from the outer edge line; the area between the inner edge line and the outer edge line is the edge area. The distance between the two lines can be set according to actual needs.
The second control module is used for controlling the first camera to rotate to monitor one of the moving targets under the condition that the number of the moving targets is two.
The moving direction judging module is used for judging the moving directions of the two moving targets under the condition that the other moving target is in the edge area of the monitoring range of the first camera.
The third control module is used for controlling the auxiliary camera to monitor the other moving target when the moving directions of the two moving targets are opposite. Opposite moving directions mean the distance between the two targets grows larger and larger, so the first camera cannot monitor both.
The motion speed judging module is used for judging the motion speeds of the two moving targets under the condition that the moving directions of the two moving targets are the same.
The fourth control module is used for controlling the auxiliary camera to monitor the moving target in the edge area of the first camera's monitoring range when that target's movement speed is less than the movement speed of the target already monitored by the first camera; in that case the distance between the two moving targets grows larger and larger, and the first camera alone cannot monitor both.
In the embodiment, the method comprises a video image intercepting module and a moving target number determining module.
The video image intercepting module is used for intercepting the video image in the monitoring picture.
The moving target number determining module is used for performing feature extraction on the video images through an HOG feature extraction algorithm to determine the number of moving targets. When there is one moving target, monitoring is performed by the first camera alone; when there are two, the working mode of the auxiliary camera is determined according to the displacement, moving direction and movement speed of the other moving target. This guarantees the monitoring range while saving energy compared with keeping both cameras in the working state at all times.
In this embodiment, the device comprises a current time and previous time video image capturing module, a current time and previous time video image position coordinate determining module and a motion direction judging module.
The current moment and previous moment video image intercepting module is used for intercepting the video images of the current moment and the previous moment in the monitoring picture.
The position coordinate determination module of the video images at the current moment and the previous moment is used for respectively carrying out feature extraction on the video images at the current moment and the previous moment through an HOG feature extraction algorithm to obtain the position coordinates of the two moving targets in the video images at the current moment and the video images at the previous moment.
The motion direction judging module is used for judging the motion direction at the current moment according to the change of the position coordinates of each moving target between the previous-moment and current-moment video images: the position coordinate in the previous-moment video image is taken as the starting point and the position coordinate in the current-moment video image as the end point, forming a coordinate vector that represents the motion direction of that target. When the included angle between the coordinate vectors of the two moving targets is less than 90 degrees, their motion directions are judged to be the same.
In this embodiment, the movement speed of each moving target is calculated according to the change of its position coordinates between the current-moment and previous-moment video images. Because a moving target may move in any direction, when determining whether the movement speed of the target in the edge area of the first camera's monitoring range is less than that of the target already monitored by the first camera, the component of the edge-area target's velocity along the motion direction of the already-monitored target is calculated first and then compared with the already-monitored target's movement speed.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being covered by the appended claims and their equivalents.
Claims (8)
1. An IoT camera control method, comprising:
acquiring a monitoring picture in a monitoring area;
judging the number of moving targets in the monitoring picture;
under the condition that the number of the moving targets is one, controlling the first camera to rotate to monitor the moving targets;
under the condition that the number of the moving targets is two, controlling a first camera to rotate to monitor one of the moving targets;
judging whether the other moving target is in the edge area of the monitoring range of the first camera;
under the condition that the other moving target is in the edge area of the first camera monitoring range, judging the moving directions of the two moving targets;
under the condition that the moving directions of the two moving targets are opposite, controlling the auxiliary camera to monitor the other moving target;
under the condition that the moving directions of the two moving targets are the same, judging the moving speeds of the two moving targets;
and under the condition that the movement speed of the moving target in the edge area of the monitoring range of the first camera is lower than the movement speed of the moving target which is monitored by the first camera, controlling the auxiliary camera to monitor the moving target in the edge area of the monitoring range of the first camera.
2. The IoT camera control method according to claim 1, wherein the determining the number of moving objects in the monitoring screen comprises:
intercepting a video image in a monitoring picture;
performing feature extraction on the video image through an HOG feature extraction algorithm to determine the number of the moving targets.
3. The IoT camera control method as claimed in claim 1, wherein the step of determining the moving direction of the two moving objects comprises:
intercepting video images at the current moment and the previous moment in a monitoring picture;
respectively extracting the characteristics of the video images at the current moment and the previous moment through an HOG characteristic extraction algorithm to obtain the position coordinates of the two moving targets in the video images at the current moment and the video images at the previous moment;
and judging the motion direction of the current moment according to the change of the position coordinates of the corresponding moving target in the current moment video image and the previous moment video image.
4. The IoT camera control method as claimed in claim 3, wherein the determining the moving speed of the two moving objects calculates the moving speed of the corresponding moving object according to the change of the position coordinates of the corresponding moving object in the current video image and the previous video image.
5. An IoT camera control system, comprising:
the monitoring picture acquisition module is used for acquiring a monitoring picture in a monitoring area;
the moving target quantity judging module is used for judging the quantity of the moving targets in the monitoring picture;
the first control module is used for controlling the first camera to rotate to monitor the moving target under the condition that the number of the moving targets is one;
the second control module is used for controlling the first camera to rotate to monitor one of the moving targets under the condition that the number of the moving targets is two;
the edge area judging module is used for judging whether another moving target is in the edge area of the monitoring range of the first camera;
the moving direction judging module is used for judging the moving directions of the two moving targets under the condition that the other moving target is in the edge area of the monitoring range of the first camera;
the third control module is used for controlling the auxiliary camera to monitor the other moving target under the condition that the moving directions of the two moving targets are opposite;
the motion speed judging module is used for judging the motion speeds of the two moving targets under the condition that the moving directions of the two moving targets are the same;
and the fourth control module is used for controlling the auxiliary camera to monitor the moving target in the edge area of the monitoring range of the first camera under the condition that the moving speed of the moving target in the edge area of the monitoring range of the first camera is less than the moving speed of the moving target which is monitored by the first camera.
6. The IoT camera control system as claimed in claim 5, comprising:
the video image intercepting module is used for intercepting a video image in the monitoring picture;
and the moving target quantity determining module is used for performing feature extraction on the video image through an HOG feature extraction algorithm to determine the quantity of the moving targets.
7. The IoT camera control system according to claim 5, comprising:
the current moment and previous moment video image intercepting module is used for intercepting the video images at the current moment and the previous moment in the monitoring picture;
the current moment and previous moment video image position coordinate determination module is used for respectively performing feature extraction on the current moment and previous moment video images through an HOG feature extraction algorithm to obtain the position coordinates of the two moving targets in the current moment video image and the previous moment video image;
and the motion direction judging module is used for judging the motion direction of the current moment according to the change of the position coordinates of the corresponding moving target in the current moment video image and the previous moment video image.
8. The IoT camera control system as claimed in claim 7, wherein the determining the moving speed of the two moving objects calculates the moving speed of the corresponding moving object according to the change of the position coordinates of the corresponding moving object in the current video image and the previous video image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110588798.6A CN113114950B (en) | 2021-05-28 | 2021-05-28 | IoT camera control method and control system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113114950A (en) | 2021-07-13
CN113114950B (en) | 2023-04-07
Family
ID=76723637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110588798.6A (CN113114950B) | IoT camera control method and control system | 2021-05-28 | 2021-05-28
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113114950B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4238042B2 (en) * | 2003-02-07 | 2009-03-11 | 住友大阪セメント株式会社 (Sumitomo Osaka Cement Co., Ltd.) | Monitoring device and monitoring method |
CN100531373C (en) * | 2007-06-05 | 2009-08-19 | 西安理工大学 | Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure |
CN110278413A (en) * | 2019-06-28 | 2019-09-24 | Oppo广东移动通信有限公司 | Image processing method, device, server and storage medium |
CN112104841B (en) * | 2020-11-05 | 2021-12-07 | 乐荣时代智能安防技术(深圳)有限公司 | Multi-camera intelligent monitoring method for monitoring moving target |
- 2021-05-28: CN application CN202110588798.6A filed; granted as patent CN113114950B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113114950A (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6469932B2 (en) | System and method for performing automatic zoom | |
JP4241742B2 (en) | Automatic tracking device and automatic tracking method | |
CN105407283B (en) | A kind of multiple target initiative recognition tracing and monitoring method | |
CN110796010B (en) | Video image stabilizing method combining optical flow method and Kalman filtering | |
CN108363946B (en) | Face tracking system and method based on unmanned aerial vehicle | |
US9565348B2 (en) | Automatic tracking apparatus | |
CN105741325B (en) | A kind of method and movable object tracking equipment of tracked mobile target | |
WO2020182176A1 (en) | Method and apparatus for controlling linkage between ball camera and gun camera, and medium | |
CN108177146A (en) | Control method, device and the computing device of robot head | |
CN111213159A (en) | Image processing method, device and system | |
CN110827321A (en) | Multi-camera cooperative active target tracking method based on three-dimensional information | |
CN106791353B (en) | The methods, devices and systems of auto-focusing | |
CN110602376B (en) | Snapshot method and device and camera | |
WO2020014864A1 (en) | Pose determination method and device, and computer readable storage medium | |
US20230114785A1 (en) | Device and method for predicted autofocus on an object | |
CN108900775A (en) | A kind of underwater robot realtime electronic image stabilizing method | |
CN113114950B (en) | IoT camera control method and control system | |
CN114594770B (en) | Inspection method for inspection robot without stopping | |
CN113610896B (en) | Method and system for measuring target advance quantity in simple fire control sighting device | |
CN111414012A (en) | Region retrieval and holder correction method for inspection robot | |
TW201838400A (en) | Moving target position tracking system having a main control unit for electrically connecting the orientation adjustment mechanism, the first image tracking module, and the second image tracking module to control the tracking of the target position | |
KR101070448B1 (en) | The method for tracking object and the apparatus thereof | |
Liu et al. | Video stabilization algorithm for tunnel robots based on improved Kalman filter | |
Qigui | Search on automatic target tracking based on PTZ system | |
CN110415273B (en) | Robot efficient motion tracking method and system based on visual saliency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||