
CN105847684A - Unmanned aerial vehicle - Google Patents

Unmanned aerial vehicle

Info

Publication number
CN105847684A
CN105847684A (application CN201610204540.0A)
Authority
CN
China
Prior art keywords
target
unmanned plane
rgbd
processor
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610204540.0A
Other languages
Chinese (zh)
Inventor
黄源浩 (Huang Yuanhao)
肖振中 (Xiao Zhenzhong)
许宏淮 (Xu Honghuai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201610204540.0A priority Critical patent/CN105847684A/en
Publication of CN105847684A publication Critical patent/CN105847684A/en
Pending legal-status Critical Current

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle comprising an RGBD camera, a flight controller, and a processor, the processor being connected to the RGBD camera and the flight controller. The RGBD camera obtains RGBD image information of a target in real time during flight, the RGBD image information including R (red), G (green), and B (blue) pixel information and corresponding depth information. The processor processes the R, G, B pixel information in real time to identify the target and obtains the real-time distance to the target from the depth information corresponding to the target. The flight controller adjusts the flight attitude of the unmanned aerial vehicle according to the real-time distance so that the RGBD camera can track and shoot the target. In this manner, the unmanned aerial vehicle can automatically track and shoot a target.

Description

Unmanned Aerial Vehicle
Technical Field
The present invention relates to the field of unmanned aerial vehicles (UAVs), and in particular to a UAV.
Background
With the development of microelectronics and computer vision technology, target tracking can now be realised in real time. In particular, mounting a target tracker on a UAV enables flexible dynamic tracking of a target, which has high practical value in both military and civilian fields.
The target-tracking technology of traditional UAVs generally uses active environment-sensing methods such as laser, radar, and ultrasound. Their drawback is that they cannot directly obtain unknown information about the target, and multiple UAVs interfere with one another during detection. A further drawback is poor concealment: in a battlefield environment the probability of being discovered by the enemy increases greatly.
Existing UAV development has generally been devoted to increasing endurance, raising speed, stealth, reducing airframe size, greater intelligence, carrying weapons, and enhancing transmission reliability and versatility, enabling a UAV to complete a predetermined combat mission according to instructions or a pre-loaded program. The camera on an existing UAV is usually a 2D camera shooting 2D images, in which each pixel contains only red (R), green (G), and blue (B) pixel information and no depth information D. Such a UAV therefore cannot automatically perform target-tracking shooting on the basis of the 2D images it captures.
Summary of the Invention
Embodiments of the present invention provide a UAV capable of automatically tracking and shooting a target.
The present invention provides a UAV comprising an RGBD camera, a flight controller, and a processor, the processor being connected to the RGBD camera and the flight controller. The RGBD camera obtains RGBD images of a target in real time during flight, each pixel in an RGBD image containing R, G, B pixel information and corresponding depth information. The processor processes the R, G, B pixel information in real time to identify the target and obtains the real-time distance to the target from the depth information corresponding to the target. The flight controller adjusts the flight attitude and/or shooting mode of the UAV according to the real-time distance so that the RGBD camera tracks and shoots the target.
Wherein, the RGBD camera is further used to capture different gestures input by the user; the processor generates corresponding control instructions according to the different gestures, and the flight controller selects a shooting mode according to the control instruction.
Wherein, the UAV further includes a voice acquisition module connected to the processor; the voice acquisition module obtains voice input by the user, and the processor generates control instructions from the voice.
Wherein, the target is a specific human body, and the processor detects the facial features of the human body from the R, G, B pixel information in order to lock onto that person.
Wherein, the processor uses the depth information to remove the background and extract the target.
Wherein, the processor obtains the real-time distance from the target to the RGBD camera from the R, G, B pixel information and the corresponding depth information.
Wherein, the processor identifies from the R, G, B pixel information and the corresponding depth information whether the target is a rigid body or a non-rigid body.
Wherein, the processor is further used to perform recognition processing in real time on the R, G, B pixel information and the corresponding depth information so as to lock onto the target.
Wherein, the target is a specific human body, and the processor detects the facial contour of the human body from the R, G, B pixel information and the corresponding depth information in order to lock onto that person.
Wherein, there are one or more targets, and the UAV analyses the dynamic behaviour trend of the one or more targets.
Wherein, the UAV further includes a wireless communication unit connected to the processor, for sending the video obtained by tracking shooting to a remote server, where the remote server may be a cloud server or a ground terminal server.
Wherein, the video obtained by tracking shooting includes 2D video and an RGBD image sequence; the data-sending module sends the 2D video and the RGBD image sequence to the remote server so that the remote server generates 3D video from the 2D video and the RGBD image sequence.
Wherein, the video obtained by tracking shooting includes 2D video and an RGBD image sequence; the processor further marks feature points on the target from the 2D video and the RGBD image sequence, placing feature points at the target's edges and key nodes to form a skeleton mesh of the target, generates 3D video from the skeleton mesh, and sends it to the remote server.
The beneficial effect of the above scheme is as follows: the RGBD camera obtains RGBD image information in real time during flight, the RGBD image information including R, G, B pixel information and corresponding depth information; the processor processes the R, G, B pixel information in real time to identify the target and obtains the real-time distance to the target from the corresponding depth information; and the flight controller adjusts the flight attitude and/or shooting mode of the UAV according to the real-time distance so that the RGBD camera tracks and shoots the target. The UAV can thereby automatically track and shoot a target.
Brief Description of the Drawings
To illustrate the technical schemes in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of the present invention;
Fig. 2a is a schematic structural diagram of the UAV of the second embodiment of the present invention;
Fig. 2b is a schematic structural diagram of the outline of the UAV in Fig. 2a;
Fig. 2c is a schematic structural diagram of the rotated RGBD camera of the UAV in Fig. 2a;
Fig. 3 is a schematic diagram of the UAV of an embodiment of the present invention tracking a target;
Fig. 4 is a schematic diagram of the UAV of an embodiment of the present invention performing 3D modelling of a target;
Fig. 5 is a schematic structural diagram of the UAV of the third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the UAV of the fourth embodiment of the present invention.
Detailed Description
The technical schemes in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of the present invention. As shown in Fig. 1, the unmanned aerial vehicle (UAV) 10 includes an RGBD camera 11, a flight controller 12, and a processor 13. The processor 13 is connected to the RGBD camera 11 and the flight controller 12. The RGBD camera 11 obtains RGBD images in real time during flight. Each pixel in an RGBD image contains R, G, B pixel information and corresponding depth information. The depth information of the pixels forms a two-dimensional pixel matrix of the scene, called a depth map for short. Each pixel corresponds to its position in the scene and has a pixel value representing the distance from some reference position to that scene location. In other words, the depth map has the form of an image, but its pixel values indicate the topography of the scene's objects rather than brightness and/or colour. The processor 13 processes the R, G, B pixel information in real time to identify the target and its features, and obtains the real-time distance to the target from the depth information corresponding to the target. The flight controller 12 adjusts the flight attitude and/or shooting mode of the UAV 10 according to the real-time distance so that the RGBD camera 11 tracks and shoots the target. Specifically, the flight controller 12 can receive instructions sent by control units such as a remote controller, voice, or gestures, and track and shoot the target accordingly. The flight attitudes of the UAV 10 include take-off, hover, pitch, roll, yaw, landing, and so on.
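To make the depth-map idea concrete, the following minimal sketch shows how a per-pixel depth map plus a target mask can yield one real-time distance figure. This is an illustration, not the patent's method: the function name, the NumPy representation, and the choice of the median as a robust summary are all our assumptions.

```python
import numpy as np

def target_distance(depth_map: np.ndarray, target_mask: np.ndarray) -> float:
    """Estimate the real-time distance to a target from a depth map.

    depth_map   -- HxW array of per-pixel distances in metres (the "D" channel)
    target_mask -- HxW boolean array marking pixels identified as the target
    Returns the median depth over the target's valid pixels, which is robust
    to stray noisy readings at the target's silhouette.
    """
    target_depths = depth_map[target_mask]
    valid = target_depths[target_depths > 0]  # 0 often encodes "no reading"
    if valid.size == 0:
        raise ValueError("no valid depth readings for target")
    return float(np.median(valid))
```

A flight controller would compare this value against the desired shooting distance each frame and adjust attitude accordingly.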
As shown in Fig. 2a, the UAV 20 may include at least two RGBD cameras 210, 211, and further includes a flight assembly 24 and a gimbal 25 (not shown). The RGBD cameras 210, 211 are arranged on the gimbal 25; the gimbal 25 measures the attitude changes of the carrier and reacts so as to stabilise the RGBD cameras 210, 211 mounted on it, making it easier for the RGBD cameras 210, 211 to track and shoot the target. A rotating rod 26 is arranged on the gimbal 25, and the RGBD cameras 210, 211 are arranged along the vertical direction of the rotating rod 26. The outline of the UAV 20 is shown in Fig. 2b; a circuit board is arranged inside the UAV 20, and the processor 23 is arranged on the circuit board. The flight assembly 24 may include rotors or fixed wings, ensuring normal flight of the UAV and a stable flight attitude during flight. Preferably, taking a quadrotor UAV as an example, the four propellers form a cross-shaped structure; opposite rotors have the same direction of rotation, and the rotors are divided into two groups whose directions of rotation differ. Unlike a traditional helicopter, a quadrotor can perform its various actions only by changing propeller speed. In the UAV 20, the RGBD cameras 210, 211 are arranged independently of each other, i.e. they shoot independently without affecting each other. Fig. 2c is a schematic structural diagram of the RGBD camera 211 of the UAV 20 rotated by 60 degrees. In embodiments of the present invention, the number of RGBD cameras of the UAV 20 is not limited to two; specifically, the rotating rod 26 may be extended and further RGBD cameras added along its length. Likewise, in other embodiments of the invention, at least two RGBD cameras may be independently arranged horizontally on the gimbal 25, e.g. multiple rotating rods may be arranged on the gimbal 25, each fitted with an RGBD camera.
In embodiments of the present invention, the processor 13 performs recognition processing on the R, G, B pixel information in real time so as to lock onto the target. Specifically, a colour-image segmentation method may be applied: the background image is segmented out using the background texture, and the background image is then subtracted from the original image to obtain the target image. Of course, in other embodiments of the present invention, other methods may also be applied to identify and then lock onto the target. Where the target is a specific human body, the processor 13 may detect the facial features of the human body from the R, G, B pixel information in order to lock onto that person. Tracking-shooting modes include one or any combination of front shooting, side shooting, back shooting, and top shooting.
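The "subtract the background image from the original image" step above can be sketched as follows. This is a deliberately crude illustration under our own assumptions (a pre-captured static background frame, a simple per-pixel colour-difference threshold); the patent does not fix these details.

```python
import numpy as np

def extract_target(frame: np.ndarray, background: np.ndarray,
                   diff_threshold: float = 30.0) -> np.ndarray:
    """Colour-based target extraction: pixels whose colour differs enough
    from the pre-captured background image are kept as target pixels.

    frame, background -- HxWx3 uint8 RGB images of the same scene
    Returns a copy of `frame` with background pixels zeroed out.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    target_mask = diff.sum(axis=2) > diff_threshold  # per-pixel colour change
    result = np.zeros_like(frame)
    result[target_mask] = frame[target_mask]
    return result
```

In practice a static background is rarely available from a moving UAV, which is why the description immediately below prefers depth-based segmentation.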
In embodiments of the present invention, the processor 13 identifies from the R, G, B pixel information and the corresponding depth information whether the target is a rigid body or a non-rigid body. Specifically, the depth information can be used to identify the target's contour and distinguish whether the contour is rigid or non-rigid, thereby recognising whether the target is a dynamic living being (such as a human body) or a non-rigid object. If it is a rigid body, the target is identified as an object, and it is determined whether the target moves actively. A rigid body here means an object whose three-dimensional structure does not change with motion; a non-rigid body is the opposite, its three-dimensional structure changing as it moves.
The processor 13 also uses the RGB colour information to perform feature recognition on the target, identifying the object's contour and colour information, extracting more of the target's features, and improving recognition accuracy. The recognition method is not limited to commonly used training-based methods such as machine learning and deep learning algorithms. For example, the RGB information can be used to perform skin-colour detection on a dynamic living target: if it matches human skin-colour features, the target is identified as human, otherwise as non-human. The processor 13 can also process information from sound, infrared, and other sensors to identify and detect the target and its features, improving accuracy.
If the target is recognised as a human body, the processor 13 identifies body parts such as the torso, limbs, hands, and face, and extracts information such as height, arm length, shoulder width, hand size, face size, and facial expression features. Because the human body is non-rigid, it cannot hold the same posture throughout a long period of tracking shooting, and non-rigid deformation easily occurs, so the model must be rebuilt to avoid non-rigid changes in the data. The processor 13 first removes the background from the depth image of the target shot by the RGBD camera 11. Since the depth values of background pixels are larger than those of the human body, the processor 13 can choose a suitable threshold: when a pixel's depth value exceeds this threshold, the pixel is marked as a background point and removed from the depth image, yielding human-body point-cloud data. The processor 13 then converts the point-cloud data into triangle-mesh data; for example, the four-neighbourhood of each pixel on the depth image can serve as the connection topology, and triangle-mesh data is generated from the point cloud according to this topology. The processor 13 further denoises the point-cloud data: large noise can be removed by averaging several frames of point-cloud data from each viewing angle, and small noise then removed with bilateral filtering. Finally, the processor 13 stitches the triangle-mesh data of the multiple viewing angles into a whole for model reconstruction. The processor 13 may use an iterative algorithm to reconstruct the three-dimensional human model. In the iterative algorithm, the corresponding points between the standard model and the collected data are first found, to serve as constraint points for the subsequent deformation. Then, with the constraint points as an energy term, an objective function is minimised so that the standard model deforms towards the scan data, finally yielding the parameters of the deformed standard model in human space. The computed human parameters are used in the next iteration, and the reconstruction of the three-dimensional human model is completed after several such iterations. Body parts such as the torso, limbs, hands, and face can then be identified, and information such as height, arm length, shoulder width, hand size, face size, and facial expression features extracted.
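The first two steps above (depth-threshold background removal, then turning the surviving pixels into a point cloud) can be sketched as follows. The pinhole back-projection and the parameter names are our assumptions; the patent only states that background pixels are farther than the body and are cut by a threshold.

```python
import numpy as np

def depth_to_body_points(depth_map: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float, max_depth: float) -> np.ndarray:
    """Remove background by a depth threshold (background lies farther than
    the body) and back-project the remaining pixels to a 3-D point cloud
    using a pinhole camera model with intrinsics fx, fy, cx, cy.

    Returns an (N, 3) array of body points in camera coordinates.
    """
    vs, us = np.nonzero((depth_map > 0) & (depth_map < max_depth))
    zs = depth_map[vs, us]              # kept depths, in metres
    xs = (us - cx) * zs / fx            # back-project columns
    ys = (vs - cy) * zs / fy            # back-project rows
    return np.column_stack([xs, ys, zs])
```

The resulting cloud is what the description then meshes (via the four-neighbourhood topology), denoises, and fits with the iterative template-deformation step.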
The RGBD camera 11 tracks the human target, and the motion trajectory of each body part, according to the human model reconstructed by the processor 13. The processor 13 can then analyse the target's posture and actions and, from the analysis of the target's posture, actions, behaviour pattern, and so on, extract identity information, distinguishing for example a child, an elderly person, or an adolescent.
If the target is recognised as an animal, the processor 13 can use RGBD recognition methods and RGBD image-sequence target-tracking methods similar to those for a human target to identify the target and extract its features, which will not be repeated here.
If the target is recognised as inanimate, the processor 13 uses the depth information D to identify the target's overall size. Specifically, the processor 13 can segment the depth map to find the target's contour. The processor 13 then uses the target's RGB information to perform object detection, identifying information such as its colour or a QR code.
In embodiments of the present invention, the processor 13 may further perform recognition processing in real time on the R, G, B pixel information and the corresponding depth information so as to lock onto the target. For example, where the target is a specific human body, the processor 13 detects the facial contour of the human body from the R, G, B pixel information and the corresponding depth information in order to lock onto that person. Specifically, the processor 13 obtains the 3D pose of the human head and from it the facial contour. The 3D pose of the head is a pose with six degrees of freedom. The processor 13 may include motion sensors, e.g. one or more of an accelerometer, a magnetometer, and/or a gyroscope. The RGBD camera obtains an RGBD image sequence of the head with depth information; a reference pose of the head is obtained from one frame in the RGBD image sequence and defines a reference coordinate system. Relative to the reference coordinate system, the depth information is used to determine the rotation matrix and translation vector associated with the head pose of the human body in multiple images. For example, the rotation matrix and translation vector can be determined in two dimensions by extracting feature points from images of the head; the depth information associated with the tracked feature points can then be used to determine the head's rotation matrix and translation vector in three dimensions. The extracted feature points can be arbitrary. The head's three-dimensional pose relative to the reference pose is determined from the rotation matrix and translation vector. For example, the image coordinates of the feature points and the corresponding depth information, together with the state comprising the rotation matrix and translation vector associated with the reference pose of the head and the current orientation and position, can be supplied to an extended Kalman filter. The extended Kalman filter can be used to determine an estimate of the rotation matrix and translation vector of each of the multiple images relative to the reference pose, from which the head's three-dimensional pose relative to the reference pose can be determined. The processor 13 obtains the facial contour from the obtained 3D pose of the head.
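The core geometric step above, recovering a rotation matrix and translation vector from tracked feature points whose 3-D positions are known from depth, can be illustrated with the classic Kabsch/SVD fit. This is only a minimal stand-in: the patent additionally smooths the estimate with an extended Kalman filter, which is omitted here.

```python
import numpy as np

def rigid_transform(ref_pts: np.ndarray, cur_pts: np.ndarray):
    """Estimate the rotation R and translation t mapping reference head
    feature points to their current 3-D positions (both sets recovered
    from image coordinates plus depth), via the Kabsch/SVD method.

    ref_pts, cur_pts -- (N, 3) corresponding point sets, N >= 3
    Returns (R, t) such that cur_pts[i] ~= R @ ref_pts[i] + t.
    """
    ref_c = ref_pts.mean(axis=0)
    cur_c = cur_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t
```

Feeding each frame's (R, t) as a measurement into a filter, as the text describes, then yields a smoothed six-degree-of-freedom head pose.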
The processor 13 obtains the real-time distance from the target to the RGBD camera from the R, G, B pixel information and the corresponding depth information. Specifically, the centre point of the human face can be computed from the R, G, B pixel information and the corresponding depth information, and the distance from that centre point to the UAV 10 taken as the real-time distance. The processor 13 may also compute the centroid of the human body from the R, G, B pixel information and the corresponding depth information and take the distance between the centroid and the UAV 10 as the real-time distance. Alternatively, the closest distance between the RGBD camera 11 and a certain body part of the target, such as the face or a palm, can be obtained directly from the R, G, B pixel information and the corresponding depth information, and this minimum distance taken as the real-time distance between the target and the UAV 10.
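The centroid variant above can be sketched as follows: back-project the target's pixels, take their 3-D centroid, and measure its distance to the camera origin. The pinhole intrinsics and function name are our assumptions for illustration.

```python
import numpy as np

def centroid_distance(depth_map: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float, body_mask: np.ndarray) -> float:
    """Back-project the target's pixels to 3-D camera coordinates, take
    their centroid, and return its Euclidean distance to the camera origin
    as the real-time target distance."""
    vs, us = np.nonzero(body_mask & (depth_map > 0))
    zs = depth_map[vs, us]
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    centroid = np.array([xs.mean(), ys.mean(), zs.mean()])
    return float(np.linalg.norm(centroid))
```

The face-centre and nearest-body-part variants differ only in which mask is supplied and whether the mean or the minimum of the depths is taken.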
In embodiments of the present invention, there may be one or more targets, and the UAV analyses the dynamic behaviour trend of the one or more targets. When the UAV 10 tracks and shoots multiple targets simultaneously, priorities may be set for the targets, with the highest-priority target tracked preferentially; alternatively, no priorities are set and the multiple targets are tracked simultaneously. Fig. 3 is a schematic diagram of the UAV tracking targets: Fig. 3a shows target 1 and target 2 being tracked; Fig. 3b, taking the tracking of target 1 as an example, shows the skeleton mesh of target 1 being obtained so that dynamic behaviour analysis can be performed; and from Fig. 3c it can be seen that target 1 is turning. The UAV 10 may track only target 1 in order to analyse its behaviour, or it may track both targets 1 and 2, as in Fig. 3a. Specifically, target 1 may be given a higher priority than target 2, or of course target 2 a higher priority than target 1; no restriction is imposed here. When priorities are set for multiple targets, the preset distance between the RGBD camera 11 and the targets is kept constant. If the multiple targets are all within the shooting field of view of the RGBD camera 11 at the same time, the flight controller 12 tracks and shoots them all simultaneously. If at some moment during tracking shooting the targets are too far apart for all of them to remain within the current shooting field of view of the RGBD camera 11 simultaneously, i.e. at least one target is no longer in the field of view, the flight controller 12 selects the highest-priority target to track and shoot. Taking the simultaneous tracking of an owner and a pet dog, with the owner as the higher-priority target, as an example: if during tracking shooting the pet dog runs off, so that the owner and the dog can no longer both be kept in the current shooting field of view of the RGBD camera 11, the flight controller 12 controls the RGBD camera 11 to track and shoot the owner and no longer tracks the dog.
When no priorities are set and multiple targets are tracked simultaneously, all targets must remain within the shooting field of view of the RGBD camera 11. If at some moment during tracking shooting the targets are too far apart for all of them to remain within the current field of view of the RGBD camera 11 simultaneously, the flight controller 12 can adjust the focal length of the RGBD camera 11, or adjust the distance between the RGBD camera 11 and the targets, e.g. increase the focal length or increase the distance, so that all targets lie within the adjusted shooting field of view of the RGBD camera 11. The distance between the RGBD camera 11 and a target must not exceed the tested flight distance of the UAV 10, the tested flight distance being the maximum distance over which the UAV 10 is guaranteed to fly without losing contact. Again taking the simultaneous tracking of an owner and a pet dog, this time without priorities, as an example: if during tracking shooting the pet dog runs off, so that the owner and the dog can no longer both be kept within the current shooting field of view of the RGBD camera 11, the flight controller 12 controls the RGBD camera 11 to increase its focal length, or controls the UAV 10 to fly farther away, so that the owner and the dog again lie within the adjusted field of view of the RGBD camera 11, and continues to track and shoot them both simultaneously.
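The priority fallback described in the two paragraphs above reduces to a small decision rule, sketched here. The dictionary shape, names, and the "larger number means higher priority" convention are illustrative assumptions.

```python
def select_targets(targets: dict, in_view: set) -> list:
    """Decide which targets to keep tracking.

    targets -- maps target name to priority (larger = more important)
    in_view -- names of targets currently inside the camera's field of view
    While every target is visible, all are tracked; as soon as any target
    leaves the field of view, fall back to the single highest-priority one.
    """
    if set(targets) <= in_view:
        return sorted(targets)              # everyone visible: track them all
    return [max(targets, key=targets.get)]  # otherwise: highest priority only
```

In the no-priority mode, the fallback branch would instead widen the field of view (zoom out or fly farther) rather than drop targets.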
In embodiments of the present invention, the flight controller 12 controls the RGBD camera 11 so that the continuous images of the target it captures constitute a video of the target. The video obtained by tracking shooting includes 2D video and an RGBD image sequence. The processor 13 marks feature points on the target from the 2D video and the RGBD image sequence. Specifically, the processor 13 can first identify the target's feature points from a single frame of RGBD image and then refine them from RGBD images taken at successive multiple angles. Taking a human body as the target, the feature points of the human body can be identified from one frame of RGBD image: the background is removed from the RGBD image to obtain the human silhouette; the centre point of the torso, the centroid of the head, and the turning points of the edges are marked as feature points; and, according to body proportions and stored human-body statistics, the joints of the elbows and legs are marked as feature points, giving the full set of human feature points shown in panel a of Fig. 4. The feature points are then refined from the RGBD images of successive multiple angles to obtain the skeleton mesh of the human body shown in panel b of Fig. 4. The processor 13 then generates 3D video from the skeleton mesh. The UAV 10 also includes a storage unit for storing the video, the target 3D model pre-processed by the processor 13, the 3D video, and so on.
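The "joints from body proportions" step above can be illustrated very roughly: place keypoints at fixed fractions of the silhouette's height. The fractions below are illustrative placeholders, not the patent's stored anthropometric data, and real systems refine them across views as the text describes.

```python
import numpy as np

def skeleton_keypoints(mask: np.ndarray) -> dict:
    """Rough skeleton-keypoint placement from a human silhouette mask,
    using fixed body-proportion fractions of the silhouette height.
    Returns {name: (row, col)} image coordinates on the body's centre line.
    """
    rows, cols = np.nonzero(mask)
    top, bottom = rows.min(), rows.max()
    centre_col = int(cols.mean())          # crude vertical centre line
    height = bottom - top
    frac = {"head": 0.08, "shoulders": 0.2, "torso_centre": 0.45,
            "knees": 0.75, "feet": 1.0}    # illustrative proportions
    return {name: (int(top + f * height), centre_col)
            for name, f in frac.items()}
```

Connecting such keypoints, and the edge turning points, yields the skeleton mesh of panel b in Fig. 4.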
The processor 13 generating 3D video from the skeleton mesh includes: the processor 13 can track and shoot the target to obtain the motion trajectory of the skeleton mesh. Meanwhile, the RGBD camera 11 obtains RGBD image sequences from multiple angles, which may include front, side, and back RGBD image sequences. Mesh reconstruction is performed on each frame of depth image in the multi-angle RGBD image sequences of the target, and the results are stitched together to form the target's three-dimensional model. The processor 13 can match the motion trajectory of the target's skeleton mesh with the three-dimensional model and, from the RGBD image sequences obtained by the RGBD camera 11, obtain the 3D video of the target shown in panel c of Fig. 4.
Specifically, at least two RGBD images for three-dimensional reconstruction need to be acquired by the RGBD camera 11, and the depth information and RGB pixel information of the three-dimensional scene to be built are obtained from these at least two RGBD images. The flight controller 12 continuously tracks the target as it moves relative to the RGBD camera 11 and determines the target's position relative to the RGBD camera 11. The images to be displayed in the three-dimensional scene are determined from this relative position so as to perform tracking shooting.
The multi-angle RGBD image sequences obtained by the RGBD camera 11 can come from multiple RGBD cameras 11 shooting the scene from different angles, or from a single RGBD camera 11 moved to different positions to shoot. With such shooting-based methods, a user can perform scene reconstruction from multiple captured images or video key frames. When a single RGBD camera 11 shoots, the camera's movement can be regarded as movement of the viewing angle: if the RGBD camera 11 moves horizontally during shooting, a larger scene can be captured; if it rotates around an object while shooting, different viewing angles of the same object can be captured.
The relative position includes, but is not limited to, direction, angle and distance. A change in relative position may arise because RGBD camera 11 itself moves, because the user actively moves the target so that the target and RGBD camera 11 shift relative to each other, because the camera stays still while the target moves, or because both move at once; these cases are not elaborated further here. Whatever the cause, the relative position between the two can be determined from the RGBD images captured by the RGBD camera.
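The direction, angle and distance of a tracked target can be derived from a single RGBD measurement by back-projecting the target's pixel through the pinhole camera model. The sketch below illustrates this; the intrinsic parameters (`cx`, `cy`, `fx`, `fy`) are made-up placeholder values, not values from this patent.

```python
import math

def relative_position(u, v, depth_m, cx=320.0, cy=240.0, fx=570.0, fy=570.0):
    """Back-project a target pixel (u, v) with a depth reading (metres)
    into camera coordinates and derive distance and bearing angles.
    Intrinsics are illustrative placeholders for a VGA depth sensor."""
    x = (u - cx) * depth_m / fx    # metres to the right of the optical axis
    y = (v - cy) * depth_m / fy    # metres below the optical axis
    z = depth_m                    # depth along the optical axis
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, z))     # horizontal angle to target
    elevation = math.degrees(math.atan2(-y, z))  # vertical angle to target
    return {"distance": distance, "azimuth": azimuth, "elevation": elevation}
```

A target centred in the frame at 2 m depth yields distance 2 m and zero bearing; a target displaced to the side yields a non-zero azimuth the flight controller could steer toward.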
Building the depth information and RGB pixel information of the three-dimensional scene from the at least two RGBD images proceeds as follows: using existing local or global algorithms, computation across the different RGBD images yields the depth information and RGB pixel information of the scene to be built. For example, a bundle adjustment algorithm can be used; once the depth of every pixel in the three-dimensional scene has been computed, the RGB pixel information and depth information of each scene pixel can be represented and recorded in RGBD form. Processor 13 then combines the scene structure with the relative position to generate a scene view for each relative position, forming an RGBD image sequence and hence the 3D video.
In an embodiment of the present invention, processor 13 may also directly match the skeleton-mesh motion trajectory of a first target to a preset three-dimensional model of a second target, and obtain a 3D video of the second target from the RGBD image sequence of the second target acquired by RGBD camera 11. The skeleton mesh of the first target may be pre-stored in processor 13, or processor 13 may first track and shoot the first target to obtain it. The three-dimensional model of the second target is built in the same way as the target model described above.
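Applying a first target's recorded motion to a second target's model amounts to transferring per-frame joint offsets onto the second model's rest pose. A minimal sketch of that retargeting idea, with joints matched by name (data layout and names are illustrative, not from the patent):

```python
def retarget_trajectory(trajectory, model_pose):
    """Apply a recorded skeleton trajectory (a list of per-frame dicts
    mapping joint name -> (x, y, z)) onto another model's rest pose.
    Offsets are taken relative to the trajectory's first frame."""
    base = trajectory[0]
    frames = []
    for frame in trajectory:
        posed = {}
        for joint, rest in model_pose.items():
            # displacement of this joint since the first captured frame
            dx = tuple(c - b for c, b in zip(frame.get(joint, base[joint]), base[joint]))
            posed[joint] = tuple(r + d for r, d in zip(rest, dx))
        frames.append(posed)
    return frames
```

The first output frame equals the second model's rest pose; later frames move its joints by the same displacements the first target performed.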
In an embodiment of the present invention, the storage capacity of the internal memory unit of UAV 10 is limited and cannot hold large volumes of data. Referring to Fig. 5, UAV 10 therefore also includes a wireless communication unit 14 connected to processor 13, which sends the tracked video to a remote server. The remote server processes the RGBD image sequences transmitted by wireless communication unit 14, handles high-definition RGBD data, and generates high-definition, high-resolution target 3D models, 3D videos, 3D animations and the like. The remote server may be a ground server or a cloud server. The tracked video includes a 2D video and an RGBD image sequence; if their data volume is too large, wireless communication unit 14 can send both to the remote server so that the server generates the 3D video from them. Large RGBD image sequences can thus be processed off-board, letting flight controller 12 keep tracking and shooting the target. Wireless communication unit 14 is also used to transmit the target 3D models, 3D videos and other data pre-processed by processor 13 to the remote server in real time.
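The on-board/off-board split described above is essentially a routing decision driven by clip size and link availability. A minimal sketch of such a policy, assuming an arbitrary 64 MiB on-board processing limit (the patent does not specify thresholds):

```python
def plan_upload(payload_bytes, onboard_free_bytes, link_up=True,
                max_onboard=64 * 2**20):
    """Decide where a tracked clip (2D video + RGBD sequence) is handled:
    keep small clips on board, stream oversized ones to the remote server
    so tracking can continue.  The 64 MiB threshold is an assumed value."""
    if payload_bytes <= max_onboard and payload_bytes <= onboard_free_bytes:
        return "process-onboard"
    if link_up:
        return "send-to-remote-server"
    return "drop-frames"   # degrade rather than stall the flight controller
```

The fallback branch reflects the design goal in the text: tracking shooting should not be blocked by storage limits.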
In an embodiment of the present invention, RGBD camera 11 is also used to capture different gestures input by the user; processor 13 produces the corresponding control instruction for each gesture, and flight controller 12 selects the shooting mode and the target according to the control instruction. The shooting modes include starting and stopping UAV 10, selecting the target type (which includes human bodies) and selecting the tracking-shooting manner. The gestures include a five-finger open/close gesture, comprising a five-finger open gesture and a five-finger close gesture. The user's gestures may also include, but are not limited to, grasping, naturally raising a hand, pushing forward, and waving up, down, left or right. Different gestures correspond to different control instructions; for example, naturally raising a hand starts UAV 10, while waving up, down, left or right adjusts the heading of UAV 10. These are not described in detail here.
The specific implementation is as follows:
RGBD camera 11 captures the gesture input by the user. Processor 13 obtains the skeleton grid data of the user's hand in real time from the input gesture, extracts the skeleton grid data relevant to operating UAV 10, and compares it with the skeleton grid data of operation-related gesture models pre-stored in UAV 10. If the obtained skeleton grid data lies within a preset threshold of a gesture model's grid data, processor 13 binds the obtained skeleton grid data to that pre-stored gesture model. Using the skeleton-grid coordinates of every frame in the gesture RGBD image sequence obtained by the RGBD camera, processor 13 makes the gesture model's motion coherent, so that the gesture model in the scene reproduces the motion of the gesture actually input by the user. Processor 13 then executes the action matched to the gesture model.
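The threshold comparison between a captured hand skeleton and pre-stored gesture models can be sketched as a nearest-model search with a rejection threshold. A minimal illustration, assuming skeletons are point lists and using a mean per-joint Euclidean error (the metric and threshold value are assumptions, not specified by the patent):

```python
def match_gesture(skeleton, gesture_models, threshold=0.05):
    """Compare a captured hand-skeleton grid (list of (x, y, z) points)
    with pre-stored gesture models and bind to the closest model whose
    mean joint error lies inside the preset threshold; else no match."""
    best_name, best_err = None, float("inf")
    for name, model in gesture_models.items():
        if len(model) != len(skeleton):
            continue  # incompatible skeleton layout
        err = sum(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for p, q in zip(skeleton, model)
        ) / len(skeleton)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= threshold else None
```

A skeleton close to the "open" model binds to it; an ambiguous skeleton outside the threshold binds to nothing, so no control instruction is issued.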
In an embodiment of the present invention, referring to Fig. 6, UAV 10 also includes a voice acquisition module 15 electrically connected to processor 13. Voice acquisition module 15 obtains the voice input by the user, processor 13 produces a control instruction according to the voice, and flight controller 12 selects the shooting mode and the target according to the control instruction.
Specifically, a remote control unit performs face recognition and voiceprint recognition. For face recognition, a face database pre-stores face information (for example, facial images detected by infrared signals, together with physiological features such as interpupillary distance and eye length); at acquisition time, the face data collected via infrared signals are compared with the data in the face database. If face recognition succeeds, the received voice is further checked to determine whether it carries voice-control authority and which authority it corresponds to, and speech recognition is performed. Based on the face-recognition result, the remote control unit further decides whether to accept the voice. Every person authorized to issue voice control commands uploads a segment of training speech, from which a voiceprint database is built. For voiceprint comparison, the speaker issues a voice instruction, which is compared against the voiceprint database. The identity information corresponding to the voiceprint and face information is looked up in the voiceprint and face databases, confirming the speaker's authority. The remote control unit then sends the voice instruction to voice acquisition module 15 of the UAV. Voice acquisition module 15 verifies the security of the voice instruction; once verification passes, processor 13 produces a control instruction according to the voice instruction and sends it to flight controller 12. Flight controller 12 looks up the operation time required by the received instruction according to its symbol, and appends that operation time after the voice instruction (which is in fact a code). Flight controller 12 then controls the flight attitude of UAV 10 according to the shooting mode selected by the control instruction, such as flight speed, flight altitude, flight path and the distance to surrounding obstacles.
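The authority check plus "symbol lookup, then append the operation time to the command code" step can be sketched as a small lookup pipeline. All identifiers, the voiceprint store and the timing table below are illustrative assumptions; the patent does not define the command format.

```python
AUTHORIZED_VOICEPRINTS = {"vp-001": "pilot-a"}        # illustrative identity store
OPERATION_TIME = {"TAKEOFF": 5.0, "TURN_LEFT": 1.5}   # seconds, assumed values

def build_command(voiceprint_id, symbol):
    """After the voiceprint/face checks pass, look up the operation time
    for the recognised instruction symbol and append it to the command
    code forwarded to the flight controller."""
    if voiceprint_id not in AUTHORIZED_VOICEPRINTS:
        raise PermissionError("voice lacks control authority")
    if symbol not in OPERATION_TIME:
        raise ValueError("unknown instruction symbol")
    return f"{symbol}:{OPERATION_TIME[symbol]}"
```

An unauthorized voiceprint is rejected before any instruction reaches the flight controller, mirroring the security verification described above.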
Preferably, during tracking shooting, flight controller 12 also adjusts the flight attitude of UAV 10 according to the R, G, B pixel information in the RGBD image sequence and the corresponding depth information, to prevent UAV 10 from colliding with the target or surrounding objects. Specifically, flight controller 12 determines, from the depth of each pixel in the RGBD image sequence, the distances between the target and surrounding objects and RGBD camera 11, selects the minimum of these distances, and adjusts the flight attitude of UAV 10 according to this minimum distance. For example, when flight controller 12 determines that the minimum distance is smaller than a preset distance, it controls UAV 10 to fly away from the target; and when it determines that the minimum distance is larger than the preset distance, it controls UAV 10 to fly toward the target.
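The minimum-distance rule above reduces to scanning the depth map for the nearest valid return and comparing it against the preset distance. A minimal sketch, assuming a depth map in metres with 0 marking invalid pixels and an arbitrary 1.5 m safety margin:

```python
def avoidance_action(depth_map, preset_m=1.5):
    """Scan a depth map (rows of per-pixel depths in metres; 0 marks an
    invalid reading), take the nearest valid return among the target and
    surroundings, and pick a flight adjustment.  The preset distance is
    an assumed safety margin, not a value from the patent."""
    valid = [d for row in depth_map for d in row if d > 0]
    if not valid:
        return "hold"              # no depth data: do not move blindly
    nearest = min(valid)
    if nearest < preset_m:
        return "fly-away-from-target"
    if nearest > preset_m:
        return "fly-toward-target"
    return "hold"
```

Running this per frame gives the behaviour described: back off when anything comes inside the margin, close in when everything is comfortably far.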
In an embodiment of the present invention, to obtain the minimum distance between the target and surrounding objects and RGBD camera 11, UAV 10 may carry a single RGBD camera 11 rotatably mounted on UAV 10 with a rotation range of 0-180 degrees, or two RGBD cameras 11, each rotatably mounted on UAV 10 with a corresponding rotation range of 0-90 degrees. RGBD camera 11 shoots while scanning rotationally, obtaining multi-angle RGBD image sequences of the objects around UAV 10, which allows flight controller 12 to obtain the minimum distance between the target and surrounding objects and RGBD camera 11. Of course, in other embodiments of the present invention, UAV 10 may also use other ranging methods to measure this minimum distance: for example, a laser rangefinder, an ultrasonic rangefinder or an infrared rangefinder measures the distances between surrounding objects and RGBD camera 11, these are compared with the distance between the target and RGBD camera 11, and the minimum is selected. Measuring distances with laser, ultrasonic or infrared rangefinders is the same as in the prior art and is not described in detail here.
In summary, in the present invention the RGBD camera acquires RGBD image information in real time during flight, the RGBD image information comprising R, G, B pixel information and corresponding depth information; the processor processes the R, G, B pixel information in real time to identify the target, and obtains the real-time distance to the target from the corresponding depth information; and the flight controller adjusts the flight attitude of the UAV in real time according to that distance so that the RGBD camera tracks and shoots the target. The UAV can thus derive the distance between the RGBD camera and the target from the RGBD image information alone, achieving automatic tracking shooting.
The foregoing is merely embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (13)

1. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle comprises an RGBD camera, a flight controller and a processor, the processor being connected to the RGBD camera and the flight controller, wherein:
the RGBD camera is configured to acquire RGBD images of a target in real time during flight, each pixel of the RGBD images comprising R, G, B pixel information and corresponding depth information;
the processor is configured to process the R, G, B pixel information in real time so as to identify the target, and to obtain the real-time distance to the target according to the depth information corresponding to the target;
the flight controller is configured to adjust the flight attitude and/or shooting mode of the unmanned aerial vehicle according to the real-time distance, so that the RGBD camera tracks and shoots the target.
2. The unmanned aerial vehicle according to claim 1, characterized in that the RGBD camera is further configured to capture different gestures input by a user, the processor produces corresponding control instructions according to the different gestures, and the flight controller selects the shooting mode according to the control instructions.
3. The unmanned aerial vehicle according to claim 1, characterized in that the unmanned aerial vehicle further comprises a voice acquisition module connected to the processor, the voice acquisition module being configured to obtain a voice input by a user, and the processor producing the control instruction according to the voice.
4. The unmanned aerial vehicle according to claim 2, characterized in that the target is a specific human body, and the processor detects facial features of the human body according to the R, G, B pixel information so as to lock onto the human body.
5. The unmanned aerial vehicle according to claim 1, characterized in that the processor removes the background using the depth information so as to extract the target.
6. The unmanned aerial vehicle according to claim 1, characterized in that the processor obtains the real-time distance between the target and the RGBD camera according to the R, G, B pixel information and the corresponding depth information.
7. The unmanned aerial vehicle according to claim 1, characterized in that the processor identifies, according to the R, G, B pixel information and the corresponding depth information, whether the target is a rigid body or a non-rigid body.
8. The unmanned aerial vehicle according to claim 1, characterized in that the processor is further configured to perform recognition processing on the R, G, B pixel information and the corresponding depth information in real time so as to lock onto the target.
9. The unmanned aerial vehicle according to claim 8, characterized in that the target is a specific human body, and the processor detects the face contour of the human body according to the R, G, B pixel information and the corresponding depth information so as to lock onto the human body.
10. The unmanned aerial vehicle according to claim 1, characterized in that there are one or more targets, and the unmanned aerial vehicle analyzes the dynamic behaviour trend of the one or more targets.
11. The unmanned aerial vehicle according to claim 1, characterized in that the unmanned aerial vehicle further comprises a wireless communication unit connected to the processor and configured to send the tracked video to a remote server, wherein the remote server may be a cloud server or a ground terminal server.
12. The unmanned aerial vehicle according to claim 11, characterized in that the tracked video comprises a 2D video and an RGBD image sequence, and the data transmission module sends the 2D video and the RGBD image sequence to the remote server so that the remote server generates a 3D video according to the 2D video and the RGBD image sequence.
13. The unmanned aerial vehicle according to claim 11, characterized in that the tracked video comprises a 2D video and an RGBD image sequence; the processor further performs feature-point marking on the target according to the 2D video and the RGBD image sequence, placing feature points at the edges and key nodes of the target so as to form a skeleton mesh of the target, and generates a 3D video according to the skeleton mesh and sends it to the remote server.
CN201610204540.0A 2016-03-31 2016-03-31 Unmanned aerial vehicle Pending CN105847684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610204540.0A CN105847684A (en) 2016-03-31 2016-03-31 Unmanned aerial vehicle


Publications (1)

Publication Number Publication Date
CN105847684A true CN105847684A (en) 2016-08-10

Family

ID=56597918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610204540.0A Pending CN105847684A (en) 2016-03-31 2016-03-31 Unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN105847684A (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106020227A (en) * 2016-08-12 2016-10-12 北京奇虎科技有限公司 Control method and device for unmanned aerial vehicle
CN106347550A (en) * 2016-09-05 2017-01-25 北京小米移动软件有限公司 Method and device for controlling balance car
CN106774947A (en) * 2017-02-08 2017-05-31 亿航智能设备(广州)有限公司 A kind of aircraft and its control method
CN106778474A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D human body recognition methods and equipment
CN106843275A (en) * 2017-04-01 2017-06-13 成都通甲优博科技有限责任公司 A kind of unmanned plane pinpoints method, device and the system of being diversion
CN106991413A (en) * 2017-05-04 2017-07-28 上海耐相智能科技有限公司 A kind of unmanned plane
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN107374638A (en) * 2017-07-07 2017-11-24 华南理工大学 A kind of height measuring system and method based on binocular vision module
CN107505951A (en) * 2017-08-29 2017-12-22 深圳市道通智能航空技术有限公司 A kind of method for tracking target, unmanned plane and computer-readable recording medium
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information
WO2018058264A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Video-based control method, device, and flying apparatus
WO2018058309A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Control method, control device, electronic device, and aerial vehicle control system
CN107894836A (en) * 2017-11-22 2018-04-10 河南大学 Remote sensing image processing and the man-machine interaction method of displaying based on gesture and speech recognition
CN108153325A (en) * 2017-11-13 2018-06-12 上海顺砾智能科技有限公司 The control method and device of Intelligent unattended machine
CN108196534A (en) * 2017-12-26 2018-06-22 广东工业大学 A kind of multi-rotor unmanned aerial vehicle control terminal, flight control system and control method
CN108375986A (en) * 2018-03-30 2018-08-07 深圳市道通智能航空技术有限公司 Control method, device and the terminal of unmanned plane
CN108475072A (en) * 2017-04-28 2018-08-31 深圳市大疆创新科技有限公司 A kind of tracking and controlling method, device and aircraft
CN108513643A (en) * 2017-08-31 2018-09-07 深圳市大疆创新科技有限公司 A kind of paths planning method, aircraft, flight system
WO2018191840A1 (en) * 2017-04-17 2018-10-25 英华达(上海)科技有限公司 Interactive photographing system and method for unmanned aerial vehicle
CN108854031A (en) * 2018-05-29 2018-11-23 深圳臻迪信息技术有限公司 The method and relevant apparatus of exercise data are analyzed by unmanned camera work
CN109151435A (en) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 A kind of data processing method, terminal, server and computer storage medium
CN109661631A (en) * 2018-03-27 2019-04-19 深圳市大疆创新科技有限公司 Control method, device and the unmanned plane of unmanned plane
CN109709554A (en) * 2018-12-13 2019-05-03 广州极飞科技有限公司 Operating equipment and its control method and device
CN109859264A (en) * 2017-11-30 2019-06-07 北京机电工程研究所 A kind of aircraft of view-based access control model guiding catches control tracking system
WO2019140699A1 (en) * 2018-01-22 2019-07-25 SZ DJI Technology Co., Ltd. Methods and system for multi-target tracking
WO2019144291A1 (en) * 2018-01-23 2019-08-01 深圳市大疆创新科技有限公司 Flight control method, apparatus, and machine-readable storage medium
CN110325879A (en) * 2017-02-24 2019-10-11 亚德诺半导体无限责任公司 System and method for compress three-dimensional depth sense
CN110687902A (en) * 2016-12-21 2020-01-14 杭州零零科技有限公司 System and method for controller-free user drone interaction
CN111199576A (en) * 2019-12-25 2020-05-26 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN111247792A (en) * 2019-04-28 2020-06-05 深圳市大疆创新科技有限公司 Control method of unmanned aerial vehicle, unmanned aerial vehicle and computer readable storage medium
CN111275760A (en) * 2020-01-16 2020-06-12 上海工程技术大学 Unmanned aerial vehicle target tracking system and method based on 5G and depth image information
US10719087B2 (en) 2017-08-29 2020-07-21 Autel Robotics Co., Ltd. Target tracking method, unmanned aerial vehicle, and computer readable storage medium
CN111490491A (en) * 2020-04-30 2020-08-04 国网上海市电力公司 Ultra-high voltage transmission line inspection unmanned aerial vehicle based on deep learning
CN112753210A (en) * 2020-04-26 2021-05-04 深圳市大疆创新科技有限公司 Movable platform, control method thereof and storage medium
CN113129468A (en) * 2021-04-06 2021-07-16 深圳市艾赛克科技有限公司 Underground pipe gallery inspection method based on unmanned aerial vehicle
CN113128447A (en) * 2021-04-29 2021-07-16 深圳市道通智能航空技术股份有限公司 Mask identification method and device, unmanned aerial vehicle and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887587A (en) * 2010-07-07 2010-11-17 南京邮电大学 Multi-target track method based on moving target detection in video monitoring
CN102142147A (en) * 2010-01-29 2011-08-03 索尼公司 Device and method for analyzing site content as well as device and method for detecting and tracking target
CN102779347A (en) * 2012-06-14 2012-11-14 清华大学 Method and device for tracking and locating target for aircraft
US20130136300A1 (en) * 2011-11-29 2013-05-30 Qualcomm Incorporated Tracking Three-Dimensional Objects
CN103926933A (en) * 2014-03-29 2014-07-16 北京航空航天大学 Indoor simultaneous locating and environment modeling method for unmanned aerial vehicle
CN104243901A (en) * 2013-06-21 2014-12-24 中兴通讯股份有限公司 Multi-target tracking method based on intelligent video analysis platform and system of multi-target tracking method
CN104236548A (en) * 2014-09-12 2014-12-24 清华大学 Indoor autonomous navigation method for micro unmanned aerial vehicle
CN104808799A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof
US20150244976A1 (en) * 2014-02-26 2015-08-27 Microsoft Corporation Telepresence experience
CN205453893U (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘焜, 蔡江辉, 刘小君 et al.: "Active Contour Model Methods for Deformable Curves and Surfaces" (《变形曲线曲面主动轮廓模型方法》), 30 September 2012 *
朱德海: "Point Cloud Library (PCL) Tutorial" (《点云库PCL学习教程》), 31 October 2012 *

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106020227A (en) * 2016-08-12 2016-10-12 北京奇虎科技有限公司 Control method and device for unmanned aerial vehicle
CN106020227B (en) * 2016-08-12 2019-02-26 北京奇虎科技有限公司 The control method of unmanned plane, device
CN106347550A (en) * 2016-09-05 2017-01-25 北京小米移动软件有限公司 Method and device for controlling balance car
CN106347550B (en) * 2016-09-05 2019-08-06 北京小米移动软件有限公司 Balance car control method and device
WO2018058309A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Control method, control device, electronic device, and aerial vehicle control system
CN108351651A (en) * 2016-09-27 2018-07-31 深圳市大疆创新科技有限公司 A kind of control method, device and aircraft based on image
WO2018058264A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Video-based control method, device, and flying apparatus
CN106778474A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D human body recognition methods and equipment
CN110687902B (en) * 2016-12-21 2020-10-20 杭州零零科技有限公司 System and method for controller-free user drone interaction
CN110687902A (en) * 2016-12-21 2020-01-14 杭州零零科技有限公司 System and method for controller-free user drone interaction
CN106774947A (en) * 2017-02-08 2017-05-31 亿航智能设备(广州)有限公司 A kind of aircraft and its control method
CN110325879B (en) * 2017-02-24 2024-01-02 亚德诺半导体国际无限责任公司 System and method for compressed three-dimensional depth sensing
CN110325879A (en) * 2017-02-24 2019-10-11 亚德诺半导体无限责任公司 System and method for compress three-dimensional depth sense
CN106843275A (en) * 2017-04-01 2017-06-13 成都通甲优博科技有限责任公司 A kind of unmanned plane pinpoints method, device and the system of being diversion
WO2018191840A1 (en) * 2017-04-17 2018-10-25 英华达(上海)科技有限公司 Interactive photographing system and method for unmanned aerial vehicle
CN109121434B (en) * 2017-04-17 2021-07-27 英华达(上海)科技有限公司 Unmanned aerial vehicle interactive shooting system and method
CN109121434A (en) * 2017-04-17 2019-01-01 英华达(上海)科技有限公司 Unmanned plane interaction camera system and method
CN108475072A (en) * 2017-04-28 2018-08-31 深圳市大疆创新科技有限公司 A kind of tracking and controlling method, device and aircraft
US11587355B2 (en) * 2017-04-28 2023-02-21 SZ DJI Technology Co., Ltd. Tracking control method, device, and aircraft
WO2018195979A1 (en) * 2017-04-28 2018-11-01 深圳市大疆创新科技有限公司 Tracking control method and apparatus, and flight vehicle
CN106991413A (en) * 2017-05-04 2017-07-28 上海耐相智能科技有限公司 A kind of unmanned plane
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN107374638A (en) * 2017-07-07 2017-11-24 华南理工大学 A kind of height measuring system and method based on binocular vision module
CN107505951A (en) * 2017-08-29 2017-12-22 深圳市道通智能航空技术有限公司 A kind of method for tracking target, unmanned plane and computer-readable recording medium
WO2019041534A1 (en) * 2017-08-29 2019-03-07 深圳市道通智能航空技术有限公司 Target tracking method, unmanned aerial vehicle and computer-readable storage medium
CN107505951B (en) * 2017-08-29 2020-08-21 深圳市道通智能航空技术有限公司 Target tracking method, unmanned aerial vehicle and computer readable storage medium
US10719087B2 (en) 2017-08-29 2020-07-21 Autel Robotics Co., Ltd. Target tracking method, unmanned aerial vehicle, and computer readable storage medium
CN108513643A (en) * 2017-08-31 2018-09-07 深圳市大疆创新科技有限公司 A kind of paths planning method, aircraft, flight system
CN107844744A (en) * 2017-10-09 2018-03-27 平安科技(深圳)有限公司 With reference to the face identification method, device and storage medium of depth information
CN108153325A (en) * 2017-11-13 2018-06-12 上海顺砾智能科技有限公司 The control method and device of Intelligent unattended machine
CN107894836B (en) * 2017-11-22 2020-10-09 河南大学 Human-computer interaction method for processing and displaying remote sensing image based on gesture and voice recognition
CN107894836A (en) * 2017-11-22 2018-04-10 河南大学 Remote sensing image processing and the man-machine interaction method of displaying based on gesture and speech recognition
CN109859264A (en) * 2017-11-30 2019-06-07 北京机电工程研究所 A kind of aircraft of view-based access control model guiding catches control tracking system
CN108196534A (en) * 2017-12-26 2018-06-22 广东工业大学 A kind of multi-rotor unmanned aerial vehicle control terminal, flight control system and control method
CN111527463A (en) * 2018-01-22 2020-08-11 深圳市大疆创新科技有限公司 Method and system for multi-target tracking
CN111527463B (en) * 2018-01-22 2024-02-23 深圳市大疆创新科技有限公司 Method and system for multi-target tracking
US11704812B2 (en) 2018-01-22 2023-07-18 SZ DJI Technology Co., Ltd. Methods and system for multi-target tracking
WO2019140699A1 (en) * 2018-01-22 2019-07-25 SZ DJI Technology Co., Ltd. Methods and system for multi-target tracking
CN110312978A (en) * 2018-01-23 2019-10-08 深圳市大疆创新科技有限公司 Flight control method, device and machine readable storage medium
WO2019144291A1 (en) * 2018-01-23 2019-08-01 深圳市大疆创新科技有限公司 Flight control method, apparatus, and machine-readable storage medium
CN110312978B (en) * 2018-01-23 2022-06-24 深圳市大疆创新科技有限公司 Flight control method, flight control device and machine-readable storage medium
CN109661631A (en) * 2018-03-27 2019-04-19 深圳市大疆创新科技有限公司 Control method, device and the unmanned plane of unmanned plane
CN108375986A (en) * 2018-03-30 2018-08-07 深圳市道通智能航空技术有限公司 Control method, device and the terminal of unmanned plane
CN108854031A (en) * 2018-05-29 2018-11-23 深圳臻迪信息技术有限公司 The method and relevant apparatus of exercise data are analyzed by unmanned camera work
CN109151435A (en) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 A kind of data processing method, terminal, server and computer storage medium
CN109709554B (en) * 2018-12-13 2021-01-19 广州极飞科技有限公司 Work device, and control method and device thereof
CN109709554A (en) * 2018-12-13 2019-05-03 广州极飞科技有限公司 Operating equipment and its control method and device
CN111247792A (en) * 2019-04-28 2020-06-05 深圳市大疆创新科技有限公司 Control method of unmanned aerial vehicle, unmanned aerial vehicle and computer readable storage medium
CN111199576A (en) * 2019-12-25 2020-05-26 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN111199576B (en) * 2019-12-25 2023-08-18 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN111275760A (en) * 2020-01-16 2020-06-12 上海工程技术大学 Unmanned aerial vehicle target tracking system and method based on 5G and depth image information
CN112753210A (en) * 2020-04-26 2021-05-04 深圳市大疆创新科技有限公司 Movable platform, control method thereof and storage medium
CN111490491A (en) * 2020-04-30 2020-08-04 国网上海市电力公司 Ultra-high voltage transmission line inspection unmanned aerial vehicle based on deep learning
CN113129468A (en) * 2021-04-06 2021-07-16 深圳市艾赛克科技有限公司 Underground pipe gallery inspection method based on unmanned aerial vehicle
CN113128447A (en) * 2021-04-29 2021-07-16 深圳市道通智能航空技术股份有限公司 Mask identification method and device, unmanned aerial vehicle and storage medium

Similar Documents

Publication Publication Date Title
CN105847684A (en) Unmanned aerial vehicle
CN205453893U (en) Unmanned aerial vehicle
CN105786016B (en) Unmanned aerial vehicle and RGBD image processing method
CN105892474A (en) Unmanned aerial vehicle and control method thereof
CN105912980B (en) Unmanned aerial vehicle and UAV system
US11861892B2 (en) Object tracking by an unmanned aerial vehicle using visual sensors
US11749124B2 (en) User interaction with an autonomous unmanned aerial vehicle
US11755041B2 (en) Objective-based control of an autonomous unmanned aerial vehicle
CN205693767U (en) Unmanned aerial system (UAS)
Hu et al. Bio-inspired embedded vision system for autonomous micro-robots: The LGMD case
CN105930767B (en) Action recognition method based on human skeleton
WO2019006760A1 (en) Gesture recognition method and device, and movable platform
CN108303994B (en) Group control interaction method for unmanned aerial vehicle
CN109398688A (en) Target positioning and grasping system and method for a dual-manipulator rotor aircraft
CN105717933A (en) Unmanned aerial vehicle and unmanned aerial vehicle anti-collision method
CN110473232A (en) Image-recognizing method, device, storage medium and electronic equipment
CN109063532B (en) Unmanned aerial vehicle-based method for searching for out-of-contact persons in the field
KR102560798B1 (en) unmanned vehicle simulator
CN113228103A (en) Target tracking method, device, unmanned aerial vehicle, system and readable storage medium
CN108475442A (en) Augmented reality method and processor for unmanned aerial vehicle, and unmanned aerial vehicle
Fischer et al. Markerless perspective taking for humanoid robots in unconstrained environments
Shen et al. Person tracking and frontal face capture with UAV
CN111966217A (en) Unmanned aerial vehicle control method and system based on gestures and eye movements
CN105159452A (en) Control method and system based on human face pose estimation
CN105930766A (en) Unmanned aerial vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160810