
CN105892474A - Unmanned plane and control method of unmanned plane - Google Patents

Unmanned plane and control method of unmanned plane Download PDF

Info

Publication number
CN105892474A
CN105892474A
Authority
CN
China
Prior art keywords
unmanned plane
gesture
rgbd
target
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610200004.3A
Other languages
Chinese (zh)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201610200004.3A
Publication of CN105892474A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08: Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an unmanned aerial vehicle (UAV) and a control method for the UAV. The UAV comprises an RGBD camera, a flight controller and a processor, the processor being connected with the RGBD camera and the flight controller. The RGBD camera obtains RGBD images of a target in real time during the flight of the UAV, each pixel in the RGBD images containing R, G and B pixel information and corresponding depth information. The processor processes the R, G and B pixel information and the corresponding depth information in real time, identifies the gestures of a user, and outputs corresponding control instructions according to the gestures. The flight controller adjusts the flight attitude and/or shooting mode of the UAV according to the control instructions. In this way, the transmission of control instructions from a remote controller is reduced, bandwidth usage is lowered, and the user experience is improved.

Description

Unmanned aerial vehicle (UAV) and UAV control method
Technical field
The present invention relates to the field of unmanned aerial vehicles (UAVs), and in particular to a UAV and a UAV control method.
Background art
With the development of microelectronics and computer vision technology, real-time target tracking has become feasible. Mounting a target tracking device on a UAV, in particular, enables flexible dynamic tracking of a target, which has high practical value in both military and civilian fields.
Traditional UAV target tracking generally relies on active environment-sensing methods such as laser, radar and ultrasound. Their drawbacks are that they cannot directly obtain unknown information about the target, that multiple UAVs may interfere with one another during detection, and, worse, that they are poorly concealed, greatly increasing the probability of being discovered by the enemy in a battlefield environment.
Existing UAV development is generally directed at increasing endurance, improving speed, stealth, reducing size, raising intelligence, carrying weapons, and enhancing transmission reliability and versatility, so that a UAV can complete predetermined combat tasks according to instructions or pre-prepared programs. The camera on an existing UAV is normally a 2D camera that captures 2D images, in which each pixel contains only red (Red, R), green (Green, G) and blue (Blue, B) pixel information, without depth information D. Such a UAV therefore cannot automatically achieve target tracking and shooting based on the captured 2D images.
Summary of the invention
Embodiments of the present invention provide a UAV and a UAV control method capable of automatically tracking and shooting a target.
The present invention provides a UAV comprising an RGBD camera, a flight controller and a processor, the processor being connected with the RGBD camera and the flight controller, wherein: the RGBD camera obtains RGBD images of a target in real time during the flight of the UAV, each pixel in the RGBD images containing R, G, B pixel information and corresponding depth information; the processor processes the R, G, B pixel information and the corresponding depth information in real time, identifies the user's gesture, and outputs a corresponding control instruction according to the gesture; and the flight controller adjusts the flight attitude and/or shooting mode of the UAV according to the control instruction.
The RGBD camera is further used to obtain an RGBD image sequence of the target in real time; the processor identifies the user's gesture from the RGBD image sequence and outputs a control instruction according to the gesture.
The processor also selects the shooting mode of the RGBD camera according to the control instruction. The shooting mode includes at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, and shooting an RGBD image sequence of the target.
The user's gestures include, but are not limited to, grasping, naturally raising a hand, pushing forward, and waving up, down, left or right.
The processor is further used to receive voice input from the target and to output a control instruction according to the voice.
The present invention further provides a UAV control method, comprising: obtaining RGBD images of a target in real time during the flight of the UAV, each pixel in the RGBD images containing R, G, B pixel information and corresponding depth information; processing the R, G, B pixel information and the corresponding depth information in real time, identifying the user's gesture, and outputting a corresponding control instruction according to the gesture; and adjusting the flight attitude and/or shooting mode of the UAV according to the control instruction.
The step of obtaining RGBD images of the target in real time during flight includes obtaining an RGBD image sequence of the target in real time. The step of processing the R, G, B pixel information in real time, identifying the user's gesture and outputting the corresponding control instruction includes identifying the user's gesture from the RGBD image sequence and outputting the control instruction according to the gesture.
The method also includes selecting the shooting mode of the RGBD camera according to the control instruction. The shooting mode includes at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, and shooting an RGBD image sequence of the target.
The step of adjusting the flight attitude and shooting mode of the UAV according to the control instruction may include simultaneously adjusting the flight attitudes and/or shooting modes of multiple UAVs according to the control instruction.
The step of adjusting the flight attitude and shooting mode of the UAV according to the control instruction may also include activating at least one of multiple UAVs according to the control instruction to further adjust its flight attitude and/or shooting mode.
The user's gestures include, but are not limited to, grasping, naturally raising a hand, pushing forward, and waving up, down, left or right.
The step of processing the R, G, B pixel information in real time, identifying the user's gesture and outputting the corresponding control instruction includes receiving voice input from the target and outputting a control instruction according to the voice.
With the above scheme, the beneficial effects of the invention are as follows. The RGBD camera obtains RGBD image information in real time during the flight of the UAV, the image information including R, G, B pixel information and corresponding depth information. The processor processes the R, G, B pixel information and the corresponding depth information in real time, identifies the user's gesture, and outputs the corresponding control instruction according to the gesture. The flight controller adjusts the flight attitude and/or shooting mode of the UAV according to the control instruction. This reduces the transmission of control instructions from a remote controller, which in turn reduces bandwidth usage and improves the user experience.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the UAV of the second embodiment of the present invention;
Fig. 3 is a schematic flowchart of the UAV control method of the first embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of the present invention. As shown in Fig. 1, the unmanned aerial vehicle (unmanned air vehicle, UAV) 10 includes an RGBD camera 11, a flight controller 12 and a processor 13. The processor 13 is connected with the RGBD camera 11 and the flight controller 12. The RGBD camera 11 obtains RGBD images in real time during the flight of the UAV 10. Each pixel in an RGBD image contains R, G, B pixel information and corresponding depth information. The depth information of the pixels forms a two-dimensional pixel matrix of the scene, referred to as a depth map. Each pixel corresponds to its position in the scene and has a pixel value representing the distance from some reference position to that scene position. In other words, the depth map has the form of an image, but its pixel values indicate the topographical information of objects in the scene rather than brightness and/or color. The processor 13 processes the R, G, B pixel information and the corresponding depth information in real time, identifies the user's gesture, and outputs the corresponding control instruction according to the gesture. The flight controller 12 adjusts the flight attitude and/or shooting mode of the UAV 10 according to the control instruction, where the flight attitude includes takeoff, hovering, pitch, roll, yaw, landing, and so on. The UAV of the embodiment of the present invention obtains the user's gesture directly from the RGBD images captured by the RGBD camera and adjusts its flight attitude accordingly, which reduces the transmission of control instructions from a remote controller, thereby reducing bandwidth usage and improving the user experience.
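As a rough sketch of the RGBD data layout described above (array sizes, values and variable names are illustrative assumptions, not taken from the patent), an RGBD frame can be held as a height x width x 4 array whose last channel is the depth map:

```python
import numpy as np

# A hypothetical 4x4 RGBD frame: three color channels plus a depth
# channel (depth in metres). In a real system this comes from the camera.
h, w = 4, 4
rgb = np.zeros((h, w, 3), dtype=np.uint8)       # R, G, B pixel information
depth = np.full((h, w), 5.0, dtype=np.float32)  # depth map: distance per pixel
depth[1:3, 1:3] = 1.2                           # a nearby object at 1.2 m

# Stack into a single H x W x 4 RGBD image, as the description suggests:
rgbd = np.dstack([rgb.astype(np.float32), depth[..., None]])
print(rgbd.shape)  # (4, 4, 4): each pixel holds R, G, B and D
```

The depth channel carries geometry (distance), not brightness, which is what makes the segmentation and reconstruction steps later in the description possible.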
The UAV 10 also includes a flight assembly and a gimbal (not shown). The RGBD camera 11 is mounted on the gimbal, which responds to attitude changes of the carrier in order to stabilize the RGBD camera 11 so that it can conveniently obtain RGBD images of the target in real time. The flight assembly may include rotors or fixed wings, and ensures normal flight and a stable flight attitude during flight. Preferably, taking a quadrotor as an example, the four propellers are arranged in a cross-shaped structure; the four rotors are divided into two groups, opposite rotors having the same direction of rotation and the two groups rotating in different directions. Unlike a traditional helicopter, a quadrotor can perform various maneuvers only by changing the propeller speeds.
In the embodiment of the present invention, when the RGBD camera 11 obtains RGBD images of the target in real time, a multi-angle RGBD image sequence can be obtained either by shooting the scene from different angles with multiple RGBD cameras 11 or by moving a single RGBD camera 11 to different positions. Based on the captured material, the user can perform scene reconstruction using multiple images or video keyframes. When shooting with a single RGBD camera 11, the movement of the camera can be regarded as the movement of the viewpoint: if the camera moves horizontally during shooting, a larger scene can be captured; if it rotates around an object while shooting, different views of the same object can be captured.
In the embodiment of the present invention, the processor 13 also selects the shooting mode of the RGBD camera according to the control instruction. The shooting mode includes at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, and shooting an RGBD image sequence of the target.
In the embodiment of the present invention, the processor 13 performs recognition processing on the R, G, B pixel information in real time to lock onto the target. Specifically, a color image segmentation method may be applied: the background is segmented out using its texture, and the background image is then subtracted from the original image to obtain the target image. Of course, in other embodiments of the present invention, other methods may be applied to identify and lock onto the target. When the target is a specific human body, the processor 13 can detect the facial features of the human body from the R, G, B pixel information to lock onto it. The RGBD camera 11 may shoot the target from the front, the side, the back, the top, or any combination of these.
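The background-subtraction step above can be sketched as follows; the threshold value and the image contents are illustrative assumptions, not values from the patent:

```python
import numpy as np

def segment_target(original, background, thresh=30):
    """Lock onto the target by subtracting a background image from the
    original image, in the spirit of the color-image-segmentation step
    described above. Returns a boolean mask of target pixels."""
    diff = np.abs(original.astype(np.int16) - background.astype(np.int16))
    return diff.sum(axis=2) > thresh  # per-pixel total RGB difference

background = np.zeros((4, 4, 3), dtype=np.uint8)
original = background.copy()
original[1:3, 1:3] = [200, 150, 120]     # a target-like foreground region
mask = segment_target(original, background)
print(mask.sum())  # 4 target pixels
```

Casting to a signed type before subtracting avoids uint8 wrap-around, a common bug in naive image differencing.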
In the embodiment of the present invention, the processor 13 identifies whether the target is rigid or non-rigid according to the R, G, B pixel information and the corresponding depth information. Specifically, the depth information can be used to identify the contour of the target and distinguish whether the contour is rigid or non-rigid, recognizing the target as either a dynamic living being (such as a human body) or a non-rigid object. If the target is a rigid body, it is identified as an object, and whether the target is moving actively is determined. A rigid body here is an object whose three-dimensional structure does not change with motion, whereas the three-dimensional structure of a non-rigid body changes as it moves.
The processor 13 also performs feature recognition on the target using the RGB color information, identifying the contour and color information of objects and extracting more features of the target to improve recognition accuracy. The recognition methods are not limited to commonly used training methods, such as machine learning and deep learning algorithms. For example, the RGB information can be used to perform skin color detection on a dynamic living target; if it matches human skin color features, the target is identified as a human, otherwise as non-human. The processor 13 can also process information from sound, infrared and other sensors to identify and detect the target and its features, improving accuracy.
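The patent does not specify its skin color test; one widely cited RGB skin-color heuristic can serve as a hedged stand-in for the detection step just described:

```python
def looks_like_skin(r, g, b):
    """A commonly cited RGB skin-color rule (an illustrative stand-in;
    the patent does not give its actual criterion). Works on 0-255
    channel values under roughly uniform daylight illumination."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

print(looks_like_skin(220, 170, 140))  # True: a typical skin tone
print(looks_like_skin(30, 120, 200))   # False: sky blue
```

A real system would apply such a rule per pixel and then keep only connected regions of plausible size, or replace it with a trained classifier as the text suggests.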
If the target is recognized as a human body, the processor 13 identifies body parts such as the torso, limbs, hands and face, and extracts information such as height, arm length, shoulder width, hand size, face size, and facial expression features. Since the human body is non-rigid, it cannot keep the same posture during a long tracking shoot and is prone to non-rigid changes, so model reconstruction is needed to avoid non-rigid changes in the data. The processor 13 first removes the background from the depth image of the target captured by the RGBD camera 11. Since the depth values of background pixels are larger than those of the human body, the processor 13 can choose a suitable threshold; when the depth value of a pixel is greater than this threshold, the pixel is marked as a background point and removed from the depth image, yielding the human body point cloud data. The processor 13 then converts the point cloud data into triangle mesh data; specifically, the four-neighborhood of the depth image can be used as the connecting topology, and the point cloud generates triangle mesh data according to this topology. The processor 13 further denoises the point cloud data: large noise can be removed by averaging the multi-frame point cloud data of each viewpoint, and small noise is then removed with bilateral filtering. Finally, the processor 13 stitches the triangle mesh data from multiple viewpoints together into a whole for model reconstruction. The processor 13 can use an iterative algorithm to reconstruct the three-dimensional human model. In the iterative algorithm, the correspondence points between the standard model and the collected data are first found, to serve as constraint points for the subsequent deformation. Then, with the constraint points as an energy term, the objective function is minimized so that the standard model is deformed to fit the scan data, finally yielding the parameters of the deformed standard model in human space. The computed human parameters are used in the next iteration, and after several iterations the reconstruction of the three-dimensional human model is complete. Body parts such as the torso, limbs, hands and face can then be identified, and information such as height, arm length, shoulder width, hand size, face size and facial expression features can be extracted.
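The depth-threshold background removal and the back-projection of the remaining pixels into a point cloud can be sketched as below, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy); the concrete numbers are illustrative, not from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, bg_thresh):
    """Mark pixels deeper than bg_thresh as background and drop them,
    then back-project the remaining (foreground) pixels with a pinhole
    model. Intrinsics are assumed known from camera calibration; the
    patent does not give a concrete procedure."""
    v, u = np.nonzero(depth < bg_thresh)   # keep only near (human) pixels
    z = depth[v, u]
    x = (u - cx) * z / fx                  # back-project to camera space
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])      # N x 3 point cloud

depth = np.full((4, 4), 5.0)   # background wall at 5 m
depth[1:3, 1:3] = 1.0          # a person-like blob at 1 m
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0, bg_thresh=2.0)
print(pts.shape)  # (4, 3): four foreground points survive
```

Triangulating this cloud with the depth image's four-neighborhood, as the text describes, then follows directly from the (v, u) pixel grid.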
The processor 13 obtains the user's gesture from the three-dimensional model of the target; alternatively, when obtaining the three-dimensional model of the target, it can obtain only the three-dimensional model of the user's hand and derive the gesture from it. The processor 13 outputs a control instruction according to the user's gesture. Here the gesture may include five-finger open/close gestures, i.e. a five-finger open gesture and a five-finger closed gesture.
In the embodiment of the present invention, the RGBD camera 11 is also used to obtain an RGBD image sequence of the target in real time. The processor 13 identifies the user's gesture from the RGBD image sequence and outputs a control instruction according to the gesture. Specifically, after completing the three-dimensional modeling of the target, the processor 13 uses the skeleton mesh of the hand region and the RGBD image sequence to generate the motion trajectory of the target, the skeleton mesh being the triangle mesh formed during the three-dimensional modeling of the target. The processor 13 then analyzes the posture and action of the target according to its motion trajectory, and analyzes the user's gesture according to the posture and action. Here the user's gestures include, but are not limited to, grasping, naturally raising a hand, pushing forward, and waving up, down, left or right. Different gestures correspond to different control instructions; for example, naturally raising a hand means starting the UAV 10, and waving up, down, left or right represents control instructions for adjusting the heading of the UAV 10, which will not be detailed here. Of course, the processor 13 can also extract identity information by analyzing the posture, actions and behavior patterns of the target, and thereby classify the user as a child, an elderly person, an adolescent, and so on.
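The gesture-to-instruction mapping might look like the toy table below; apart from "raising a hand starts the UAV" and "waving adjusts heading", which the text gives as examples, the instruction names are invented for illustration:

```python
# Illustrative mapping from recognized gestures to control instructions.
GESTURE_COMMANDS = {
    "raise_hand": "START",          # example given in the description
    "wave_up": "PITCH_UP",          # waving adjusts heading/attitude
    "wave_down": "PITCH_DOWN",
    "wave_left": "YAW_LEFT",
    "wave_right": "YAW_RIGHT",
    "push_forward": "MOVE_FORWARD",
    "grasp": "CAPTURE_IMAGE",
}

def control_instruction(gesture):
    # Unknown gestures fall back to holding position (an assumption).
    return GESTURE_COMMANDS.get(gesture, "HOVER")

print(control_instruction("raise_hand"))  # START
print(control_instruction("somersault"))  # HOVER
```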
A more specific implementation process is as follows:
The RGBD camera 11 shoots RGBD images of the user; the processor 13 obtains the skeleton mesh data of the hand in real time from the RGBD images, extracts the skeleton mesh data relevant to the operation of the UAV 10, and compares the obtained skeleton mesh data with the skeleton mesh data of operation-related gesture models prestored in the UAV 10. If the obtained skeleton mesh data is within a preset threshold range of the mesh data of a gesture model, the processor 13 binds the obtained skeleton mesh data with the prestored gesture model. The processor 13 uses the coordinates of the skeleton mesh data of each frame in the gesture RGBD image sequence obtained by the RGBD camera to make the gesture model's motion coherent, so that the gesture model in the scene simulates the motion of the gesture actually input by the user. The processor 13 then performs the action matched to the gesture model by the obtained gesture. The UAV 10 in the embodiment of the present invention also includes a storage unit for storing video, the 3D model of the target processed by the processor 13, the prestored gesture models, and so on.
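The threshold comparison between acquired skeleton mesh data and prestored gesture models can be sketched as follows; the five-vertex toy models, the mean per-vertex distance measure, and the 0.1 threshold are all illustrative assumptions:

```python
import numpy as np

def match_gesture_model(skeleton, models, thresh=0.1):
    """Compare acquired hand skeleton-mesh coordinates with prestored
    gesture models and bind to the first model whose mean per-vertex
    distance falls within the preset threshold. Returns the model name,
    or None if nothing is close enough."""
    for name, model in models.items():
        if np.mean(np.linalg.norm(skeleton - model, axis=1)) < thresh:
            return name
    return None

fist = np.zeros((5, 3))             # toy 5-vertex "closed hand" model
open_hand = np.ones((5, 3))         # toy "open hand" model
observed = np.zeros((5, 3)) + 0.02  # noisy observation near the fist
print(match_gesture_model(observed, {"fist": fist, "open": open_hand}))  # fist
```

In practice the vertices would be normalized for hand position and scale before comparison, otherwise the same gesture at a different distance would never match.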
In the embodiment of the present invention, before the flight controller 12 adjusts the flight attitude and/or shooting mode of the UAV according to the user's gesture, the UAV can be activated according to a different gesture of the user.
In the embodiment of the present invention, the processor 13 is also used to receive voice input from the target and output a control instruction according to the voice, so that the flight controller 12 adjusts the flight attitude of the UAV 10 according to the control instruction. Specifically, referring to Fig. 3, the UAV 10 also includes a voice acquisition module 14, which is electrically connected with the processor 13. The voice acquisition module 14 obtains the voice input by the user, and the processor 13 generates a control instruction according to the voice. The flight controller 12 adjusts the flight attitude of the UAV 10 according to the control instruction.
Specifically, a remote control device can perform face recognition and voiceprint recognition. For face recognition, a face database pre-saves face information (for example, a face image detected by infrared signals, retaining physiological features such as interocular distance and eye length); during acquisition, the face data collected by infrared signals is compared with the data in the face database. If face recognition passes, the received voice is further checked to determine whether it is the voice of someone with voice-control authority; the authority corresponding to the voice is determined, and speech recognition is performed. The remote control device then decides, according to the result of face recognition, whether to accept the voice. Each person entitled to send voice control commands uploads a section of training voice, from which a voiceprint database is built. For voiceprint comparison, the person sends a voice instruction, which is compared against the voiceprint database. Through the voiceprint and face information, the corresponding identity information is looked up in the voiceprint and face databases, thereby confirming the person's authority.
The remote control device then sends the voice instruction to the voice acquisition module 14 of the UAV. The voice acquisition module 14 verifies the security of the voice instruction; after verification, the processor 13 generates a control instruction from the voice instruction and sends it to the flight controller 12 of the UAV. The flight controller 12 looks up the operation time required for the corresponding instruction according to the symbol of the received instruction, and then appends this operation time after the voice instruction (in fact a code). The flight controller 12 controls the flight attitude of the UAV 10 according to the control instruction, such as flight speed, flight altitude, flight path, and the distance to surrounding obstacles.
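The instruction-symbol lookup with the operation time appended might be sketched as below; the instruction codes, durations, and output format are invented for illustration, since the patent only describes the lookup-and-append behavior:

```python
# Hypothetical table: operation time (ms) per instruction code.
OPERATION_TIME_MS = {"ASCEND": 1500, "HOVER": 0, "LAND": 3000}

def annotate_instruction(code):
    """Look up the operation time for a received instruction code and
    append it to the code, mimicking the step described above."""
    ms = OPERATION_TIME_MS.get(code)
    if ms is None:
        raise ValueError(f"unknown instruction code: {code}")
    return f"{code}+{ms}ms"

print(annotate_instruction("LAND"))  # LAND+3000ms
```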
Fig. 3 is a schematic flowchart of the UAV control method of the first embodiment of the present invention. As shown in Fig. 3, the UAV control method includes:
Step S10: obtain RGBD images of the target in real time during the flight of the UAV; each pixel in the RGBD images contains R, G, B pixel information and corresponding depth information.
The depth information of the pixels forms a two-dimensional pixel matrix of the scene, referred to as a depth map. Each pixel corresponds to its position in the scene and has a pixel value representing the distance from some reference position to that scene position. In other words, the depth map has the form of an image, but its pixel values indicate the topographical information of objects in the scene rather than brightness and/or color.
In step S10, the scene can be shot from different angles using multiple RGBD cameras 11, or the images can be obtained by moving a single RGBD camera to different positions. Based on the captured material, the user can perform scene reconstruction using multiple images or video keyframes. When shooting with a single RGBD camera, the movement of the camera can be regarded as the movement of the viewpoint: if the camera moves horizontally during shooting, a larger scene can be captured; if it rotates around an object while shooting, different views of the same object can be captured.
Step S11: process the R, G, B pixel information and the corresponding depth information in real time, identify the user's gesture, and output a corresponding control instruction according to the gesture.
In step S11, recognition processing is performed on the R, G, B pixel information in real time to lock onto the target. Specifically, a color image segmentation method can be applied: the background is segmented out using its texture, and the background image is then subtracted from the original image to obtain the target image. Of course, in other embodiments of the present invention, other methods may also be applied to identify and lock onto the target.
In step S11, feature recognition can also be performed on the target using the RGB color information, identifying the contour and color information of objects and extracting more target features to improve recognition accuracy. The recognition methods are not limited to commonly used training methods, such as machine learning and deep learning algorithms. For example, the RGB information can be used to perform skin color detection on a dynamic living target; if it matches human skin color features, the target is identified as human, otherwise as non-human. In addition, information from sound, infrared and other sensors can also be processed to identify and detect the target and its features, improving accuracy.
When the three-dimensional model of the target is built from the R, G, B pixel information and the corresponding depth information, the background is first removed from the depth image of the target captured by the RGBD camera. Since the depth values of background pixels are larger than those of the human body, a suitable threshold can be chosen; when the depth value of a pixel exceeds this threshold, the pixel is marked as a background point and removed from the depth image, yielding the human body point cloud. The point cloud data is then converted into triangle mesh data; specifically, the four-neighborhood of the depth image can be used as the connecting topology, and the point cloud generates triangle mesh data according to this topology. The point cloud data is further denoised: large noise can be removed by averaging the multi-frame point cloud data of each viewpoint, and small noise is then removed with bilateral filtering. Finally, the triangle mesh data from multiple viewpoints is stitched together into a whole for model reconstruction. An iterative algorithm can be used to reconstruct the three-dimensional human model: the correspondence points between the standard model and the collected data are first found, to serve as constraint points for the subsequent deformation; then, with the constraint points as an energy term, the objective function is minimized so that the standard model is deformed to fit the scan data, finally yielding the parameters of the deformed standard model in human space; the computed human parameters are used in the next iteration, and after several iterations the reconstruction of the three-dimensional human model is complete.
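The small-noise removal by bilateral filtering mentioned above can be sketched as a minimal (deliberately unoptimized) depth-map filter; the kernel radius and sigma parameters are illustrative assumptions:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Minimal bilateral filter for a depth map: averages neighbours
    weighted by both spatial distance and depth difference, so small
    noise is smoothed while depth edges (body vs. background) survive."""
    h, w = depth.shape
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            acc = norm = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        ws = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        wr = np.exp(-((depth[ii, jj] - depth[i, j]) ** 2)
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ii, jj]
                        norm += ws * wr
            out[i, j] = acc / norm
    return out

# A noisy 1 m surface next to a 5 m background column:
depth = np.array([[1.00, 1.02, 5.0],
                  [1.01, 0.99, 5.0],
                  [1.00, 1.00, 5.0]])
smoothed = bilateral_filter_depth(depth)
print(abs(smoothed[0, 2] - 5.0) < 0.05)  # edge preserved: True
```

A plain Gaussian blur would drag the 5 m pixels toward 1 m and blur the silhouette; the range weight `wr` is what prevents that.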
In step s 11, according to the gesture of the obtaining three-dimensional model user of target, or obtaining During the threedimensional model of target, can only obtain the threedimensional model of the hand difference of user and then obtain and use The gesture at family, thus export control instruction according to the gesture of user.Now the gesture of user can be wrapped Including the five fingers opening and closing gesture, the five fingers opening and closing gesture includes that the five fingers open gesture and the five fingers Guan Bi gesture.
In an embodiment of the present invention, in step S10, the RGBD image sequence of the target may also be obtained in real time. Accordingly, in step S11, the gesture of the user is identified according to the RGBD image sequence, and the control instruction is output according to the gesture. Specifically, after the three-dimensional modeling of the target is completed, the skeleton mesh of the hand region and the RGBD image sequence are used to generate the movement trajectory of the target, where the skeleton mesh is the triangle mesh formed during the three-dimensional modeling of the target. The posture action of the target is then analyzed according to the movement trajectory of the target, and the gesture of the user is derived from the posture action of the target. Here the gestures of the user include but are not limited to grasping, naturally raising a hand, pushing forward, and waving up, down, left and right.
A more specific implementation process is as follows:
The RGBD image of the user is captured by the RGBD camera, the skeleton mesh data of the human hand is obtained from the RGBD image in real time, and the skeleton mesh data relevant to operating the unmanned aerial vehicle is extracted and compared with the skeleton mesh data of the operation-related gesture models prestored in the unmanned aerial vehicle. If the obtained skeleton mesh data is within a preset threshold range of the mesh data of a gesture model, the obtained skeleton mesh data is bound to that prestored gesture model. The coordinates of the skeleton mesh data of each frame in the gesture RGBD image sequence obtained by the RGBD camera make the actions of the gesture model coherent, so that the gesture model in the scene reproduces the movement of the gesture actually input by the user, and the action matched to the gesture model is obtained from the gesture. The unmanned aerial vehicle in the embodiment of the present invention can also store video, the target 3D model obtained by processing, the prestored gesture models, and the like.
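The threshold comparison between captured skeleton data and prestored gesture models can be sketched as follows; the template names, the joint-coordinate layout, and the mean-joint-distance metric are illustrative assumptions rather than the exact data structures of the disclosure:

```python
import numpy as np

# Hypothetical template store: gesture name -> reference skeleton joint
# coordinates (N joints x 3), as prestored in the unmanned aerial vehicle.
GESTURE_TEMPLATES = {
    "raise_hand": np.array([[0.0, 1.0, 0.0], [0.0, 0.5, 0.0]]),
    "push_forward": np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5]]),
}

def match_gesture(skeleton, templates=GESTURE_TEMPLATES, threshold=0.2):
    """Return the name of the template whose mean per-joint distance to the
    observed skeleton falls within the preset threshold, or None."""
    best_name, best_dist = None, float("inf")
    for name, ref in templates.items():
        dist = np.linalg.norm(skeleton - ref, axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

A skeleton within the threshold of a template is "bound" to that gesture model; anything outside every threshold is rejected.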
Step S12: adjusting the flight attitude and/or the shooting mode of the unmanned aerial vehicle according to the control instruction.
The flight attitude of the unmanned aerial vehicle includes taking off, hovering, pitching, rolling, yawing and landing. Different gestures correspond to different control instructions; for example, naturally raising a hand indicates starting the unmanned aerial vehicle, and waving up, down, left or right indicates a control instruction for adjusting the flight direction of the unmanned aerial vehicle, which is not described in detail here.
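The gesture-to-instruction correspondence described above can be sketched as a simple lookup; the gesture names and instruction codes are illustrative assumptions, since the disclosure does not fix an encoding:

```python
# Hypothetical mapping from recognized gestures to flight control instructions.
GESTURE_COMMANDS = {
    "raise_hand": "START",     # naturally raising a hand starts the UAV
    "wave_up": "ASCEND",
    "wave_down": "DESCEND",
    "wave_left": "YAW_LEFT",
    "wave_right": "YAW_RIGHT",
    "grasp": "HOVER",
}

def control_instruction(gesture):
    """Map a recognized gesture to a flight control instruction;
    unrecognized gestures produce no operation."""
    return GESTURE_COMMANDS.get(gesture, "NO_OP")
```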
In step S12, when the flight attitude and shooting mode of the unmanned aerial vehicle are adjusted according to the control instruction to track and shoot the target, the target may be different from the user. In this case one RGBD camera may perform the shooting, with the user and the target in the same field of view. Alternatively, the unmanned aerial vehicle may include at least two RGBD cameras, one RGBD camera for obtaining gestures and another RGBD camera for shooting the target.
In step S12, the flight attitudes and shooting modes of multiple unmanned aerial vehicles may also be adjusted simultaneously according to the control instruction to track and shoot the target. In this case the user can control multiple unmanned aerial vehicles by gestures to shoot the target simultaneously, or control only one of the unmanned aerial vehicles to shoot the target at a time. Specifically, one or more of the unmanned aerial vehicles can be activated by a first gesture, and the activated unmanned aerial vehicle(s) are then controlled by a second gesture. For example, waving with the palm facing an unmanned aerial vehicle can be applied as the first gesture to activate that unmanned aerial vehicle for further control, and a fist-clenching pull-down gesture after waving can be applied to cancel the activation of that unmanned aerial vehicle. In step S12, all unmanned aerial vehicles can also be activated by a third gesture, after which outputting a single second gesture synchronously controls all activated unmanned aerial vehicles. The third gesture is preferably clenching the fist with the hand held high and turning 360 degrees. Of course, in other embodiments of the present invention other gestures can also be applied to activate or deactivate unmanned aerial vehicles, which is not restricted here.
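The activation protocol above (a first gesture activates one unmanned aerial vehicle, a third gesture activates all of them, and a second gesture commands only the active set) can be sketched as follows; the gesture names and drone identifiers are illustrative assumptions:

```python
class Fleet:
    """Minimal sketch of multi-UAV activation state under gesture control."""

    def __init__(self, drone_ids):
        self.drones = set(drone_ids)
        self.active = set()

    def handle(self, gesture, target=None):
        """Update the active set or dispatch a command to every active UAV.

        Returns a dict {drone_id: gesture} for second gestures, {} otherwise.
        """
        if gesture == "palm_wave" and target in self.drones:
            self.active.add(target)            # first gesture: activate one UAV
        elif gesture == "fist_pull_down" and target in self.active:
            self.active.discard(target)        # cancel that UAV's activation
        elif gesture == "fist_360":
            self.active = set(self.drones)     # third gesture: activate all
        else:
            # any other gesture is a second gesture for the active set
            return {d: gesture for d in sorted(self.active)}
        return {}
```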
In the embodiment of the present invention, the gesture of the user is obtained directly from the RGBD image captured by the RGBD camera to adjust the flight attitude and/or shooting mode of the unmanned aerial vehicle. This can reduce the transmission of remote controller control instructions, thereby reducing bandwidth occupation and improving user experience.
In an embodiment of the present invention, the voice input by the target is also received, and the control instruction is output according to the voice, so as to adjust the flight attitude of the unmanned aerial vehicle according to the control instruction. Specifically, face recognition and voiceprint recognition can be performed by the remote control unit. For face recognition, face information is saved in a face database in advance (for example, a face image is detected by an infrared signal and physiological features such as the interorbital distance and eye length are retained); during acquisition, the face data collected by the infrared signal is compared with the data in the database. If face recognition succeeds, it is further determined whether the received voice has voice-control authority, the authority corresponding to the voice is determined, and speech recognition is performed. The remote control unit then judges, according to the result of face recognition, whether to accept the voice.
Each person authorized to send voice control commands uploads a section of training voice, from which a voiceprint database is obtained. When a voiceprint comparison is performed, the person issuing the voice instruction speaks, and the voice instruction is compared against the voiceprint database. The identity information corresponding to the voiceprint and face information is looked up in the voiceprint database and the face database, thereby confirming the person's authority. The remote control unit further sends the voice instruction to the unmanned aerial vehicle for security verification of the voice instruction, and generates a control instruction according to the voice instruction after the verification passes. The operation time required by the instruction corresponding to the code of the control instruction is looked up, and this operation time is appended after the voice instruction (which is actually a code). The flight attitude of the unmanned aerial vehicle 10, such as the flight speed, flight height, flight trajectory and the distance to surrounding obstacles, is thus controlled according to the control instruction, which can improve the user experience.
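The authorization flow above (face check, then voiceprint check, then appending the looked-up operation time to the instruction code) can be sketched as follows; the database contents, identifiers, and instruction codes are illustrative assumptions:

```python
# Hypothetical authorization stores and per-instruction operation times.
FACE_DB = {"alice"}
VOICEPRINT_DB = {"alice": "vp-alice"}
OPERATION_TIME = {"ASCEND": 3, "LAND": 5}  # seconds per instruction code

def build_control_instruction(face_id, voiceprint, code):
    """Return 'CODE:seconds' when both identity checks pass, else None."""
    if face_id not in FACE_DB:
        return None                                  # face recognition failed
    if VOICEPRINT_DB.get(face_id) != voiceprint:
        return None                                  # voiceprint mismatch: no authority
    seconds = OPERATION_TIME.get(code)
    if seconds is None:
        return None                                  # unknown instruction code
    return f"{code}:{seconds}"                       # operation time appended
```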
In an embodiment of the present invention, the shooting mode of the RGBD camera can also be selected according to the control instruction, where the shooting mode includes at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, or shooting an RGBD image sequence of the target.
In summary, in the present invention RGBD image information is obtained in real time by the RGBD camera during flight of the unmanned aerial vehicle, the RGBD image information including R, G, B pixel information and the corresponding depth information; the processor processes the R, G, B pixel information and the corresponding depth information in real time, identifies the gesture of the user, and outputs a corresponding control instruction according to the gesture of the user; and the flight controller adjusts the flight attitude and/or shooting mode of the unmanned aerial vehicle according to the control instruction. This can reduce the transmission of remote controller control instructions, thereby reducing bandwidth occupation and improving user experience.
The foregoing is only an embodiment of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (12)

1. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle comprises an RGBD camera, a flight controller and a processor, the processor being connected to the RGBD camera and the flight controller, wherein:
the RGBD camera is configured to obtain an RGBD image of a target in real time during flight of the unmanned aerial vehicle, each pixel in the RGBD image comprising R, G, B pixel information and corresponding depth information;
the processor is configured to process the R, G, B pixel information and the corresponding depth information in real time, identify a gesture of a user, and output a corresponding control instruction according to the gesture of the user;
the flight controller is configured to adjust a flight attitude and/or a shooting mode of the unmanned aerial vehicle according to the control instruction.
2. The unmanned aerial vehicle according to claim 1, characterized in that the RGBD camera is further configured to obtain an RGBD image sequence of the target in real time, and the processor identifies the gesture of the user according to the RGBD image sequence and outputs the control instruction according to the gesture.
3. The unmanned aerial vehicle according to claim 1, characterized in that the processor further selects the shooting mode of the RGBD camera according to the control instruction, wherein the shooting mode comprises at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, or shooting an RGBD image sequence of the target.
4. The unmanned aerial vehicle according to claim 1, characterized in that the gestures of the user include but are not limited to grasping, naturally raising a hand, pushing forward, and waving up, down, left and right.
5. The unmanned aerial vehicle according to claim 1, characterized in that the processor is further configured to: receive a voice input by the target, and output the control instruction according to the voice.
6. An unmanned aerial vehicle control method, characterized in that the method comprises:
obtaining an RGBD image of a target in real time during flight of the unmanned aerial vehicle, each pixel in the RGBD image comprising R, G, B pixel information and corresponding depth information;
processing the R, G, B pixel information and the corresponding depth information in real time, identifying a gesture of a user, and outputting a corresponding control instruction according to the gesture of the user;
adjusting a flight attitude and/or a shooting mode of the unmanned aerial vehicle according to the control instruction.
7. The method according to claim 6, characterized in that the step of obtaining the RGBD image of the target in real time during flight of the unmanned aerial vehicle comprises: obtaining an RGBD image sequence of the target in real time;
and the step of processing the R, G, B pixel information in real time, identifying the gesture of the user, and outputting the corresponding control instruction according to the gesture of the user comprises: identifying the gesture of the user according to the RGBD image sequence, and outputting the control instruction according to the gesture.
8. The method according to claim 6, characterized in that the method further comprises: selecting the shooting mode of the RGBD camera according to the control instruction, wherein the shooting mode comprises at least one of shooting a single RGBD image of the target, shooting multi-angle RGBD images of the target, or shooting an RGBD image sequence of the target.
9. The method according to claim 6, characterized in that the step of adjusting the flight attitude and/or shooting mode of the unmanned aerial vehicle according to the control instruction comprises:
adjusting the flight attitudes and shooting modes of a plurality of the unmanned aerial vehicles simultaneously according to the control instruction.
10. The method according to claim 9, characterized in that the step of adjusting the flight attitude and/or shooting mode of the unmanned aerial vehicle according to the control instruction comprises:
activating at least one of the plurality of unmanned aerial vehicles according to the control instruction to further adjust its flight attitude and shooting mode.
11. The method according to claim 6, characterized in that the gestures of the user include but are not limited to grasping, naturally raising a hand, pushing forward, and waving up, down, left and right.
12. The method according to claim 6, characterized in that the step of processing the R, G, B pixel information in real time, identifying the gesture of the user, and outputting the corresponding control instruction according to the gesture of the user comprises: receiving a voice input by the target, and outputting the control instruction according to the voice.
CN201610200004.3A 2016-03-31 2016-03-31 Unmanned plane and control method of unmanned plane Pending CN105892474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610200004.3A CN105892474A (en) 2016-03-31 2016-03-31 Unmanned plane and control method of unmanned plane

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610200004.3A CN105892474A (en) 2016-03-31 2016-03-31 Unmanned plane and control method of unmanned plane

Publications (1)

Publication Number Publication Date
CN105892474A true CN105892474A (en) 2016-08-24

Family

ID=57013331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610200004.3A Pending CN105892474A (en) 2016-03-31 2016-03-31 Unmanned plane and control method of unmanned plane

Country Status (1)

Country Link
CN (1) CN105892474A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779347A (en) * 2012-06-14 2012-11-14 清华大学 Method and device for tracking and locating target for aircraft
US20130136300A1 (en) * 2011-11-29 2013-05-30 Qualcomm Incorporated Tracking Three-Dimensional Objects
TW201339903A (en) * 2012-03-26 2013-10-01 Hon Hai Prec Ind Co Ltd System and method for remotely controlling AUV
CN103926933A (en) * 2014-03-29 2014-07-16 北京航空航天大学 Indoor simultaneous locating and environment modeling method for unmanned aerial vehicle
CN104808799A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof
CN105120257A (en) * 2015-08-18 2015-12-02 宁波盈芯信息科技有限公司 Vertical depth sensing device based on structured light coding
CN105138126A (en) * 2015-08-26 2015-12-09 小米科技有限责任公司 Unmanned aerial vehicle shooting control method and device and electronic device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
万刚 等: "《无人机测绘技术及应用》", 31 December 2015, 北京:测绘出版社 *
上海海事大学 等: "《中国物流科技发展报告 2014-2015》", 31 October 2015, 上海:上海浦江教育出版社 *
程多祥: "《无人机移动测量数据快速获取与处理》", 30 September 2015, 北京:航空工业出版社 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107783553A (en) * 2016-08-26 2018-03-09 北京臻迪机器人有限公司 Control the method, apparatus and system of unmanned plane
CN108351651A (en) * 2016-09-27 2018-07-31 深圳市大疆创新科技有限公司 A kind of control method, device and aircraft based on image
CN108351651B (en) * 2016-09-27 2024-06-21 深圳市大疆创新科技有限公司 Control method and device based on images and aircraft
WO2018058264A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Video-based control method, device, and flying apparatus
CN107438804A (en) * 2016-10-19 2017-12-05 深圳市大疆创新科技有限公司 A kind of Wearable and UAS for being used to control unmanned plane
CN110045745A (en) * 2016-10-19 2019-07-23 深圳市大疆创新科技有限公司 It is a kind of for controlling the wearable device and UAV system of unmanned plane
CN106682091A (en) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle
WO2018098904A1 (en) * 2016-11-29 2018-06-07 深圳市元征科技股份有限公司 Unmanned aerial vehicle control method and device
CN106598431A (en) * 2016-11-30 2017-04-26 中国航空工业集团公司沈阳飞机设计研究所 Device for quickly guiding instruction transmission of unmanned aerial vehicle based on manned aerial vehicle
US11294369B2 (en) 2016-11-30 2022-04-05 Samsung Electronics Co., Ltd. Unmanned aerial vehicle and method for controlling flight of the same
CN108121356A (en) * 2016-11-30 2018-06-05 三星电子株式会社 Unmanned vehicle and the method that it is controlled to fly
CN106483978A (en) * 2016-12-09 2017-03-08 佛山科学技术学院 A kind of unmanned machine operation voice guide devices and methods therefor
CN110687902A (en) * 2016-12-21 2020-01-14 杭州零零科技有限公司 System and method for controller-free user drone interaction
CN110300938A (en) * 2016-12-21 2019-10-01 杭州零零科技有限公司 System and method for exempting from the interaction of controller formula user's unmanned plane
CN110687902B (en) * 2016-12-21 2020-10-20 杭州零零科技有限公司 System and method for controller-free user drone interaction
CN108227735B (en) * 2016-12-22 2021-08-10 Tcl科技集团股份有限公司 Method, computer readable medium and system for self-stabilization based on visual flight
CN108227735A (en) * 2016-12-22 2018-06-29 Tcl集团股份有限公司 Method, computer-readable medium and the system of view-based access control model flight self-stabilization
CN106657973A (en) * 2017-01-21 2017-05-10 上海量明科技发展有限公司 Method and system for displaying image
CN106774947A (en) * 2017-02-08 2017-05-31 亿航智能设备(广州)有限公司 A kind of aircraft and its control method
WO2018191989A1 (en) * 2017-04-22 2018-10-25 深圳市大疆灵眸科技有限公司 Capture control method and apparatus
CN107272724A (en) * 2017-08-04 2017-10-20 南京华捷艾米软件科技有限公司 A kind of body-sensing flight instruments and its control method
CN108153325A (en) * 2017-11-13 2018-06-12 上海顺砾智能科技有限公司 The control method and device of Intelligent unattended machine
WO2019144295A1 (en) * 2018-01-23 2019-08-01 深圳市大疆创新科技有限公司 Flight control method and device, and aircraft, system and storage medium
CN109196438A (en) * 2018-01-23 2019-01-11 深圳市大疆创新科技有限公司 A kind of flight control method, equipment, aircraft, system and storage medium
CN108460354A (en) * 2018-03-09 2018-08-28 深圳臻迪信息技术有限公司 Unmanned aerial vehicle (UAV) control method, apparatus, unmanned plane and system
CN108460354B (en) * 2018-03-09 2020-12-29 深圳臻迪信息技术有限公司 Unmanned aerial vehicle control method and device, unmanned aerial vehicle and system
CN109270954A (en) * 2018-10-30 2019-01-25 西南科技大学 A kind of unmanned plane interactive system and its control method based on gesture recognition
CN109492578A (en) * 2018-11-08 2019-03-19 北京华捷艾米科技有限公司 A kind of gesture remote control method and device based on depth camera
CN110347260B (en) * 2019-07-11 2023-04-14 歌尔科技有限公司 Augmented reality device, control method thereof and computer-readable storage medium
CN110347260A (en) * 2019-07-11 2019-10-18 歌尔科技有限公司 A kind of augmented reality device and its control method, computer readable storage medium
CN110597277A (en) * 2019-08-20 2019-12-20 杜书豪 Intelligent flight equipment and man-machine interaction method thereof
CN111031202A (en) * 2019-11-25 2020-04-17 长春理工大学 Intelligent photographing unmanned aerial vehicle based on four rotors, intelligent photographing system and method
CN111123965A (en) * 2019-12-24 2020-05-08 中国航空工业集团公司沈阳飞机设计研究所 Somatosensory operation method and operation platform for aircraft control
CN112783154A (en) * 2020-12-24 2021-05-11 中国航空工业集团公司西安航空计算技术研究所 Multi-intelligent task processing method and system

Similar Documents

Publication Publication Date Title
CN105892474A (en) Unmanned plane and control method of unmanned plane
CN105847684A (en) Unmanned aerial vehicle
CN105912980B (en) Unmanned plane and UAV system
CN105786016B (en) The processing method of unmanned plane and RGBD image
CN205453893U (en) Unmanned aerial vehicle
CN205693767U (en) Uas
US11302026B2 (en) Attitude recognition method and device, and movable platform
CN108303994B (en) Group control interaction method for unmanned aerial vehicle
CN105930767B (en) A kind of action identification method based on human skeleton
Natarajan et al. Hand gesture controlled drones: An open source library
CN105159452B (en) A kind of control method and system based on human face modeling
CN105825268A (en) Method and system for data processing for robot action expression learning
CN105717933A (en) Unmanned aerial vehicle and unmanned aerial vehicle anti-collision method
CN113158833B (en) Unmanned vehicle control command method based on human body posture
MohaimenianPour et al. Hands and faces, fast: mono-camera user detection robust enough to directly control a UAV in flight
Fu et al. Vision-based obstacle avoidance for flapping-wing aerial vehicles
CN108089695A (en) A kind of method and apparatus for controlling movable equipment
CN105930766A (en) Unmanned plane
CN107363834A (en) A kind of mechanical arm grasping means based on cognitive map
Silva et al. Landing area recognition by image applied to an autonomous control landing of VTOL aircraft
Zhou et al. Real-time object detection and pose estimation using stereo vision. An application for a Quadrotor MAV
Proctor et al. Vision‐only control and guidance for aircraft
Piponidis et al. Towards a Fully Autonomous UAV Controller for Moving Platform Detection and Landing
Zhang et al. Unmanned aerial vehicle perception system following visual cognition invariance mechanism
CN116009583A (en) Pure vision-based distributed unmanned aerial vehicle cooperative motion control method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160824

RJ01 Rejection of invention patent application after publication