CN105137973B - Method for an intelligent robot to avoid humans in a human-robot collaboration scene - Google Patents
Method for an intelligent robot to avoid humans in a human-robot collaboration scene
- Publication number
- CN105137973B CN105137973B CN201510518563.4A CN201510518563A CN105137973B CN 105137973 B CN105137973 B CN 105137973B CN 201510518563 A CN201510518563 A CN 201510518563A CN 105137973 B CN105137973 B CN 105137973B
- Authority
- CN
- China
- Prior art keywords
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Manipulator (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method by which an intelligent robot avoids humans in a human-robot collaboration scene, comprising the following steps: 1) building the robot cylinder model of the virtual scene from the robot's D-H parameters; 2) acquiring the RGB image of the real scene and the operator's skeleton data in real time through a Kinect, and building the operator cylinder model of the virtual scene; 3) calibrating the correspondence of the robot and the operator between the real scene and the virtual scene; 4) performing real-time collision detection between the robot model and the operator model. With the method of the invention, the operator can move freely in the robot's workspace, and the robot actively avoids the operator according to the operator information acquired by the Kinect.
Description
Technical field
The present invention relates to the technical field of robot motion, and in particular to a method by which an intelligent robot avoids humans in a human-robot collaboration scene.
Background technology
Intelligent robots are increasingly intersecting and merging with other fields, generating many scenes that require humans and robots to cooperate. When intelligent robots participate in human working environments, they can effectively relieve human labor while guaranteeing operating efficiency, and can in special cases replace people in completing high-risk tasks. Against the background of close human-robot cooperation, how to guarantee human safety is of paramount importance. This invention proposes a method for robot modeling and obstacle avoidance in a human-robot collaboration scene: the robot cylinder model is built from the robot's D-H parameters, a Kinect sensor acquires the operator's bone positions, from which the operator cylinder model is built, and the robot intelligently recognizes and avoids the human body according to the collision detection result between the robot and the operator.
When building a virtual robot scene, the common practice is to use modeling tools such as 3ds Max to build a virtual robot model that corresponds exactly to the robot of the real scene. However, this approach lacks generality and its modeling cost is relatively high; it is therefore preferable to build the robot model from the robot's intrinsic D-H parameters.
In a human-robot collaboration scene, the robot must avoid obstacles according to the position and motion information of the human body, so as to prevent collisions. Traditional robot collision-avoidance research mainly addresses collisions with objects; unlike environmental objects, a person has his own structural features, and body posture and motion are unpredictable, so the robot needs to recognize the human body in real time.
Before the appearance of the Kinect, human-body information acquisition based on multiple vision cameras was relatively popular, but that approach requires multi-view images and its cost is relatively high. In marker-based multi-camera human recognition, the person must wear specific clothing or equipment and is constrained while moving, which limits the application of the technique. Markerless human-information acquisition requires matching human images and three-dimensional information against prior knowledge such as objective functions, and needs a sufficient sample size; otherwise recognition accuracy is low and real-time performance is hard to guarantee.
The content of the invention
It is an object of the invention to overcome the deficiencies of the prior art and provide a method by which an intelligent robot avoids humans in a human-robot collaboration scene, such that the operator can move freely in the robot's workspace while the robot actively avoids the operator according to the operator information acquired by the Kinect.
To achieve the above object, the technical scheme provided by the present invention is a method by which an intelligent robot avoids humans in a human-robot collaboration scene, comprising the following steps:
1) Build the robot cylinder model of the virtual scene from the robot's D-H parameters, as follows:
Assume the robot is made up of a series of links and joints of arbitrary form; the D-H parameters represent the relation between the robot's joints and links. To model the robot, each joint is modeled as a point and each link as a cylinder. The forward kinematics of the robot are involved: from the initial angle of each joint, the transformation of each joint's coordinate frame relative to the base frame is solved. A right-handed coordinate frame is defined for each joint; the homogeneous transformation between the frame of one joint and the frame of the next joint is called an A matrix. A1 represents the position and posture of the first joint relative to the base frame, A2 represents the position and posture of the second joint relative to the first joint, and the position and posture of the second joint relative to the base frame are given by the matrix product in equation (1):
T2=A1A2 (1)
By analogy, the position and posture of the n-th joint relative to the base frame are given by:

$$T_n = A_1 A_2 \cdots A_n = \begin{pmatrix} q_n^{3\times3} & p_n^{3\times1} \\ 0 & 1 \end{pmatrix} \qquad (2)$$

In the formula, $q_n^{3\times3}$ represents the posture of the n-th joint and $p_n^{3\times1}$ the position of the n-th joint relative to the base frame. Each $A_i$ can be expressed from the D-H parameters as:

$$A_i = \begin{pmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
θi, di, ai, αi are the D-H parameters of robot joint i.
When building the robot cylinder model of the virtual scene, the robot base frame is taken as the model coordinate frame, and the position $p_n^{3\times1}$ of each joint relative to the base frame is solved. The link between adjacent joints is modeled as a cylinder whose top and bottom circle centers are the positions of the two joint points; the cylinder radius is adjusted according to the actual situation. The 6-DOF robot cylinder model is built in this way.
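For illustration, the following is a minimal sketch (not code from the patent) of how equations (1)-(3) can be chained to obtain the joint positions used as cylinder endpoints; the D-H table values are hypothetical placeholders for a generic 6-DOF arm, and the real values are the intrinsic D-H parameters of the actual robot:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<std::array<double, 4>, 4>;

// A matrix of equation (3) for one joint, built from its D-H parameters.
Mat4 dhMatrix(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return {{{ct, -st * ca,  st * sa, a * ct},
             {st,  ct * ca, -ct * sa, a * st},
             {0.0,      sa,       ca,      d},
             {0.0,     0.0,      0.0,    1.0}}};
}

Mat4 mul(const Mat4& x, const Mat4& y) {
    Mat4 r{};  // zero-initialized accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += x[i][k] * y[k][j];
    return r;
}

int main() {
    const double PI = 3.141592653589793;
    // Hypothetical D-H table {theta, d, a, alpha} for a generic 6-DOF arm.
    const double dh[6][4] = {
        {0.0, 0.40, 0.025, PI / 2}, {PI / 2, 0.0, 0.455, 0.0},
        {0.0, 0.0, 0.035, PI / 2},  {0.0, 0.42, 0.0, -PI / 2},
        {0.0, 0.0, 0.0, PI / 2},    {0.0, 0.08, 0.0, 0.0}};

    // T_n = A_1 A_2 ... A_n (equation (2)); the last column of each T_n is
    // the joint position p_n, used as a cylinder endpoint of the model.
    Mat4 T = {{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}};
    for (int n = 0; n < 6; ++n) {
        T = mul(T, dhMatrix(dh[n][0], dh[n][1], dh[n][2], dh[n][3]));
        std::printf("joint %d: p = (%.3f, %.3f, %.3f)\n",
                    n + 1, T[0][3], T[1][3], T[2][3]);
    }
}
```

Consecutive joint positions give the top and bottom circle centers of each link cylinder.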
2) Acquire the RGB image of the real scene and the operator's skeleton data in real time through the Kinect, and build the operator cylinder model of the virtual scene, as follows:
When the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the operator's skeleton data in real time, realizing tracking and positioning of the operator. The Kinect has three cameras: one RGB color camera for collecting color images and two infrared cameras used for depth-image acquisition.
To acquire the real-scene RGB image, the Kinect is placed at some position in the environment and initialized with the color-image flag NUI_INITIALIZE_FLAG_USES_COLOR; the acquired color image frame data are then drawn with OpenCV.
To acquire the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON. When a person is standing, the Kinect can obtain the positions of 20 joints to represent the person's skeleton. Fifteen joints are extracted to build the operator cylinder model in the virtual scene; ordered from top to bottom and from left to right they are: 1. head; 2. shoulder center; 3. right shoulder; 4. right elbow; 5. right hand; 6. left shoulder; 7. left elbow; 8. left hand; 9. hip center; 10. right hip; 11. right knee; 12. right foot; 13. left hip; 14. left knee; 15. left foot. The positions of these joints are all relative to the Kinect coordinate frame.
When building the operator cylinder model, the Kinect coordinate frame is taken as the model coordinate frame, and the positions of the operator's 15 joints are obtained through the Kinect's processing of the depth image. Adjacent joints in the human skeleton are modeled as cylinders whose top and bottom circle centers are the positions of the two joint points; the cylinder radius is adjusted according to the actual situation.
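A minimal sketch of this acquisition step, assuming the Kinect for Windows SDK v1 API named above (NUI initialization flags and skeleton frame structures; error handling omitted):

```cpp
#include <windows.h>
#include <NuiApi.h>  // Kinect for Windows SDK v1
#include <cstdio>

int main() {
    // Open both streams named in the text: color image and skeleton.
    NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                  NUI_INITIALIZE_FLAG_USES_SKELETON);
    NuiSkeletonTrackingEnable(nullptr, 0);

    NUI_SKELETON_FRAME frame = {0};
    if (SUCCEEDED(NuiSkeletonGetNextFrame(100, &frame))) {
        for (const NUI_SKELETON_DATA& s : frame.SkeletonData) {
            if (s.eTrackingState != NUI_SKELETON_TRACKED) continue;
            // The 15 joints used for the operator cylinder model; each
            // Vector4 is a position in the Kinect coordinate frame.
            const NUI_SKELETON_POSITION_INDEX used[15] = {
                NUI_SKELETON_POSITION_HEAD, NUI_SKELETON_POSITION_SHOULDER_CENTER,
                NUI_SKELETON_POSITION_SHOULDER_RIGHT, NUI_SKELETON_POSITION_ELBOW_RIGHT,
                NUI_SKELETON_POSITION_HAND_RIGHT, NUI_SKELETON_POSITION_SHOULDER_LEFT,
                NUI_SKELETON_POSITION_ELBOW_LEFT, NUI_SKELETON_POSITION_HAND_LEFT,
                NUI_SKELETON_POSITION_HIP_CENTER, NUI_SKELETON_POSITION_HIP_RIGHT,
                NUI_SKELETON_POSITION_KNEE_RIGHT, NUI_SKELETON_POSITION_FOOT_RIGHT,
                NUI_SKELETON_POSITION_HIP_LEFT, NUI_SKELETON_POSITION_KNEE_LEFT,
                NUI_SKELETON_POSITION_FOOT_LEFT};
            for (NUI_SKELETON_POSITION_INDEX j : used) {
                Vector4 p = s.SkeletonPositions[j];
                // p.x, p.y, p.z feed the cylinder endpoints of the model.
                std::printf("(%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
            }
        }
    }
    NuiShutdown();
}
```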
3) Calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
In the above two steps, the robot in the virtual scene is modeled in the robot base coordinate frame, and the operator in the virtual scene is modeled in the Kinect coordinate frame. To put the virtual scene and the real scene in correspondence, a coordinate frame is chosen in the real scene, called the world coordinate frame E1; the robot base frame is called frame E2, and the Kinect frame is called frame E3.
The relation between the robot base frame E2 and the world frame E1 is expressed with a rotation matrix R and a translation matrix T. If the coordinates of some robot joint P are $(X_p^{E2}, Y_p^{E2}, Z_p^{E2})$ in the robot base frame and $(X_p^{E1}, Y_p^{E1}, Z_p^{E1})$ in the world frame, then the relation between them is:

$$\begin{pmatrix} X_p^{E1} \\ Y_p^{E1} \\ Z_p^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E2}R_{E1} & {}^{E2}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} = M_2 \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} \qquad (4)$$

In the formula, ${}^{E2}R_{E1}$ is a 3×3 matrix representing the attitude change of the robot base frame relative to the world frame; ${}^{E2}T_{E1}$ is a 3×1 matrix representing the position change of the robot base frame relative to the world frame; $M_2$ is a 4×4 matrix representing the pose transformation of the robot base frame relative to the world frame.
Similarly, the relation between the Kinect frame E3 and the world frame E1 is also expressed with a rotation matrix R and a translation matrix T. If the coordinates of some operator joint P′ are $(X_{p'}^{E3}, Y_{p'}^{E3}, Z_{p'}^{E3})$ in the Kinect frame and $(X_{p'}^{E1}, Y_{p'}^{E1}, Z_{p'}^{E1})$ in the world frame, then the relation between them is:

$$\begin{pmatrix} X_{p'}^{E1} \\ Y_{p'}^{E1} \\ Z_{p'}^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E3}R_{E1} & {}^{E3}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} = M_3 \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} \qquad (5)$$
After the robot base frame and the Kinect frame have been transformed into the world frame, the next task is to determine the mapping relation between the real scene and the virtual scene and to measure the error between them. Let Pv be a point in the virtual scene and Pr the corresponding point in the real-scene RGB image; then there exists a mapping f(·) such that, for any point Pv in the virtual scene and its corresponding real-scene point Pr, the following relation holds:
Pr = f(Pv) + ε (6)
In the formula, ε is the correspondence error between the virtual scene and the real scene.
For the operator's three-dimensional skeleton data and two-dimensional color-image data obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping fperson between the virtual-scene operator and the real-scene operator: NuiTransformSkeletonToDepthImage provides the mapping f1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f2 from the two-dimensional depth image to the two-dimensional color image. The mapping from the three-dimensional skeleton data to the two-dimensional color image is therefore:
fperson = f2·f1 (7)
Thus the operator of the virtual scene and the operator of the real scene have been put in correspondence. For the robot, since the robot base frame and the Kinect frame have both been transformed into the world frame, the virtual scene containing the robot and the operator shares the same coordinate frame as the real scene, so the mapping frobot between the virtual-scene robot and the real-scene robot is the same as the mapping fperson between the virtual-scene operator and the real-scene operator:
frobot = fperson (8)
Then for the whole virtual scene and real scene, the mapping f between them is:
f = frobot = fperson = f2·f1 (9)
Combining equations (6) and (9), the error ε between the virtual scene and the real scene can be measured as:
ε = Pr − f2·f1(Pv) (10)
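As a sketch of the mapping f = f2·f1 of equations (7)-(9), assuming the two Kinect SDK v1 functions named above with their v1 header signatures (the helper name skeletonToColorPixel and the resolutions chosen are illustrative, not from the patent):

```cpp
#include <windows.h>
#include <NuiApi.h>

// Map one tracked skeleton point to color-image pixel coordinates,
// i.e. apply fperson = f2 . f1 of equation (7).
bool skeletonToColorPixel(Vector4 skel, LONG* colorX, LONG* colorY) {
    // f1: 3-D skeleton point -> 2-D depth-image pixel (+ depth value).
    LONG depthX = 0, depthY = 0;
    USHORT depthValue = 0;
    NuiTransformSkeletonToDepthImage(skel, &depthX, &depthY, &depthValue,
                                     NUI_IMAGE_RESOLUTION_320x240);
    // f2: depth-image pixel -> color-image pixel.
    HRESULT hr = NuiImageGetColorPixelCoordinatesFromDepthPixel(
        NUI_IMAGE_RESOLUTION_640x480, nullptr,
        depthX, depthY, depthValue, colorX, colorY);
    return SUCCEEDED(hr);
}
```

Comparing the pixel returned here with the measured pixel Pr of the same point realizes the error ε of equation (10).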
4) Perform real-time collision detection between the robot model and the operator model, as follows:
When the operator enters the robot's operating environment, the operator's information is unpredictable for the robot, and the modeling strategy for the operator and the robot directly influences the efficiency and precision of collision detection. Both the robot and the operator are modeled with relatively regular geometric cylinders; the concrete collision-detection scheme based on cylinder bounding boxes is as follows:
Let A, B be the top and bottom circle centers of one cylinder and C, D those of another cylinder, so that the space lines lAB and lCD are the axes of the two cylinders. Mainly consider the case where the two lines are skew: find the common perpendicular of the skew lines lAB, lCD and its feet P, Q. If P or Q does not lie within segment AB or segment CD, perform boundary detection and substitute for P or Q the endpoint nearest to the other line.
If P and Q lie within segments AB and CD, the two cylinders can only intersect side-to-side; it then suffices to compare the length of segment PQ with the sum of the two cylinder radii.
If one of P, Q lies within its segment and the other falls on an endpoint, the two cylinders can only intersect side-to-end-face; then find the generatrix of the side nearest the end face and judge whether the generatrix intersects the end face.
If P and Q both fall on segment endpoints, the two cylinders can only intersect end-face-to-end-face; it then suffices to judge whether the two discs in space intersect.
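The dominant side-to-side test can be sketched as follows. This is an illustrative implementation, not the patent's own code: clamping realizes the boundary substitution of P, Q onto the segments, the clamped axis distance is compared against the radius sum (exact for the side-side case, conservative for the end-face cases, which would branch on whether P or Q was clamped), and the axes are assumed non-degenerate (A ≠ B, C ≠ D):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance between axis segments AB and CD; on return, s and t locate the
// feet P, Q of the common perpendicular, clamped to the segments as the
// boundary-detection rule of the text requires.
double segmentDistance(Vec3 A, Vec3 B, Vec3 C, Vec3 D, double* s, double* t) {
    Vec3 u = B - A, v = D - C, w = A - C;
    double a = dot(u, u), b = dot(u, v), c = dot(v, v);
    double d = dot(u, w), e = dot(v, w);
    double denom = a * c - b * b;  // ~0 when the axes are parallel
    *s = (denom > 1e-12) ? std::clamp((b * e - c * d) / denom, 0.0, 1.0) : 0.0;
    *t = std::clamp((b * (*s) + e) / c, 0.0, 1.0);
    *s = std::clamp((b * (*t) - d) / a, 0.0, 1.0);
    Vec3 P = A + (*s) * u, Q = C + (*t) * v;
    Vec3 pq = P - Q;
    return std::sqrt(dot(pq, pq));
}

// Conservative cylinder-cylinder test: collision when the axis distance is
// no greater than the radius sum.
bool cylindersCollide(Vec3 A, Vec3 B, double r1, Vec3 C, Vec3 D, double r2) {
    double s, t;
    return segmentDistance(A, B, C, D, &s, &t) <= r1 + r2;
}
```

Here s or t landing exactly on 0 or 1 corresponds to a foot falling on a segment endpoint, which is the cue for the end-face refinements described above.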
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The method models the robot and the operator with relatively regular geometric cylinders, which gives good generality and applicability, while cylinder modeling improves the efficiency and precision of collision detection to a certain extent. The operator can move freely in the robot's workspace, and the robot actively avoids the operator according to the operator information acquired by the Kinect, meeting operating requirements.
Brief description of the drawings
Fig. 1 is the system structure diagram of the method of the invention.
Fig. 2 is the cylinder model of the 6-DOF robot.
Fig. 3 is the cylinder model of the operator.
Embodiment
The invention will be further described below with reference to a specific embodiment.
The method described in this embodiment, by which an intelligent robot avoids humans in a human-robot collaboration scene, allows the operator to move freely in the robot's workspace while the robot actively avoids the operator according to the operator information acquired by the Kinect; its system structure is shown in Fig. 1. The key of the method lies in building the virtual robot model and operator model corresponding to the real human-robot collaboration scene, and in the collision-detection method between the robot model and the operator model. It comprises the following steps:
1) Build the robot cylinder model of the virtual scene from the robot's D-H parameters, as follows:
Assume the robot is made up of a series of links and joints of arbitrary form; the D-H parameters represent the relation between the robot's joints and links. To model the robot, we model each joint as a point and each link as a cylinder. The forward kinematics of the robot are involved in the present invention: from the initial angle of each joint, the transformation of each joint's coordinate frame relative to the base frame is solved. A right-handed coordinate frame is defined for each joint; the homogeneous transformation between the frame of one joint and the frame of the next joint is called an A matrix. A1 represents the position and posture of the first joint relative to the base frame, A2 represents the position and posture of the second joint relative to the first joint, and the position and posture of the second joint relative to the base frame are given by the matrix product in equation (1):
T2=A1A2 (1)
By analogy, the position and posture of the n-th joint relative to the base frame are given by:

$$T_n = A_1 A_2 \cdots A_n = \begin{pmatrix} q_n^{3\times3} & p_n^{3\times1} \\ 0 & 1 \end{pmatrix} \qquad (2)$$

In the formula, $q_n^{3\times3}$ represents the posture of the n-th joint and $p_n^{3\times1}$ the position of the n-th joint relative to the base frame. Each $A_i$ can be expressed from the D-H parameters as:

$$A_i = \begin{pmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
θi, di, ai, αi are the D-H parameters of robot joint i.
In the present invention, the robot base frame is taken as the model coordinate frame when building the robot cylinder model of the virtual scene, and the position $p_n^{3\times1}$ of each joint relative to the base frame is solved. The link between adjacent joints is modeled as a cylinder whose top and bottom circle centers are the positions of the two joint points; the cylinder radius is adjusted according to the actual situation. The resulting 6-DOF robot cylinder model is shown in Fig. 2.
2) Acquire the RGB image of the real scene and the operator's skeleton data in real time through the Kinect, and build the operator cylinder model of the virtual scene, as follows:
When the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the operator's skeleton data in real time, realizing tracking and positioning of the operator. The Kinect has three cameras: one RGB color camera for collecting color images and two infrared cameras used for depth-image acquisition.
To acquire the real-scene RGB image, the Kinect is placed at some position in the environment and initialized with the color-image flag NUI_INITIALIZE_FLAG_USES_COLOR; the acquired color image frame data are then drawn with OpenCV, as sketched below.
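A minimal sketch of this color-stream step, assuming the Kinect SDK v1 image-stream API (NuiImageStreamOpen / NuiImageStreamGetNextFrame, 640x480 BGRA frames) and OpenCV's C++ interface:

```cpp
#include <windows.h>
#include <NuiApi.h>
#include <cstring>
#include <opencv2/opencv.hpp>

int main() {
    // Open the color stream named in the text.
    NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR);
    HANDLE colorStream = nullptr;
    NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                       0, 2, nullptr, &colorStream);

    cv::Mat bgra(480, 640, CV_8UC4);
    for (;;) {
        const NUI_IMAGE_FRAME* frame = nullptr;
        // Block up to 100 ms for the next color frame.
        if (FAILED(NuiImageStreamGetNextFrame(colorStream, 100, &frame)))
            continue;
        NUI_LOCKED_RECT rect;
        frame->pFrameTexture->LockRect(0, &rect, nullptr, 0);
        if (rect.Pitch != 0)  // copy the BGRA pixels into the OpenCV image
            std::memcpy(bgra.data, rect.pBits, 640 * 480 * 4);
        frame->pFrameTexture->UnlockRect(0);
        NuiImageStreamReleaseFrame(colorStream, frame);

        cv::imshow("Kinect color stream", bgra);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    NuiShutdown();
}
```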
To acquire the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON. When a person is standing, the Kinect can obtain the positions of 20 joints to represent the person's skeleton. In the present invention, 15 joints are extracted to build the operator cylinder model in the virtual scene; ordered from top to bottom and from left to right they are: 1. head; 2. shoulder center; 3. right shoulder; 4. right elbow; 5. right hand; 6. left shoulder; 7. left elbow; 8. left hand; 9. hip center; 10. right hip; 11. right knee; 12. right foot; 13. left hip; 14. left knee; 15. left foot. The positions of these joints are all relative to the Kinect coordinate frame.
In the present invention, the Kinect coordinate frame is taken as the model coordinate frame when building the operator cylinder model, and the positions of the operator's 15 joints are obtained through the Kinect's processing of the depth image. Adjacent joints in the human skeleton are modeled as cylinders whose top and bottom circle centers are the positions of the two joint points; the cylinder radius is adjusted according to the actual situation. The operator cylinder model built in this way is shown in Fig. 3.
3) Calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
In the above two steps, the robot in the virtual scene is modeled in the robot base coordinate frame, and the operator in the virtual scene is modeled in the Kinect coordinate frame. To put the virtual scene and the real scene in correspondence, a coordinate frame is chosen in the real scene, called the world coordinate frame E1; the robot base frame is called frame E2, and the Kinect frame is called frame E3.
The relation between the robot base frame E2 and the world frame E1 can be expressed with a rotation matrix R and a translation matrix T. For example, if the coordinates of some robot joint P are $(X_p^{E2}, Y_p^{E2}, Z_p^{E2})$ in the robot base frame and $(X_p^{E1}, Y_p^{E1}, Z_p^{E1})$ in the world frame, then the relation between them is:

$$\begin{pmatrix} X_p^{E1} \\ Y_p^{E1} \\ Z_p^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E2}R_{E1} & {}^{E2}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} = M_2 \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} \qquad (4)$$

In the formula, ${}^{E2}R_{E1}$ is a 3×3 matrix representing the attitude change of the robot base frame relative to the world frame; ${}^{E2}T_{E1}$ is a 3×1 matrix representing the position change of the robot base frame relative to the world frame; $M_2$ is a 4×4 matrix representing the pose transformation of the robot base frame relative to the world frame.
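As a worked illustration of equation (4), a point in the robot base frame E2 is sent to the world frame E1 by one homogeneous multiply; the calibration matrix M2 below is a hypothetical example (base offset 1.2 m along world X, rotated 90° about Z), not a calibrated value:

```cpp
#include <cstdio>

// Apply the 4x4 pose matrix M2 of equation (4) to a joint position given
// in the robot base frame E2, yielding its position in the world frame E1.
void toWorld(const double M2[4][4], const double pE2[3], double pE1[3]) {
    for (int i = 0; i < 3; ++i)
        pE1[i] = M2[i][0] * pE2[0] + M2[i][1] * pE2[1] +
                 M2[i][2] * pE2[2] + M2[i][3];  // homogeneous coordinate = 1
}

int main() {
    const double M2[4][4] = {{0, -1, 0, 1.2},
                             {1,  0, 0, 0.0},
                             {0,  0, 1, 0.0},
                             {0,  0, 0, 1.0}};
    const double pE2[3] = {0.5, 0.0, 0.3};  // joint position in base frame
    double pE1[3];
    toWorld(M2, pE2, pE1);
    std::printf("world frame: (%.2f, %.2f, %.2f)\n", pE1[0], pE1[1], pE1[2]);
}
```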
Similarly, the relation between the Kinect frame E3 and the world frame E1 can also be expressed with a rotation matrix R and a translation matrix T. For example, if the coordinates of some operator joint P′ are $(X_{p'}^{E3}, Y_{p'}^{E3}, Z_{p'}^{E3})$ in the Kinect frame and $(X_{p'}^{E1}, Y_{p'}^{E1}, Z_{p'}^{E1})$ in the world frame, then the relation between them is:

$$\begin{pmatrix} X_{p'}^{E1} \\ Y_{p'}^{E1} \\ Z_{p'}^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E3}R_{E1} & {}^{E3}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} = M_3 \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} \qquad (5)$$
After the robot base frame and the Kinect frame have been transformed into the world frame, the next task is to determine the mapping relation between the real scene and the virtual scene and to measure the error between them. For example, let Pv be a point in the virtual scene and Pr the corresponding point in the real-scene RGB image; then there exists a mapping f(·) such that, for any point Pv in the virtual scene and its corresponding real-scene point Pr, the following relation holds:
Pr = f(Pv) + ε (6)
In the formula, ε is the correspondence error between the virtual scene and the real scene.
For the operator's three-dimensional skeleton data and two-dimensional color-image data obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping fperson between the virtual-scene operator and the real-scene operator: NuiTransformSkeletonToDepthImage provides the mapping f1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f2 from the two-dimensional depth image to the two-dimensional color image. The mapping from the three-dimensional skeleton data to the two-dimensional color image is therefore:
fperson = f2·f1 (7)
Thus the operator of the virtual scene and the operator of the real scene have been put in correspondence. For the robot, since the robot base frame and the Kinect frame have both been transformed into the world frame, the virtual scene containing the robot and the operator shares the same coordinate frame as the real scene, so the mapping frobot between the virtual-scene robot and the real-scene robot is the same as the mapping fperson between the virtual-scene operator and the real-scene operator:
frobot = fperson (8)
Then for the whole virtual scene and real scene, the mapping f between them is:
f = frobot = fperson = f2·f1 (9)
Combining equations (6) and (9), the error ε between the virtual scene and the real scene can be measured as:
ε = Pr − f2·f1(Pv) (10)
4) Perform real-time collision detection between the robot model and the operator model, as follows:
When the operator enters the robot's operating environment, the operator's information is unpredictable for the robot, and the modeling strategy for the operator and the robot directly influences the efficiency and precision of collision detection. In the present invention, both the robot and the operator are modeled with relatively regular geometric cylinders; the concrete collision-detection scheme based on cylinder bounding boxes is as follows:
Let A, B be the top and bottom circle centers of one cylinder and C, D those of another cylinder, so that the space lines lAB and lCD are the axes of the two cylinders. Mainly consider the case where the two lines are skew: find the common perpendicular of the skew lines lAB, lCD and its feet P, Q. If P or Q does not lie within segment AB or segment CD, perform boundary detection and substitute for P or Q the endpoint nearest to the other line.
If P and Q lie within segments AB and CD, the two cylinders can only intersect side-to-side; it then suffices to compare the length of segment PQ with the sum of the two cylinder radii.
If one of P, Q lies within its segment and the other falls on an endpoint, the two cylinders can only intersect side-to-end-face; then find the generatrix of the side nearest the end face and judge whether the generatrix intersects the end face.
If P and Q both fall on segment endpoints, the two cylinders can only intersect end-face-to-end-face; it then suffices to judge whether the two discs in space intersect.
The embodiment described above is only a preferred embodiment of the invention, and the scope of implementation of the invention is not limited thereto; any change made according to the shape and principle of the present invention should fall within the protection scope of the present invention.
Claims (1)
1. A method by which an intelligent robot avoids humans in a human-robot collaboration scene, characterised in that it comprises the following steps:
1) build the robot cylinder model of the virtual scene from the robot's D-H parameters, as follows:
assume the robot is made up of a series of links and joints of arbitrary form, the D-H parameters representing the relation between the robot's joints and links; to model the robot, each joint is modeled as a point and each link as a cylinder; the forward kinematics of the robot are involved: from the initial angle of each joint, the transformation of each joint's coordinate frame relative to the base frame is solved; a right-handed coordinate frame is defined for each joint, and the homogeneous transformation between the frame of one joint and the frame of the next joint is called an A matrix; A1 represents the position and posture of the first joint relative to the base frame, A2 represents the position and posture of the second joint relative to the first joint, and the position and posture of the second joint relative to the base frame are given by the matrix product in equation (1):
T2=A1A2 (1)
by analogy, the position and posture of the n-th joint relative to the base frame are given by:
$$T_n = A_1 A_2 \cdots A_n = \begin{pmatrix} q_n^{3\times3} & p_n^{3\times1} \\ 0 & 1 \end{pmatrix} \qquad (2)$$
in the formula, $q_n^{3\times3}$ represents the posture of the n-th joint and $p_n^{3\times1}$ the position of the n-th joint relative to the base frame; each $A_i$ can be expressed from the D-H parameters as:
$$A_i = \begin{pmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
θi, di, ai, αi are the D-H parameters of robot joint i;
when building the robot cylinder model of the virtual scene, the robot base frame is taken as the model coordinate frame and the position $p_n^{3\times1}$ of each joint relative to the base frame is solved; the link between adjacent joints is modeled as a cylinder whose top and bottom circle centers are the positions of the two joint points, the cylinder radius is adjusted according to the actual situation, and the 6-DOF robot cylinder model is built;
2) acquire the RGB image of the real scene and the operator's skeleton data in real time through the Kinect, and build the operator cylinder model of the virtual scene, as follows:
when the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the operator's skeleton data in real time, realizing tracking and positioning of the operator; the Kinect has three cameras: one RGB color camera for collecting color images and two infrared cameras used for depth-image acquisition;
to acquire the real-scene RGB image, the Kinect is placed at some position in the environment and initialized with the color-image flag NUI_INITIALIZE_FLAG_USES_COLOR; the acquired color image frame data are drawn with OpenCV;
to acquire the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON; when a person is standing, the Kinect can obtain the positions of 20 joints to represent the person's skeleton; 15 joints are extracted to build the operator cylinder model in the virtual scene, ordered from top to bottom and from left to right as: 1. head; 2. shoulder center; 3. right shoulder; 4. right elbow; 5. right hand; 6. left shoulder; 7. left elbow; 8. left hand; 9. hip center; 10. right hip; 11. right knee; 12. right foot; 13. left hip; 14. left knee; 15. left foot; the positions of these joints are all relative to the Kinect coordinate frame;
when building the operator cylinder model, the Kinect coordinate frame is taken as the model coordinate frame, and the positions of the operator's 15 joints are obtained through the Kinect's processing of the depth image; adjacent joints in the human skeleton are modeled as cylinders whose top and bottom circle centers are the positions of the two joint points, the cylinder radius being adjusted according to the actual situation;
3) calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
in the above two steps, the robot in the virtual scene is modeled in the robot base coordinate frame and the operator in the virtual scene is modeled in the Kinect coordinate frame; to put the virtual scene and the real scene in correspondence, a coordinate frame is chosen in the real scene, called the world coordinate frame E1; the robot base frame is called frame E2, and the Kinect frame is called frame E3;
the relation between the robot base frame E2 and the world frame E1 is expressed with a rotation matrix R and a translation matrix T; if the coordinates of some robot joint P are $(X_p^{E2}, Y_p^{E2}, Z_p^{E2})$ in the robot base frame and $(X_p^{E1}, Y_p^{E1}, Z_p^{E1})$ in the world frame, then the relation between them is:
$$\begin{pmatrix} X_p^{E1} \\ Y_p^{E1} \\ Z_p^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E2}R_{E1} & {}^{E2}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} = M_2 \begin{pmatrix} X_p^{E2} \\ Y_p^{E2} \\ Z_p^{E2} \\ 1 \end{pmatrix} \qquad (4)$$
in the formula, ${}^{E2}R_{E1}$ is a 3×3 matrix representing the attitude change of the robot base frame relative to the world frame; ${}^{E2}T_{E1}$ is a 3×1 matrix representing the position change of the robot base frame relative to the world frame; $M_2$ is a 4×4 matrix representing the pose transformation of the robot base frame relative to the world frame;
similarly, the relation between the Kinect frame E3 and the world frame E1 is also expressed with a rotation matrix R and a translation matrix T; if the coordinates of some operator joint P′ are $(X_{p'}^{E3}, Y_{p'}^{E3}, Z_{p'}^{E3})$ in the Kinect frame and $(X_{p'}^{E1}, Y_{p'}^{E1}, Z_{p'}^{E1})$ in the world frame, then the relation between them is:
$$\begin{pmatrix} X_{p'}^{E1} \\ Y_{p'}^{E1} \\ Z_{p'}^{E1} \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{E3}R_{E1} & {}^{E3}T_{E1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} = M_3 \begin{pmatrix} X_{p'}^{E3} \\ Y_{p'}^{E3} \\ Z_{p'}^{E3} \\ 1 \end{pmatrix} \qquad (5)$$
after the robot base frame and the Kinect frame have been transformed into the world frame, the next task is to determine the mapping relation between the real scene and the virtual scene and to measure the error between them; for any point Pv in the virtual scene and the corresponding point Pr in the real-scene RGB image, there exists a mapping f(·) such that the following relation holds:
Pr = f(Pv) + ε (6)
in the formula, ε is the correspondence error between the virtual scene and the real scene;
for the operator's three-dimensional skeleton data and two-dimensional color-image data obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping fperson between the virtual-scene operator and the real-scene operator: NuiTransformSkeletonToDepthImage provides the mapping f1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f2 from the two-dimensional depth image to the two-dimensional color image, whence the mapping from the three-dimensional skeleton data to the two-dimensional color image is:
fperson = f2·f1 (7)
thus the virtual-scene operator and the real-scene operator have been put in correspondence; for the robot, since the robot base frame and the Kinect frame have both been transformed into the world frame, the virtual scene containing the robot and the operator shares the same coordinate frame as the real scene, so the mapping frobot between the virtual-scene robot and the real-scene robot is the same as the mapping fperson between the virtual-scene operator and the real-scene operator:
frobot = fperson (8)
then for the whole virtual scene and real scene, the mapping f between them is:
f = frobot = fperson = f2·f1 (9)
combining equations (6) and (9), the error ε between the virtual scene and the real scene can be measured as:
ε = Pr − f2·f1(Pv) (10)
4) perform real-time collision detection between the robot model and the operator model, as follows:
when the operator enters the robot's operating environment, the operator's information is unpredictable for the robot, and the modeling strategy for the operator and the robot directly influences the efficiency and precision of collision detection; both the robot and the operator are modeled with relatively regular geometric cylinders, and the concrete collision-detection scheme based on cylinder bounding boxes is as follows:
let A, B be the top and bottom circle centers of one cylinder and C, D those of another cylinder, the space lines lAB and lCD being the axes of the two cylinders; mainly consider the case where the two lines are skew: find the common perpendicular of the skew lines lAB, lCD and its feet P, Q; if P or Q does not lie within segment AB or segment CD, perform boundary detection and substitute for P or Q the endpoint nearest to the other line;
if P and Q lie within segments AB and CD, the two cylinders can only intersect side-to-side, and it suffices to compare the length of segment PQ with the sum of the two cylinder radii;
if one of P, Q lies within its segment and the other falls on an endpoint, the two cylinders can only intersect side-to-end-face; then find the generatrix of the side nearest the end face and judge whether the generatrix intersects the end face;
if P and Q both fall on segment endpoints, the two cylinders can only intersect end-face-to-end-face, and it suffices to judge whether the two discs in space intersect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510518563.4A CN105137973B (en) | 2015-08-21 | 2015-08-21 | Method for an intelligent robot to avoid humans in a human-robot collaboration scene
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510518563.4A CN105137973B (en) | 2015-08-21 | 2015-08-21 | Method for an intelligent robot to avoid humans in a human-robot collaboration scene
Publications (2)
Publication Number | Publication Date |
---|---|
CN105137973A CN105137973A (en) | 2015-12-09 |
CN105137973B true CN105137973B (en) | 2017-12-01 |
Family
ID=54723348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510518563.4A Active CN105137973B (en) | 2015-08-21 | 2015-08-21 | Method for an intelligent robot to avoid humans in a human-robot collaboration scene
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105137973B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109219856A (en) * | 2016-03-24 | 2019-01-15 | 宝利根 T·R 有限公司 | For the mankind and robot cooperated system and method |
CN106078752B (en) * | 2016-06-27 | 2019-03-19 | 西安电子科技大学 | A kind of anthropomorphic robot human body behavior imitation method based on Kinect |
EP3287861A1 (en) * | 2016-08-24 | 2018-02-28 | Siemens Aktiengesellschaft | Method for testing an autonomous system |
CN107564065B (en) * | 2017-09-22 | 2019-10-22 | 东南大学 | The measuring method of man-machine minimum range under a kind of Collaborative environment |
CN108427331A (en) * | 2018-03-30 | 2018-08-21 | 烟台维度机器人有限公司 | A kind of man-machine collaboration safety protecting method and system |
CN108527370B (en) * | 2018-04-16 | 2020-06-02 | 北京卫星环境工程研究所 | Human-computer co-fusion safety protection control system based on vision |
CN108846891B (en) * | 2018-05-30 | 2023-04-28 | 广东省智能制造研究所 | Man-machine safety cooperation method based on three-dimensional skeleton detection |
CN111640176A (en) * | 2018-06-21 | 2020-09-08 | 华为技术有限公司 | Object modeling movement method, device and equipment |
CN109116992B (en) * | 2018-08-31 | 2020-12-04 | 北京航空航天大学 | Collision response system for virtual hand force feedback interaction |
CN109465835A (en) * | 2018-09-25 | 2019-03-15 | 华中科技大学 | The safety predicting method in advance of both arms service robot operation under a kind of dynamic environment |
CN109434830A (en) * | 2018-11-07 | 2019-03-08 | 宁波赛朗科技有限公司 | A kind of industrial robot platform of multi-modal monitoring |
CN109500811A (en) * | 2018-11-13 | 2019-03-22 | 华南理工大学 | A method of the mankind are actively avoided towards man-machine co-melting robot |
CN110175523B (en) * | 2019-04-26 | 2021-05-14 | 南京华捷艾米软件科技有限公司 | Self-moving robot animal identification and avoidance method and storage medium thereof |
CN110487263A (en) * | 2019-08-15 | 2019-11-22 | 北京致行慕远科技有限公司 | The coordinate acquiring method and device of movable equipment |
CN111702293A (en) * | 2020-06-10 | 2020-09-25 | 南京英尼格玛工业自动化技术有限公司 | Automatic welding gun track avoiding method for high-speed rail sleeper beam process hole |
CN111735601B (en) * | 2020-08-04 | 2021-03-30 | 中国空气动力研究与发展中心低速空气动力研究所 | Wall collision prevention method for double-engine refueling wind tunnel test supporting device |
CN113370210A (en) * | 2021-06-23 | 2021-09-10 | 华北科技学院(中国煤矿安全技术培训中心) | Robot active collision avoidance system and method |
CN113733098B (en) * | 2021-09-28 | 2023-03-03 | 武汉联影智融医疗科技有限公司 | Mechanical arm model pose calculation method and device, electronic equipment and storage medium |
CN114872043B (en) * | 2022-05-09 | 2023-11-17 | 苏州艾利特机器人有限公司 | Robot collision detection method, storage medium and electronic equipment |
CN115496798B (en) * | 2022-11-08 | 2023-03-24 | 中国电子科技集团公司第三十八研究所 | Co-location method and system for tethered balloon equipment simulation training |
CN115890671B (en) * | 2022-11-17 | 2024-09-10 | 山东大学 | Multi-geometry human body collision model generation method and system based on SMPL parameters |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103170973A (en) * | 2013-03-28 | 2013-06-26 | 上海理工大学 | Man-machine cooperation device and method based on Kinect video camera |
CN103226387A (en) * | 2013-04-07 | 2013-07-31 | 华南理工大学 | Video fingertip positioning method based on Kinect |
CN103399637A (en) * | 2013-07-31 | 2013-11-20 | 西北师范大学 | Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect |
CN104570731A (en) * | 2014-12-04 | 2015-04-29 | 重庆邮电大学 | Uncalibrated human-computer interaction control system and method based on Kinect |
CN104777775A (en) * | 2015-03-25 | 2015-07-15 | 北京工业大学 | Two-wheeled self-balancing robot control method based on Kinect device |
Non-Patent Citations (3)
Title |
---|
Problems in establishing link coordinate frames with the D-H method and improvements; Guo Fayong et al.; China Mechanical Engineering; Oct. 2014; vol. 25, no. 20; pp. 2710-2714 *
Optimized design of D-H parameters of a 6R robot based on a given workspace; Gan Yi et al.; China Mechanical Engineering; Nov. 2014; vol. 25, no. 22; pp. 3303-3007, 3011 *
Target tracking and obstacle avoidance for a mobile robot using Kinect; He Chao et al.; CAAI Transactions on Intelligent Systems; Oct. 2013; vol. 8, no. 5; pp. 426-432 *
Also Published As
Publication number | Publication date |
---|---|
CN105137973A (en) | 2015-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105137973B (en) | Method for an intelligent robot to avoid humans in a human-robot collaboration scene | |
CN110480634B (en) | Arm guide motion control method for mechanical arm motion control | |
CN108762495B (en) | Virtual reality driving method based on arm motion capture and virtual reality system | |
CN103170973B (en) | Man-machine cooperation device and method based on Kinect video camera | |
CN102638653B (en) | Automatic face tracing method on basis of Kinect | |
CN103271784B (en) | Man-machine interactive manipulator control system and method based on binocular vision | |
CN102982557B (en) | Method for processing space hand signal gesture command based on depth camera | |
CN111694428B (en) | Gesture and track remote control robot system based on Kinect | |
CN104570731A (en) | Uncalibrated human-computer interaction control system and method based on Kinect | |
CN107030692B (en) | Manipulator teleoperation method and system based on perception enhancement | |
CN104589356A (en) | Dexterous hand teleoperation control method based on Kinect human hand motion capturing | |
CN109816730A (en) | Workpiece grabbing method, apparatus, computer equipment and storage medium | |
CN107662195A (en) | A kind of mechanical hand principal and subordinate isomery remote operating control system and control method with telepresenc | |
CN106131488B (en) | A kind of augmented reality method based on unmanned plane | |
CN111571582B (en) | Moxibustion robot man-machine safety monitoring system and monitoring method | |
CN106625658A (en) | Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time | |
WO2022227664A1 (en) | Robot posture control method, robot, storage medium and computer program | |
CN109968310A (en) | A kind of mechanical arm interaction control method and system | |
CN107564065A (en) | The measuring method of man-machine minimum range under a kind of Collaborative environment | |
CN110142769A (en) | The online mechanical arm teaching system of ROS platform based on human body attitude identification | |
CN108621164A (en) | Taiji push hands machine people based on depth camera | |
CN109015631A (en) | The method that anthropomorphic robot based on more working chains imitates human motion in real time | |
CN108961393A (en) | A kind of human body modeling method and device based on point cloud data stream | |
CN109214295B (en) | Gesture recognition method based on data fusion of Kinect v2 and Leap Motion | |
CN115810188A (en) | Method and system for identifying three-dimensional pose of fruit on tree based on single two-dimensional image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |