CN107030692B - Manipulator teleoperation method and system based on perception enhancement - Google Patents
- Publication number
- CN107030692B (application CN201710192822.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B25J9/161 — Programme controls: hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J13/08 — Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/085 — Force or torque sensors
- B25J9/1671 — Programme controls characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
- G05B2219/40315 — Simulation with boundary graphs
Abstract
The invention relates to a manipulator teleoperation method and system based on perception enhancement, belonging to the technical field of manipulator control. The method comprises the following steps: identifying the position information of the fingertips and palm of a human hand during motion; mapping the position information into manipulator operation instructions; receiving pressure data from a pressure sensor on the manipulator and controlling the vibration intensity of a vibrator on the human hand according to the pressure data; and receiving image information of the manipulator workbench acquired by two or more depth cameras arranged around the manipulator to construct a 3D real-time operation scene. By combining a depth camera with hand markers, hand motion data are acquired simply and quickly, and perception of the teleoperation process is enhanced through pressure feedback and visual feedback, so that the operator obtains more information during remote control, which helps optimize the teleoperation result.
Description
Technical Field
The invention relates to the technical field of manipulator control, in particular to a manipulator teleoperation method and system based on perception enhancement.
Background
In recent years, with the development of manipulator control technology, especially technology that reproduces human hand actions by collecting hand motion data, manipulators have shown great application prospects in various industries. In such control technology, hand motion data are usually collected through wearable devices such as exoskeletons and data gloves. Because collection relies on the sensors carried by these wearable devices, the devices are expensive, difficult to put into practical use, and especially difficult to bring to civilian applications.
In addition, with the rapid development of robotics, especially civil service robots and humanoid robots, there is an increasing demand for low-cost manipulation techniques, especially teleoperation techniques that can remotely control a manipulator.
Disclosure of Invention
The invention aims to provide a manipulator teleoperation method and system based on perception enhancement, which can be used for teleoperation of a manipulator at low cost.
In order to achieve the above object, the present invention provides a manipulator teleoperation method based on perception enhancement, comprising a motion recognition step, an action mapping step, a pressure feedback step and a visual feedback step. The motion recognition step comprises identifying the position information of the fingertips and palm of a human hand during motion; the action mapping step comprises mapping the position information into manipulator operation instructions; the pressure feedback step comprises receiving pressure data from a pressure sensor on the manipulator and controlling the vibration intensity of a vibrator on the human hand according to the pressure data; and the visual feedback step comprises receiving image information of the manipulator workbench acquired by two or more depth cameras arranged around the manipulator and constructing a 3D real-time operation scene.
In operation, the motion recognition step identifies the position information of the fingertips and palm of the human hand during motion. Because the motion data are obtained by collecting only fingertip and palm positions, low-cost acquisition equipment such as a Kinect can be used, reducing the cost of manipulator teleoperation. The action mapping step processes the motion data and converts it into corresponding manipulator operation instructions, realizing the mapping from human motion data to manipulator operation instructions so that the manipulator can be controlled remotely. The pressure feedback step feeds the pressure measured while the manipulator imitates the hand's grasp back to the hand as vibration intensity, so that the operator senses the pressure on the manipulator. The visual feedback step constructs a 3D real-time scene of the manipulator operation from the image information acquired by the two or more depth cameras and feeds it back to the operator, thereby realizing perception-enhanced remote control of the manipulator.
In a specific scheme, the step of identifying the position information of the fingertips and palm of the human hand during motion comprises: receiving the color image, depth image and skeleton data of the human hand obtained by the Kinect during motion; acquiring the palm position information from the skeleton data; and identifying the color marks arranged on the fingertips according to the depth image, the color image and the palm position information, thereby acquiring fingertip position information that includes the three-dimensional coordinates of the fingertips. Color marks, such as colored paper tape, are attached to the fingertips, and the three-dimensional fingertip coordinates are obtained from the Kinect depth and image information.
In another specific solution, the step of mapping the position information into manipulator operation instructions comprises: extracting semantic information from the position information and mapping the semantic information into manipulator operation instructions.
More specifically, the step of extracting semantic information comprises: classifying the hand motion into four semantics, namely moving, static, grabbing and releasing, according to the continuously acquired position information. This effectively simplifies the mapping algorithm from hand position information to manipulator operation instructions; a minimal classification sketch follows.
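As an illustration, the following Python sketch classifies one frame of hand data into the four semantics using simple thresholds on the palm displacement and the inter-finger distance. The threshold values, units and function names are hypothetical, since the patent only states that thresholds are chosen empirically.

```python
import math

# Hypothetical thresholds; the patent says they are found empirically
# from recorded move/static and grab/release motion data.
MOVE_THRESHOLD = 0.02   # palm displacement between frames, in metres
GRAB_THRESHOLD = 0.05   # inter-finger distance, in metres

def classify(prev_palm, palm, finger_dist, was_grabbing):
    """Map one frame of palm/fingertip data to one of the four semantics."""
    displacement = math.dist(prev_palm, palm)   # Euclidean palm displacement
    closed = finger_dist < GRAB_THRESHOLD
    if closed and not was_grabbing:
        return "grab"                           # fingers just closed
    if not closed and was_grabbing:
        return "release"                        # fingers just opened
    return "move" if displacement > MOVE_THRESHOLD else "static"
```

The caller updates `was_grabbing` with the closure state after each frame; noise makes the palm coordinate fluctuate even when the hand is still, which is why the displacement is compared against a threshold rather than against zero.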
In another specific scheme, the step of receiving image information of the manipulator workbench acquired by two or more depth cameras arranged around the manipulator and constructing a 3D real-time operation scene comprises: using Kinects as the depth cameras; constructing 3D point cloud data for each viewing direction from the depth and color images acquired by the Kinects; identifying markers arranged on the manipulator workbench from those images; calibrating with the markers and fusing the point clouds from the different directions into one 3D image model; and optimizing the 3D image model with the ICP algorithm. This makes it convenient for the operator to observe the real-time 3D operation scene through a screen or VR glasses, thereby realizing perception enhancement.
To achieve the above object, the present invention further provides a manipulator teleoperation system based on perception enhancement, comprising a motion recognition unit, a visual feedback unit, a pressure feedback unit and a control unit. The motion recognition unit comprises a Kinect for acquiring the position information of the human hand; the visual feedback unit comprises two or more depth cameras for acquiring image information of the manipulator workbench; the pressure feedback unit comprises a pressure sensor for acquiring the pressure on the manipulator and a vibrator arranged on the human hand; and the control unit comprises a processor in communication connection with the Kinect, the depth cameras, the pressure sensor and the vibrator. Through the cooperation of this teleoperation system with the manipulator, the manipulator can be remotely controlled at low cost.
In a specific scheme, the processor is configured to: receive the color image, depth image and skeleton data of the human hand obtained by the Kinect during motion; acquire the palm position information from the skeleton data; identify the color marks arranged on the fingertips according to the depth image, the color image and the palm position information, and acquire fingertip position information including the three-dimensional coordinates of the fingertips; classify the hand motion into the four semantics of moving, static, grabbing and releasing according to the continuously acquired palm and fingertip position information, and map the extracted semantic information into manipulator operation instructions; receive the pressure data of the pressure sensor and control the vibration intensity of the vibrator according to the pressure data; and receive the image information acquired by the depth cameras and construct a 3D real-time operation scene.
The manipulator teleoperation method and system based on perception enhancement combine image processing, sensing, pressure-vibration feedback and 3D visual feedback: the position information of the human hand is converted into operation instructions that make the manipulator imitate the hand's motion, while the pressure and vision of the manipulator during operation are fed back to the operator. This makes remote control convenient for the operator and, compared with prior art that relies on wearable acquisition equipment, effectively reduces the implementation cost.
Drawings
FIG. 1 is a block diagram of the manipulator teleoperation system based on perception enhancement according to the invention;
FIG. 2 is a workflow diagram of the manipulator teleoperation method based on perception enhancement according to the invention.
Detailed Description
The invention is further illustrated by the following examples and figures.
Examples
Referring to fig. 1, the manipulator teleoperation system 1 based on perception enhancement of the present invention includes a manipulator 10, a control unit 11, a motion recognition unit 12, a pressure feedback unit 13, and a visual feedback unit 14.
The manipulator 10 consists of an EPSON C4 robot arm and a ROBOTIQ three-finger gripper.
The motion recognition unit 12 includes a Kinect disposed in front of the operator for acquiring the position information of the operator's hand during operation.
The pressure feedback unit 13 includes a pressure sensor disposed on the gripper to detect the pressure applied to the gripper while instructions are executed, and a vibrator worn on the human hand to feed that pressure back through its vibration intensity. In this embodiment, the pressure sensor is a FlexiForce pressure sensor and the vibrator is a 1027 flat vibration motor.
The visual feedback unit 14 includes two or more depth cameras arranged in different directions around the remote manipulator working platform to acquire images from different viewpoints for constructing the 3D real-time operation scene, and a display, screen or VR glasses for presenting that scene. In this embodiment the depth cameras are Kinects.
The control unit 11 includes a processor in communication connection with the pressure sensor, the Kinect, the vibrator, the manipulator and the 3D real-time scene display device. In this embodiment the communication connection carries data over communication lines, i.e. one or more data lines configured between the processor and each of these devices for transmitting data, including but not limited to electrical lines, optical lines, wireless links and combinations thereof.
Referring to fig. 2, the manipulator teleoperation method based on perception enhancement of the present invention includes a motion recognition step S1, a motion mapping step S2, a pressure feedback step S3, and a visual feedback step S4.
Motion recognition step S1: recognize the position information of the fingertips and palm of the human hand during motion.
(1) Receive the three kinds of data acquired by the Kinect: the skeleton data, depth image and color image of the operator's hand during motion. If any of the three fails to be obtained, return and acquire again; the skeleton data of the person closest to the Kinect are selected;
(2) Acquire the defined hand area: map the right-hand coordinates in the skeleton data to depth values and color-image pixel coordinates, select a depth threshold and an image width threshold, and crop a rectangular range containing the whole hand from the color image;
(3) Perform color recognition: convert the RGB image within the rectangular range into a YUV model image, find the pixel coordinates that satisfy the YUV threshold of the marker color and the depth threshold, and convert them into a binary grayscale image; if no marker color is found, return to step (1);
(4) Perform mathematical morphology processing on the binary image: first an erosion operation to remove noise, then a dilation operation to restore the image, and finally a closing operation to improve the edges;
(5) Acquire the fingertip coordinates: find the largest connected region in the binary image, compute its center position, and convert it into three-dimensional Kinect coordinates, i.e. the fingertip coordinates, from which the inter-finger distance is computed. The inter-finger distance and the hand coordinates from the skeleton data are transmitted to the remote end. A sketch of steps (3) to (5) is given below.
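For illustration, steps (3) to (5) can be sketched in Python with OpenCV and NumPy as below; the YUV marker thresholds and the kernel size are hypothetical placeholders, since the patent gives no concrete values, and the depth-threshold test is omitted for brevity.

```python
import cv2
import numpy as np

# Hypothetical YUV bounds for the colored fingertip marker.
YUV_LOW = np.array([0, 100, 140], dtype=np.uint8)
YUV_HIGH = np.array([255, 140, 190], dtype=np.uint8)

def fingertip_pixel(hand_bgr):
    """Steps (3)-(5): color threshold, morphology, largest-blob centroid."""
    yuv = cv2.cvtColor(hand_bgr, cv2.COLOR_BGR2YUV)         # step (3)
    mask = cv2.inRange(yuv, YUV_LOW, YUV_HIGH)              # binary gray image
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)                          # step (4): denoise
    mask = cv2.dilate(mask, kernel)                         # restore the region
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # improve the edges
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None                                         # no marker found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # step (5)
    return centroids[largest]                               # (u, v) pixel centre
```

The (u, v) centre would then be combined with the depth value and the Kinect intrinsics to obtain the three-dimensional fingertip coordinate from which the inter-finger distance is computed.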
Action mapping step S2: map the position information into manipulator operation instructions.
Action mapping further processes the collected data and extracts semantics: from the current position information and its relationship to a series of previously acquired positions, the corresponding operation semantic is judged, where the semantics are static, moving, grabbing and releasing. When the hand moves, the Euclidean distance between the current palm coordinate and the previous one is large; owing to data noise this distance also fluctuates when the hand is static, but within a small range. A threshold can therefore be set: when the distance exceeds it the hand is moving, and when below it the hand is static. Grabbing and releasing are judged in the same way, by setting an inter-finger distance threshold. The semantics are then converted into corresponding manipulator operation instructions: several groups of move/static and grab/release motion data are first acquired to find the action thresholds, and the mapping algorithm converts the actions into instructions sent to the manipulator. The mapping algorithm is as follows:
(1) initialize the manipulator: send initialization commands to the robot arm and the gripper respectively so that they execute their initialization operations;
(2) acquire the inter-finger distance, process it further and convert it into a gripper grasp amplitude parameter;
(3) acquire the current coordinates of the robot arm and judge whether this is the first mapping; if so, record the current action data as the initial position data, let the robot arm execute the specified initial position operation for calibration, and return to step (2); if not, continue with step (4);
(4) judge whether the previous operation has finished, i.e. whether the current robot arm coordinates equal the target coordinates of the previous operation; if not, return to step (2); if so, execute step (5);
(5) judge whether a grab or release operation is required: if so, execute the grab or release, controlling the gripper while leaving the robot arm unmoved, and return to step (2); if not, judge whether the hand is currently static (only turning points of the hand motion are sent to the robot arm, and a turning point is where the hand is momentarily static); if not static, return to step (2); if static, execute step (6);
(6) coordinate conversion: acquire the coordinates of the current position relative to the initial position set in step (3), convert them into coordinates in the robot arm coordinate system by combining them with the configured initial position of the arm, execute the move operation, and return to step (2) to continue. A sketch of this control loop is given below.
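The control loop of steps (1) to (6) could be organized as in the following Python sketch. The `arm` and `gripper` objects and all of their methods are hypothetical stand-ins for the vendor interfaces (here, the EPSON C4 arm and ROBOTIQ gripper of the embodiment); only the control flow follows the algorithm above.

```python
def to_grasp_amplitude(finger_dist, max_dist=0.10):
    """Hypothetical conversion of inter-finger distance to gripper amplitude."""
    return max(0.0, min(1.0, 1.0 - finger_dist / max_dist))

def teleoperate(arm, gripper, hand_frames):
    """Steps (1)-(6): map streamed hand semantics to arm/gripper commands."""
    arm.initialize()                                  # step (1)
    gripper.initialize()
    origin_hand, origin_arm = None, None
    for frame in hand_frames:                         # palm, finger_dist, semantic
        amplitude = to_grasp_amplitude(frame.finger_dist)     # step (2)
        if origin_hand is None:                       # step (3): first mapping
            origin_hand = frame.palm
            origin_arm = arm.current_pose()
            arm.move_to(origin_arm)                   # calibrate at initial pose
            continue
        if not arm.reached_target():                  # step (4): still moving
            continue
        if frame.semantic in ("grab", "release"):     # step (5): gripper only
            gripper.set_amplitude(amplitude)
            continue
        if frame.semantic == "static":                # turning point of the hand
            offset = [c - o for c, o in zip(frame.palm, origin_hand)]  # step (6)
            arm.move_relative(origin_arm, offset)     # hypothetical arm-frame move
```

Sending only the static turning points of the hand trajectory keeps the arm's motion smooth and avoids flooding it with intermediate coordinates.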
Pressure feedback step S3: receive the pressure data of the pressure sensor on the manipulator and control the vibration intensity of the vibrator on the human hand according to the pressure data.
Pressure feedback returns the pressure information from the manipulator's grasp to the operator, which helps in remotely controlling the grasping action. A pressure sensor is arranged at the fingertip of the gripper to acquire tactile pressure information at the manipulator end; for feedback of this information, a vibrator is worn on the operator's hand. The pressure sensor and the vibrator are each connected to an Arduino microcontroller, each Arduino carries a Bluetooth module, and data are transmitted over Bluetooth. The collected pressure data are divided into 5 intensity levels, and the level data are sent to the receiving end, which sets the voltage of an Arduino analog output port according to the level so that the vibrator vibrates at the corresponding intensity, realizing pressure feedback. A sketch of this level mapping is given below.
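As an illustration of the five-level mapping, the following Python sketch quantizes a raw pressure reading into a level and converts the level into an analog output value; the 10-bit sensor range and 8-bit output range are hypothetical assumptions (in the embodiment this logic would run in the Arduino firmware).

```python
SENSOR_MAX = 1023   # hypothetical 10-bit ADC range of the pressure reading
PWM_MAX = 255       # hypothetical 8-bit analog output range
LEVELS = 5          # the patent divides pressure into 5 intensity levels

def pressure_level(raw):
    """Quantize a raw pressure reading into an intensity level 1..5."""
    raw = max(0, min(raw, SENSOR_MAX))
    return raw * LEVELS // (SENSOR_MAX + 1) + 1

def vibration_output(level):
    """Convert an intensity level 1..5 into an analog port value."""
    return level * PWM_MAX // LEVELS
```

Transmitting only the level, rather than the raw reading, keeps the Bluetooth payload small and makes the vibration intensity easy to reproduce at the receiving end.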
Visual feedback step S4: receive image information of the manipulator workbench acquired by two or more depth cameras arranged around the manipulator and construct a 3D real-time operation scene.
Visual feedback presents a 3D real-time operation scene of the manipulator in motion to the operator, helping to observe the remotely controlled manipulator. Two or more Kinects are arranged in different directions around the manipulator working platform. Calibration is performed first: the markers in each Kinect's image scene are identified and their coordinate positions in that Kinect's coordinate system are obtained; based on the relative position of the Kinect and the markers, the coordinates of all points in the Kinect coordinate system are converted into the marker coordinate system and then, using the preset position of the markers in the global coordinate system, into global coordinates. This fuses the point cloud data of the several Kinects and constructs a real-time 3D scene. An ICP algorithm then iteratively computes the rotation and translation transformation matrix that minimizes the distance between corresponding points of the different Kinects' point clouds, optimizing the constructed 3D scene; to guarantee real-time feedback, the number of iterations is limited, and a suboptimal result is returned when the limit is exceeded. The raw Kinect data acquired at the manipulator end are sent directly to the remote end, which further processes them into point cloud data; this avoids sending the point cloud data themselves, whose large size would harm the real-time performance of the visual feedback. A sketch of the fusion and ICP refinement is given below.
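For illustration, the following Python sketch fuses two Kinect point clouds into the global frame and refines the alignment with a capped-iteration ICP, using the Open3D library as one possible implementation; the marker-derived extrinsic matrices and the 2 cm correspondence threshold are hypothetical inputs.

```python
import open3d as o3d

def fuse_and_refine(pcd_a, pcd_b, extrinsic_a, extrinsic_b, max_iter=20):
    """Marker-based fusion of two point clouds, then ICP refinement."""
    pcd_a.transform(extrinsic_a)      # marker calibration: Kinect A -> global
    pcd_b.transform(extrinsic_b)      # marker calibration: Kinect B -> global
    # ICP with a limited iteration count to preserve real-time feedback;
    # a suboptimal alignment is accepted when the cap is reached.
    result = o3d.pipelines.registration.registration_icp(
        pcd_b, pcd_a,
        max_correspondence_distance=0.02,   # hypothetical 2 cm threshold
        criteria=o3d.pipelines.registration.ICPConvergenceCriteria(
            max_iteration=max_iter))
    pcd_b.transform(result.transformation)
    return pcd_a + pcd_b              # fused, globally aligned point cloud
```

Since the manipulator end transmits raw depth and color frames rather than point clouds, this fusion would run at the operator's end after the frames are converted to point clouds.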
In use, the processor in the control unit 11 is configured to:
(1) Receive the color image, depth image and skeleton data of the human hand obtained by the Kinect during motion; acquire the palm position information from the skeleton data; identify the color marks arranged on the fingertips according to the depth image, the color image and the palm position information, and acquire fingertip position information including the three-dimensional coordinates of the fingertips; thereby recognizing the position information of the fingertips and palm of the human hand during motion.
(2) Classify the hand motion into the four semantics of moving, static, grabbing and releasing according to the continuously acquired palm and fingertip position information, map the semantic information extracted from the position information into manipulator operation instructions, and send the operation instructions to the gripper and the robot arm respectively.
(3) Receive the pressure data of the pressure sensor, divide the pressure data into 5 levels, and control the vibration intensity of the vibrator according to the pressure level data.
(4) Construct 3D point cloud data for the different directions from the depth and color images acquired by the Kinects; identify the markers arranged on the manipulator workbench from those images; calibrate with the markers and fuse the point clouds from the different directions into one 3D image model; and optimize the 3D image model with the ICP algorithm.
The specific operation process is described in the above method steps, and is not described herein again.
In summary, the invention adopts depth camera data: during hand motion, the data are converted into semantic information by the mapping algorithm and further into corresponding manipulator operation instructions sent to the remote manipulator; in turn, through the pressure sensor arranged at the gripper fingertip and the several Kinects arranged in different directions around the manipulator, the remote manipulator sends pressure data and the 3D real-time scene back to the operator in real time, realizing perception-enhanced remote control.
Claims (2)
1. A manipulator teleoperation method based on perception enhancement, characterized by comprising:
a motion recognition step of identifying the position information of the fingertips and palm of the human hand during motion; specifically, receiving the color image, depth image and skeleton data of the human hand obtained by the Kinect during motion; acquiring the palm position information from the skeleton data; identifying the color marks arranged on the fingertips of the person according to the depth image, the color image and the palm position information, and acquiring fingertip position information including the three-dimensional coordinates of the fingertips;
an action mapping step of mapping the position information into manipulator operation instructions; specifically, classifying the hand motion into the four semantics of moving, static, grabbing and releasing according to the continuously acquired position information, and then mapping the semantic information into manipulator operation instructions;
a pressure feedback step of receiving pressure data from a pressure sensor on the manipulator and controlling the vibration intensity of a vibrator on the human hand according to the pressure data; wherein the pressure sensor is arranged at a fingertip of the gripper;
a visual feedback step of receiving image information of the manipulator workbench acquired by two or more depth cameras arranged around the manipulator and constructing a 3D real-time operation scene; specifically, the depth cameras are Kinects; 3D point cloud data in different directions are constructed from the depth and color images acquired by the Kinects; markers arranged on the manipulator workbench are identified from those images; calibration is performed with the markers and the 3D point cloud data from the different directions are fused into a 3D image model; and the 3D image model is optimized by the ICP algorithm.
2. A manipulator teleoperation system based on perception enhancement, comprising:
the motion recognition unit comprises a Kinect for acquiring the position information of the human hand;
the visual feedback unit comprises two or more depth cameras for acquiring image information of the manipulator workbench;
the pressure feedback unit comprises a pressure sensor for acquiring the pressure on the manipulator and a vibrator arranged on the human hand; wherein the pressure sensor is arranged at a fingertip of the gripper;
the control unit comprises a processor in communication connection with the Kinect, the depth cameras, the pressure sensor and the vibrator;
the processor is configured to:
receive the color image, depth image and skeleton data of the human hand obtained by the Kinect during motion, and acquire the palm position information from the skeleton data;
identify the color marks arranged on the fingertips of the person according to the depth image, the color image and the palm position information, and acquire fingertip position information including the three-dimensional coordinates of the fingertips;
classify the hand motion into the four semantics of moving, static, grabbing and releasing according to the continuously acquired palm and fingertip position information, and map the semantic information extracted from the position information into manipulator operation instructions;
receive the pressure data of the pressure sensor and control the vibration intensity of the vibrator according to the pressure data;
and receive the image information acquired by the depth cameras and construct a 3D real-time operation scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710192822.8A CN107030692B (en) | 2017-03-28 | 2017-03-28 | Manipulator teleoperation method and system based on perception enhancement
Publications (2)
Publication Number | Publication Date |
---|---|
CN107030692A CN107030692A (en) | 2017-08-11 |
CN107030692B true CN107030692B (en) | 2020-01-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |