
CN114344093A - Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning - Google Patents

Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning

Info

Publication number
CN114344093A
CN114344093A
Authority
CN
China
Prior art keywords
rehabilitation robot
lower limb
patient
limb rehabilitation
reinforcement learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111506488.1A
Other languages
Chinese (zh)
Other versions
CN114344093B (en)
Inventor
罗朝晖
张笑千
尚鹏
王博
杨德龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111506488.1A priority Critical patent/CN114344093B/en
Publication of CN114344093A publication Critical patent/CN114344093A/en
Application granted granted Critical
Publication of CN114344093B publication Critical patent/CN114344093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A61H 3/00 Appliances for aiding patients or disabled persons to walk about
    • A61B 5/1036 Measuring load distribution, e.g. podologic studies
    • A61B 5/1038 Measuring plantar pressure during gait
    • A61B 5/1116 Determining posture transitions
    • A61B 5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B 5/6823 Sensors specially adapted to be attached to or worn on the trunk, e.g. chest, back, abdomen, hip
    • A61B 5/6829 Sensors specially adapted to be attached to or worn on the foot or ankle
    • A63B 21/00181 Exercising apparatus comprising additional means assisting the user to overcome part of the resisting force, i.e. assisted-active exercising
    • A63B 23/04 Exercising apparatus specially adapted for lower limbs
    • A63B 23/0464 Walk exercisers without moving parts
    • A63B 24/0087 Electric or electronic controls for exercising apparatus of groups A63B21/00 - A63B23/00, e.g. controlling load
    • A61H 2201/5058 Control means with sensors or detectors
    • A61H 2201/5069 Angle sensors
    • A61H 2201/5071 Pressure sensors
    • A61H 2201/5084 Acceleration sensors
    • A61H 2205/10 Devices for specific parts of the body: leg
    • A63B 2024/0093 Load of the exercise apparatus controlled by performance parameters, e.g. distance or speed
    • A63B 2220/18 Inclination, slope or curvature
    • A63B 2220/40 Acceleration
    • A63B 2220/44 Angular acceleration
    • A63B 2220/56 Pressure
    • A63B 2220/803 Motion sensors
    • A63B 2220/836 Sensors arranged on the body of the user

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Rehabilitation Therapy (AREA)
  • Physiology (AREA)
  • Pain & Pain Management (AREA)
  • Epidemiology (AREA)
  • Geometry (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention discloses a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning, which specifically comprises the following process: while the patient performs rehabilitation training with the lower limb rehabilitation robot, the patient's lower limb joint motion information, plantar pressure distribution information and waist-belt pressure information are collected as the state value; the magnitude and direction of the walking-assistance speed that the lower limb rehabilitation robot applies to the patient are taken as the action value; the interaction pressure between the waist belt worn by the patient and the lumbar support of the lower limb rehabilitation robot is taken as the reward/punishment value; and a deep reinforcement learning model is constructed to learn the patient's rehabilitation behavior during training with the robot, so that the lower limb rehabilitation robot can assist walking by following the patient's autonomous motion intention.

Description

Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning
Technical Field
The invention belongs to the technical field of medical electronics, and relates to a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning.
Background
Lower limb dysfunction caused by stroke greatly reduces patients' mobility and quality of life. Its main pathological manifestations are numbness and weakness of the lower limbs, which make basic actions such as walking difficult for the affected limb, so stroke patients usually need auxiliary equipment to stand and be supported. At present, rehabilitation hospitals rely mainly on professional rehabilitation physicians or care workers to manually assist patients in walking, which greatly aggravates the shortage of medical resources. Rehabilitation hospitals therefore urgently need a rehabilitation robot that can conduct walking rehabilitation training without a caregiver, so as to accelerate the recovery of the patient's balance ability and affected limbs.
Currently, the rehabilitation therapy of stroke patients mainly follows a three-stage rehabilitation training approach: first-stage rehabilitation training, in which limb positioning, body-position transfer and affected-limb joint movement are carried out within 1 month after onset; second-stage rehabilitation training, in which standing balance training, walking, dressing and other training are performed within 3 months after onset; and third-stage rehabilitation training, in which activities of daily living such as eating, limb movement, walking exercise and personal hygiene are trained within 3-6 months after onset, mainly exercising the healthy limb's compensation for the affected limb. Existing lower limb rehabilitation robots are mainly designed for the first- and second-stage rehabilitation training needs of stroke patients; they have very large mechanical structures and high manufacturing costs and are aimed mainly at large rehabilitation hospitals. Their working principle is that the patient's affected limb is first fixed to the mechanical structure, which then provides power to drive the fixed limb through the rehabilitation exercises. However, because stroke patients with lower limb dysfunction differ in physical condition and degree of impairment, each patient's required strength, duration, affected-limb motion amplitude and rehabilitation method vary greatly, and patients cannot be treated with fixed-parameter rehabilitation routines. Dynamically adjusting the rehabilitation robot to the patient's real-time rehabilitation training needs is therefore a key technology for rehabilitation robots. Because patients differ, and even the same patient's needs differ greatly across rehabilitation sessions, the lower limb rehabilitation robot must be able to predict the patient's lower limb motion state in real time during rehabilitation training so as to dynamically follow the patient's autonomous training behavior. However, existing lower limb rehabilitation robots based on logic-table lookup can only use one fixed set of parameters and cannot adjust dynamically, and recent intelligent lower limb rehabilitation robots based on neural networks such as deep learning can, only after a large amount of training, realize a specific rehabilitation method for certain specific patients, and their control strategy cannot be learned and updated online during the patient's rehabilitation training. In addition, some existing studies propose evaluating the patient's movement with a large plantar pressure mat, pressure sensors mounted on crutch handles, 3D motion sensors, vision cameras and the like. However, these methods are generally used in large rehabilitation rooms with the sensors fixed indoors, which is not suitable for an outdoor mobile lower limb rehabilitation robot.
Disclosure of Invention
The invention aims to provide a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning. The problem of the lower limb rehabilitation robot making walking-assistance behavior decisions from the predicted motion state of the patient's lower limbs during rehabilitation training is modeled as a Markov Decision Process (MDP) and solved with a deep reinforcement learning algorithm. When the patient operates the rehabilitation robot autonomously, the robot pre-adjusts its walking-assistance behavior from the predicted lower limb motion state, so that its walking-assistance behavior stays consistent with the patient's autonomous motion, i.e. the walking-assistance robot actively follows the patient's autonomous motion. This effectively enhances the safety of the lower limb rehabilitation robot and improves the patient's rehabilitation treatment effect.
The technical scheme adopted by the invention is a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning, which specifically comprises the following process: while the patient performs rehabilitation training with the lower limb rehabilitation robot, the patient's lower limb joint motion information, plantar pressure distribution information and waist-belt pressure information are collected as the state value; the magnitude and direction of the walking-assistance speed that the lower limb rehabilitation robot applies to the patient are taken as the action value; the interaction pressure between the waist belt worn by the patient and the lumbar support of the lower limb rehabilitation robot is taken as the reward/punishment value; and a deep reinforcement learning model is constructed to learn the patient's rehabilitation behavior during training with the robot, so that the lower limb rehabilitation robot can assist walking by following the patient's autonomous motion intention.
The invention is also characterized in that:
the plantar pressure distribution information is collected by a flexible plantar pressure sensing device.
The flexible plantar pressure sensor has 99 effective independent sensing points; one plantar pressure sensor is worn on each foot.
When the flexible plantar pressure sensor collects plantar pressure information, each acquisition yields two frames of plantar pressure distribution data, each frame being a 9 × 11 numerical matrix, and at least 10 frames are collected per second.
The lower limb joint motion information comprises hip joint, knee joint and ankle joint motion information. Wireless nine-axis attitude sensors are used to collect the attitude motion information of the hip, knee and ankle joints of the lower limbs and are worn at the patient's hip, knee and ankle joints respectively; six wireless nine-axis attitude sensors are worn on the two legs in total.
The wireless nine-axis attitude sensor respectively senses the acceleration, the angular acceleration and the angle change of the joint motion of the human body in three vector directions.
The waist belt pressure information is measured by a pressure sensor arranged on the waist rest of the lower limb rehabilitation robot and a pressure sensor arranged on the waist belt.
The number of the pressure sensors on the waist rest is four; the number of sensors on the belt is two.
Pressure values collected by the pressure sensors on the waist rest and on the waist belt are converted into 1024-level numerical data by a 12-bit analog-to-digital conversion circuit, and these data serve as the reward/punishment values of the deep reinforcement learning model.
As the reward/punishment value, the waist pressure represents the environment's feedback on the current behavior of the lower limb rehabilitation robot: the larger the pressure value, the greater the inconsistency between the robot's behavior and the patient's; the smaller the pressure value, the higher the consistency between the robot and the patient's autonomous behavior, and the better the follow-up performance.
The invention has the following beneficial effects:
1. an edge calculation server is designed, and the running speed of the intelligent algorithm of the lower limb rehabilitation robot is increased.
2. The on-line learning and updating of the lower limb rehabilitation robot are realized through a deep reinforcement learning algorithm.
3. The problem of the aassessment of cerebral apoplexy patient in using recovered walking vehicle in-process is solved for intelligent recovered robot system forms the closed loop, realizes following the self-adaptation of patient's initiative rehabilitation training intention.
Drawings
FIG. 1 is a flow chart of a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the invention;
FIG. 2 is a prototype and functional diagram of a lower limb rehabilitation robot used in the present invention;
FIGS. 3(a) and (b) are diagrams of flexible plantar pressure sensors used in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 4 is a diagram of a training specialized shoe with installed plantar pressure sensors and ankle posture sensors;
FIGS. 5(a) and (b) are views of the joint posture sensor and its wearing;
FIG. 6 is a diagram of lumbar support, belt and lumbar pressure sensors in the lower limb rehabilitation robot;
FIG. 7 is a joint posture data acquisition flow in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning of the invention;
FIG. 8 is a flow chart of cloud computing processing and analysis in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 9 is a flow of acquiring and processing plantar pressure data in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 10 is a flow chart of edge calculation data processing in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 11 is a layout diagram of a waist pressure sensor in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 12 is a technical route of a lower limb rehabilitation robot follow-up control method in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 13 is a block diagram of a deep reinforcement learning algorithm for lower limb rehabilitation robot follow-up control in a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to the present invention;
FIG. 14 is a deep network model used by a deep reinforcement learning algorithm in the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning of the present invention;
FIG. 15 is a graph showing a comparison of the results of the lower limb rehabilitation robot follow-up control experiment.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments. The invention is a lower limb rehabilitation robot follow-up control method based on deep reinforcement learning. As shown in the flow chart of fig. 1, when the patient presses the system start switch of the lower limb rehabilitation robot, the robot first performs device self-checking and parameter initialization; the embedded system of the robot then shows the patient the current system state and default settings through the human-machine interaction system, and follow-up rehabilitation training starts after the patient taps the start-training button on the screen. During rehabilitation training, the lower limb rehabilitation robot system collects the patient's lower limb motion information while the patient trains with the lower limb rehabilitation walker, using the six nine-axis sensors worn at the hip, knee and ankle joints, the plantar pressure sensors, and the pressure sensors at the waist rest and waist belt. The collected motion data are processed and analyzed by a deep reinforcement learning model built on a Raspberry Pi; the model outputs a prediction of the patient's motion intention (movement speed and direction), and the bottom-layer drive system pre-adjusts the behavior of the lower limb rehabilitation robot according to this prediction, thereby following the patient's active training behavior. Fig. 2 shows a prototype and functional diagram of the lower limb rehabilitation robot. The method specifically comprises the following steps. Step 1, acquiring the patient's motion data. The flexible plantar pressure sensor (LEGACT, FS-INS-W99) adopted by the invention has 99 effective independent sensing points per insole, i.e. 99 × 2 = 198 effective independent pressure sensing points for the pair (fig. 3(a) shows the plantar pressure sensor suite; fig. 3(b) shows a sensor with its wearable shell). Before the patient uses the lower limb rehabilitation robot for rehabilitation training, the patient first changes into the special training shoes fitted with the corresponding insoles (shown in fig. 4). The wireless nine-axis attitude sensors (WIT, WIFI901) are then worn at the hip, knee and ankle joints of the lower limbs, and the WIFI901 sensors (fig. 5(a) shows the joint attitude sensor and fig. 5(b) its wearing) are connected to the local server by WIFI wireless communication. These sensors rapidly sense the acceleration, angular acceleration and angle change of human joint motion in three vector directions (9 data quantities in total), and one is worn at each hip, knee and ankle joint of the lower limbs (6 in total). Finally, professional nursing staff put the fixing waist belt of the lower limb rehabilitation robot on the patient (as shown in fig. 6, pressure sensors are arranged on the waist rest to measure the horizontal interaction force between the patient and the lower limb rehabilitation robot when walking, and pressure sensors are also arranged below the waist rest to analyze the patient's gravity and supporting force). The nursing staff fix the patient to the waist rest of the robot with the waist belt, which effectively prevents the patient, whose lower limbs are weak, from standing unsteadily and falling, and so improves safety. The nursing staff then introduce the basic operation of the lower limb rehabilitation robot to the patient, including pressing the emergency brake button on the panel in case of emergency. After the lower limb rehabilitation robot is ready, the patient presses the start key, the system performs self-checking, and once the wireless connection states of the system, the plantar pressure insoles and the 6 attitude sensors are verified to be normal, the master control system starts to run.
Step 2, preprocessing the motion data of the patient;
Step 2.1, collecting lower limb joint motion information. The lower limb rehabilitation robot needs to actively follow the patient's behavior, so its mechanism must be adjusted ahead of the patient's behavior, while existing deep learning models require strong computing power from the embedded platform. If both the robot's mechanical control and the patient-behavior prediction algorithm were run on the embedded device, the computation would incur a large delay. Therefore, sensor data with a small data volume can be offloaded to a cloud service through cloud computing. In addition, because each plantar pressure acquisition yields two frames, each frame a 9 × 11 numerical matrix, and at least 10 frames must be acquired per second, uploading all of this bulky data to a cloud server would cause a large communication delay and make follow-up control impossible. To solve these problems, the lower limb joint motion data, which have a small data volume but high computational complexity, are sent to the cloud server, where the lower limb motion is analyzed; the plantar pressure distribution data, which have a large data volume but low computational overhead, are processed on an edge computing server (Raspberry Pi). A deep reinforcement learning model is built on the Raspberry Pi, and the computation results of the cloud server and the edge server are combined to adjust the rehabilitation strategy of the lower limb rehabilitation robot.
As shown in fig. 7, because the data volume acquired by the nine-axis motion attitude sensors is small while the cost of analyzing it is large (historical data must be considered) and its transmission delay is extremely small, these data are uploaded to the big-data cloud computing center server through the TCP communication protocol. The very strong computing power of the cloud server is used to rapidly compute and analyze the time-series model, predict the motion behavior of the patient's lower limbs, and quickly send the result back to the embedded system of the lower limb rehabilitation robot for synchronous integration; the specific flow is shown in fig. 8.
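As an illustrative, non-limiting sketch of this upload step, the following Python fragment packs one 54-value joint-attitude sample (6 joints, 9 quantities each) into a JSON message and sends it to the cloud server over TCP. The host address, port number and message layout used here are assumptions for illustration only and are not specified by the invention.

    import json
    import socket
    import time

    CLOUD_HOST = "192.0.2.10"   # assumed cloud server address (placeholder)
    CLOUD_PORT = 9000           # assumed TCP port (placeholder)

    def send_joint_sample(sample, sock):
        """Send one time step of nine-axis data for six joints (54 values) to the cloud."""
        assert len(sample) == 54
        message = json.dumps({"t": time.time(), "joints": sample}) + "\n"
        sock.sendall(message.encode("utf-8"))

    if __name__ == "__main__":
        with socket.create_connection((CLOUD_HOST, CLOUD_PORT)) as sock:
            sample = [0.0] * 54   # placeholder reading: 6 joints x 9 quantities each
            send_joint_sample(sample, sock)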
As shown in fig. 8, after receiving the lower limb joint posture data at the current time (9 × 6 values, recombined into a 54 × 1 numerical matrix), the big-data-center cloud server extracts the lower limb joint posture data (54 × 1) of the previous time step from the database, splices the two together into a 108 × 1 numerical matrix, and feeds this matrix into the input layer of a deep learning network. The deep learning network is a structure composed of an input layer, hidden layers and an output layer. The input layer processes the input data (the input layer of this neural network has size 108 × 1); each hidden layer contains several neurons and computes the corresponding features (this network has two hidden layers, the first of size 80 × 1 and the second of size 32 × 1); the output layer maps the content output by the hidden layers to the prediction categories (the output layer of this network has size 6 × 1). The neural network computation process comprises initialization of the network weights and thresholds, forward propagation and back propagation. The network weights and thresholds are initialized with a randomization method; forward propagation computes the inputs and outputs of the hidden-layer and output-layer neurons layer by layer; back propagation corrects the weights and thresholds according to the loss function. The loss function measures the discrepancy between the predicted class and the true class.
L = L(Y, F(X))
where L denotes the loss function, Y denotes the true value of the sample, and F(X) denotes the model prediction.
The model uses 80% of the samples as training data and 20% as test data, and is trained over multiple rounds until its accuracy meets the requirement of accurately identifying the patient's current lower limb motion state.
The gait prediction has 6 categorical variables: the horizontal swing direction of the left limb (forward 0, backward 1), the horizontal swing direction of the right limb (forward 0, backward 1), whether the left limb swings normally (yes 0, no 1), whether the right limb swings normally (yes 0, no 1), whether the left limb's motion at the next moment will be normal (yes 1, no 0), and whether the right limb's motion at the next moment will be normal (yes 1, no 0). These 6 categories cover the analysis of the lower limbs' movement at the current moment and the prediction of their movement at the next moment.
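As an illustrative, non-limiting sketch of the cloud-side network described above, the following Python (PyTorch) fragment builds a fully connected network with the stated layer sizes (input 108, hidden layers 80 and 32, output 6) and runs one training step on random data. The ReLU activations, the binary cross-entropy loss and the Adam optimizer are assumptions added for illustration; the description itself specifies only the layer sizes, randomized initialization, forward/back propagation and the 80%/20% train/test split.

    import torch
    import torch.nn as nn

    class JointPostureNet(nn.Module):
        """Cloud-side MLP: 108 inputs (two stacked 54-value joint samples),
        hidden layers of 80 and 32 units, 6 gait-prediction outputs."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(108, 80), nn.ReLU(),
                nn.Linear(80, 32), nn.ReLU(),
                nn.Linear(32, 6),              # 6 binary gait variables
            )

        def forward(self, x):
            return self.net(x)

    model = JointPostureNet()
    loss_fn = nn.BCEWithLogitsLoss()           # assumed loss for 6 binary labels
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on a random mini-batch.
    x = torch.randn(16, 108)                   # 16 stacked joint-posture samples
    y = torch.randint(0, 2, (16, 6)).float()   # 16 sets of the 6 gait labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    probs = torch.sigmoid(model(x))            # per-variable probabilities, thresholded at 0.5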
Step 2.2, collecting plantar pressure distribution information;
As shown in fig. 9, since the flexible pressure sensing array integrates a large number of high-precision pressure sensors and its refresh rate is very fast, the amount of data collected by the flexible plantar pressure sensing array is very large. If all the data were transmitted to the cloud server for processing, a large delay would result and the robot's requirement of real-time gait evaluation could not be met. To solve this problem, the invention deploys an edge computing device on the rehabilitation robot and links it to the plantar pressure sensing array via 2.4 GHz wireless communication, so that the acquired data are processed and analyzed locally on the Raspberry Pi.
As shown in fig. 10, flexible plantar pressure sensors (LEGACT, FS-INS-W99) first collect the plantar pressure (10 frames per second for the left and right feet); one frame of data consists of the pressure values of the 99 sensing points on the pressure pad (two frames for the left and right feet, each with 99 sensing points). The acquisition circuit module matched with the FS-INS-W99 sensor then performs analog-to-digital conversion to obtain the numerical pressure values of the left and right soles, which are reconstructed into two numerical matrices (9 × 11 × 2). Finally, the two matrices are sent to the edge computing device (Raspberry Pi) through the 2.4 GHz wireless communication module of the circuit module. The Raspberry Pi receives the data and feeds them into the input layer of a deep convolutional network, which is a structure composed of an input layer, hidden layers, an output layer and convolution kernels. The input layer processes the input data (the input layer of this neural network has size 198 × 1); each hidden layer contains several neurons and computes the corresponding features (this network has two hidden layers, the first of size 120 × 1 and the second of size 52 × 1); the output layer maps the content output by the hidden layers to the prediction categories (the output layer of this network has size 6 × 1); the convolution kernel performs feature aggregation on the data (the convolution kernel of this network is 1 × 1). The neural network computation process comprises initialization of the network weights and thresholds, forward propagation and back propagation. The network weights and thresholds are initialized with a randomization method; forward propagation computes the inputs and outputs of the hidden-layer and output-layer neurons layer by layer; back propagation corrects the weights and thresholds according to the loss function. The loss function measures the discrepancy between the predicted class and the true class.
L = L(Y, F(X))
where L denotes the loss function, Y denotes the true value of the sample, and F(X) denotes the model prediction.
The model uses 80% of the samples as training data and 20% as test data, and is trained over multiple rounds until its accuracy meets the requirement of accurately identifying the patient's current gait state.
The gait prediction has 5 categorical variables: the stepping foot (left 0, right 1), the X-direction position of the center of gravity (front of the sole 0, back 1), the Y-direction position of the center of gravity (left of the sole 0, right 1), whether the body can be supported (yes 1, no 0), and whether the stepping range is normal (yes 1, no 0). These 5 categories cover the key judgments of plantar pressure analysis and prediction.
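As an illustrative, non-limiting sketch of the edge-side network described above, the following Python (PyTorch) fragment applies a 1 × 1 convolution to the pair of 9 × 11 plantar pressure frames (left and right foot as two channels) and then fully connected layers of 120 and 52 units. The output dimension is set to 5 here to match the five plantar prediction variables listed above; this choice, the channel count of the 1 × 1 convolution and the ReLU activations are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class PlantarPressureNet(nn.Module):
        """Edge-side network: two 9 x 11 pressure frames in, a 1 x 1 convolution for
        per-point feature aggregation, then 198 -> 120 -> 52 -> 5 fully connected layers."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=1)
            self.fc = nn.Sequential(
                nn.Flatten(),                      # 2 x 9 x 11 = 198 values
                nn.Linear(198, 120), nn.ReLU(),
                nn.Linear(120, 52), nn.ReLU(),
                nn.Linear(52, 5),                  # 5 plantar prediction variables
            )

        def forward(self, x):                      # x: (batch, 2, 9, 11)
            return self.fc(torch.relu(self.conv(x)))

    model = PlantarPressureNet()
    frames = torch.rand(1, 2, 9, 11)               # one left/right frame pair from the insoles
    logits = model(frames)                         # shape (1, 5)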
Step 2.3, collecting waist pressure information. The arrangement of the waist pressure sensors is shown in fig. 11. There are 6 waist pressure sensors in total: four measure the horizontal interaction force between the patient and the waist rest of the lower limb rehabilitation robot, and two measure the patient's pressure in the vertical direction. The sampling circuit is a general 12-bit analog-to-digital conversion circuit that converts the data collected by the pressure sensors into 1024-level numerical data, which are used as the reward/punishment values (Reward) of the deep reinforcement learning model.
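As an illustrative, non-limiting sketch of how the waist pressure readings could enter the learning loop, the following Python fragment clamps 12-bit ADC samples into the 1024-level range and collapses the six readings into a single scalar penalty, so that lower pressure (better follow-up) yields a higher reward. The 2-bit right shift and the negative-mean scalarization are assumptions made for illustration; the description itself treats the six 1024-level pressure values as the reward/punishment data.

    def adc_to_level(raw_adc_value):
        """Map a 12-bit ADC sample (0..4095) into the 1024-level range (0..1023).
        The 2-bit right shift is an assumed mapping for illustration."""
        return max(0, min(1023, raw_adc_value >> 2))

    def waist_reward(adc_samples):
        """Scalar reward from the six waist pressure readings: lower pressure,
        i.e. better agreement between robot and patient, gives a higher reward."""
        levels = [adc_to_level(v) for v in adc_samples]
        return -sum(levels) / len(levels)

    # Example: four waist-rest readings plus two belt readings.
    print(waist_reward([1600, 1500, 500, 350, 240, 280]))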
Step 3, the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning. As shown in fig. 12, the patient's lower limb joint motion information, plantar pressure distribution information and waist pressure information are integrated into a state value (State), which represents the system's observation of the current environment. The robot's master control system sends control signaling (speed and direction) to the bottom-layer drive of the lower limb rehabilitation robot; the speed and direction are realized by the bottom-layer drive through a general-purpose motor driver, i.e. the motor drive system writes a speed value into the motor driver, which then controls the motor's rotating speed and torque accordingly. This serves as the behavior value (Action), representing the robot's action on the environment. The waist pressure value, as the reward/punishment value (Reward), represents the environment's feedback on the robot's current behavior: the larger the pressure value, the greater the inconsistency between the robot's behavior and the patient's; the smaller the pressure value, the higher the consistency between the robot and the patient's autonomous behavior, i.e. the better the follow-up performance.
Step 4, the deep reinforcement learning algorithm model of the lower limb rehabilitation robot. As shown in fig. 12, the invention models the follow-up control problem of the lower limb rehabilitation robot as an MDP problem: the 5 variables output by the deep network that processes the plantar pressure data, the 6 variables obtained from the attitude sensors of the lower limb joints, and the readings of the 6 waist pressure sensors together form the State of the MDP model (17 dimensions); the speed and direction of the lower limb rehabilitation robot form the behavior (Action, 2 dimensions); and the 6 waist pressure values form the reward/punishment (Reward, 6 dimensions). The MDP model is a sequential decision problem that is difficult to solve within a limited state space, so the invention provides a deep reinforcement learning method to solve it.
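As an illustrative, non-limiting sketch of this MDP interface, the following Python fragment assembles the 17-dimensional state from the 5 plantar variables, the 6 joint variables and the 6 waist pressure readings, and represents the 2-dimensional behavior as a (speed, direction) pair. The example values and units are assumptions for illustration only.

    import numpy as np

    def build_state(plantar_vars, joint_vars, waist_levels):
        """Concatenate 5 plantar variables + 6 joint variables + 6 waist readings
        into the 17-dimensional MDP state."""
        state = np.concatenate([plantar_vars, joint_vars, waist_levels]).astype(np.float32)
        assert state.shape == (17,)
        return state

    plantar = np.array([0, 1, 0, 1, 1], dtype=np.float32)             # 5 plantar variables
    joints = np.array([0, 1, 0, 0, 1, 1], dtype=np.float32)           # 6 joint variables
    waist = np.array([400, 380, 120, 90, 60, 70], dtype=np.float32)   # 6 waist pressure levels
    s = build_state(plantar, joints, waist)

    # Behavior (Action): walking-assistance speed and direction written to the motor driver.
    action = np.array([0.3, 0.05], dtype=np.float32)   # e.g. 0.3 m/s forward, slight turn (assumed units)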
Fig. 13 shows the principle of the deep reinforcement learning algorithm of the lower limb rehabilitation robot according to the invention. The environment is the experimental environment composed of the lower limb rehabilitation robot, the patient and the experimental field; s denotes the current state variable observed by the robot system, a denotes the robot's behavior variable, r denotes the reward/punishment variable, s′ denotes the state variable at the next time step, (s, a) denotes a (state, behavior) pair, Q(s, a; θ) denotes the Q value output by the current deep reinforcement learning network, and argmax_a Q(s, a; θ) selects the behavior with the maximum Q value under the current network parameters θ. The loss function is defined as:
Loss = (r + γ max_a′ Q(s′, a′; θ′) - Q(s, a; θ))²
where γ is the discount factor and θ′ denotes the parameters of the target network.
The deep network model shared by the main network and the target network in the deep reinforcement learning of fig. 13 is shown in fig. 14: its input layer matches the state space (17 dimensions) and its output layer matches the behavior space (2 dimensions). The main network updates its parameters in real time, the target network does not learn, and the parameters of the main network are copied to the target network every N time slots, thereby realizing parameter iteration.
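As an illustrative, non-limiting sketch of the main-network/target-network update described above, the following Python (PyTorch) fragment performs one training step with the loss Loss = (r + γ max_a′ Q(s′, a′; θ′) - Q(s, a; θ))² and copies the main-network parameters into the target network every N steps. The hidden-layer sizes, the discount factor, the synchronization period, and the discretization of the (speed, direction) behavior into a small candidate set (so that the network outputs one Q value per candidate action) are assumptions made for illustration and are not taken from fig. 14.

    import copy
    import torch
    import torch.nn as nn

    STATE_DIM = 17      # plantar (5) + joints (6) + waist (6)
    N_ACTIONS = 9       # assumed discretization of (speed, direction) pairs
    GAMMA = 0.9         # discount factor (assumed value)
    SYNC_EVERY = 100    # copy main -> target every N training steps (assumed N)

    class QNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),   # hidden sizes are assumptions
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS),              # one Q value per candidate action
            )

        def forward(self, s):
            return self.net(s)

    main_net = QNet()
    target_net = copy.deepcopy(main_net)               # target network starts as a copy
    optimizer = torch.optim.Adam(main_net.parameters(), lr=1e-3)

    def train_step(step, s, a, r, s_next):
        """One update of Loss = (r + gamma * max_a' Q(s', a'; theta') - Q(s, a; theta))^2."""
        q_sa = main_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + GAMMA * target_net(s_next).max(dim=1).values
        loss = ((target - q_sa) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % SYNC_EVERY == 0:                     # periodic parameter copy to target network
            target_net.load_state_dict(main_net.state_dict())
        return loss.item()

    # Illustrative call on a random batch of transitions.
    s = torch.randn(32, STATE_DIM)
    a = torch.randint(0, N_ACTIONS, (32,))
    r = -torch.rand(32)                                # scalarized waist-pressure penalty
    s_next = torch.randn(32, STATE_DIM)
    train_step(step=1, s=s, a=a, r=r, s_next=s_next)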
Like deep neural network algorithms, the deep reinforcement learning algorithm is a well-established algorithm in the current machine learning field; the algorithm model itself and its learning and update method are not the subject matter claimed by the invention.
The invention is characterized by: (1) modeling the follow-up control problem of the lower limb rehabilitation robot as an MDP problem; (2) solving the MDP model with deep reinforcement learning. The specific solving process defines the state space (S, a 17-dimensional variable), the behavior space (A, 2-dimensional) and the reward/punishment space (R, 6-dimensional) of the problem, and defines the problem to be solved as the objective function:
r* = argmin_(a∈A) E[R(s, a)]
That is, solving the problem means finding the behavior that minimizes r under (s, a): when the lower limb rehabilitation robot observes state s and adopts behavior a, the waist pressure is minimized, which means the robot stays consistent with the patient's active walking and the follow-up effect is good.
In the lower limb rehabilitation robot follow-up control method based on deep reinforcement learning of the invention, after the lower limb rehabilitation robot is initialized, the system instructs the patient's wearable joint sensors, plantar pressure sensors and waist pressure sensors to collect the patient's lower limb joint motion information, plantar pressure distribution and waist pressure information, and the collected results are filtered. The lower limb rehabilitation robot then computes and analyzes the plantar pressure data and lower limb joint data through cooperative offloading between edge computing and cloud computing. The follow-up control problem of the robot is modeled as an MDP problem: the 5 variables output by the deep network that processes the plantar pressure data, the 6 variables obtained from the attitude sensors of the lower limb joints and the readings of the 6 waist pressure sensors form the State of the MDP model (17 dimensions); the speed and direction of the robot form the behavior (Action, 2 dimensions); and the 6 waist pressure values form the reward/punishment (Reward, 6 dimensions). A deep reinforcement learning network is established to solve this problem, and the action output by the network is the speed and direction (2 dimensions) that the rehabilitation robot must execute to follow the patient. This result is sent to the STM32 bottom-layer drive control system of the lower limb rehabilitation robot, and the follow-up control of the robot is realized by controlling a general-purpose motor controller.
The feasibility of the invention has been verified experimentally, and the proposed lower limb rehabilitation robot follow-up control method based on deep reinforcement learning achieves the preset follow-up control target. In fig. 15, the solid line with solid black dots is the follow-up control method of the invention. Its mean waist pressure is far lower than that of the non-deep-reinforcement-learning method shown with hollow dots: when the lower limb rehabilitation robot is consistent with the patient's active motion intention, the mean pressure between the robot's waist fixing support and the patient's waist is very small, indicating that the robot follows the patient's motion behavior. A high waist pressure would mean that the robot is pushing the patient forward or hindering the patient's independent walking, with a high risk of secondary injury.

Claims (10)

1. A lower limb rehabilitation robot follow-up control method based on deep reinforcement learning, characterized in that the method specifically comprises the following process: while the patient performs rehabilitation training with the lower limb rehabilitation robot, the patient's lower limb joint motion information, plantar pressure distribution information and waist-belt pressure information are collected as the state value; the magnitude and direction of the walking-assistance speed that the lower limb rehabilitation robot applies to the patient are taken as the action value; the interaction pressure between the waist belt worn by the patient and the lumbar support of the lower limb rehabilitation robot is taken as the reward/punishment value; and a deep reinforcement learning model is constructed to learn the patient's rehabilitation behavior during training with the robot, so that the lower limb rehabilitation robot can assist walking by following the patient's autonomous motion intention.
2. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 1, characterized in that: the plantar pressure distribution information is acquired by adopting a flexible plantar pressure sensing device.
3. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 2, characterized in that: the flexible plantar pressure sensor has 99 effective independent sensing points, and one plantar pressure sensor is worn on each foot.
4. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 3, characterized in that: when the flexible plantar pressure sensor collects plantar pressure information, each acquisition yields two frames of plantar pressure distribution data, each frame being a 9 × 11 numerical matrix, and at least 10 frames are collected per second.
5. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 1, characterized in that: the lower limb joint motion information comprises hip joint, knee joint and ankle joint motion information; wireless nine-axis attitude sensors are used to collect the attitude motion information of the hip, knee and ankle joints of the lower limbs and are worn at the patient's hip, knee and ankle joints respectively, with six wireless nine-axis attitude sensors worn on the two legs in total.
6. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 5, characterized in that: the wireless nine-axis attitude sensor respectively senses the acceleration, the angular acceleration and the angle change of the joint motion of the human body in three vector directions.
7. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 1, characterized in that: the waistband pressure information is measured by a pressure sensor arranged on the waist rest of the lower limb rehabilitation robot and a pressure sensor arranged on the waistband.
8. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 7, characterized in that: the number of the pressure sensors on the waist rest is four; the number of sensors on the belt is two.
9. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 7, characterized in that: pressure values collected by the pressure sensors on the waist rest and on the waist belt are converted into 1024-level numerical data by a 12-bit analog-to-digital conversion circuit, and these data are the reward/punishment values of the deep reinforcement learning model.
10. The lower limb rehabilitation robot follow-up control method based on deep reinforcement learning according to claim 7, characterized in that: as the reward/punishment value, the waist pressure represents the environment's feedback on the current behavior of the lower limb rehabilitation robot: the larger the pressure value, the greater the inconsistency between the robot's behavior and the patient's; the smaller the pressure value, the higher the consistency between the robot and the patient's autonomous behavior, and the better the follow-up performance.
CN202111506488.1A 2021-12-10 2021-12-10 Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning Active CN114344093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111506488.1A CN114344093B (en) 2021-12-10 2021-12-10 Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111506488.1A CN114344093B (en) 2021-12-10 2021-12-10 Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning

Publications (2)

Publication Number Publication Date
CN114344093A (en) 2022-04-15
CN114344093B CN114344093B (en) 2023-10-03

Family

ID=81099552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111506488.1A Active CN114344093B (en) 2021-12-10 2021-12-10 Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN114344093B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011115323A (en) * 2009-12-02 2011-06-16 Nakamura Sangyo Gakuen Walking assist robot
CN109953761A (en) * 2017-12-22 2019-07-02 浙江大学 A kind of lower limb rehabilitation robot sensory perceptual system and motion intention inference method


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118588289A (en) * 2024-08-07 2024-09-03 江西求是高等研究院 Mechanical arm rehabilitation data prediction method and system based on deep learning

Also Published As

Publication number Publication date
CN114344093B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN111557828B (en) Active stroke lower limb rehabilitation robot control method based on healthy side coupling
CN105796286B (en) Use the lower limb exoskeleton robot control method of air bag sensor
Chaparro-Cárdenas et al. A review in gait rehabilitation devices and applied control techniques
CN114366556B (en) Multimode training control system and method for lower limb rehabilitation
US20180289579A1 (en) Powered Walking Assistant and Associated Systems and Methods
US20100312152A1 (en) Smart gait rehabilitation system for automated diagnosis and therapy of neurologic impairment
CN110507322B (en) Myoelectricity quantitative state evaluation system and method based on virtual induction
US20140100494A1 (en) Smart gait rehabilitation system for automated diagnosis and therapy of neurologic impairment
CN104013513A (en) Rehabilitation robot sensing system and method
CN111588597A (en) Intelligent interactive walking training system and implementation method thereof
Ohnuma et al. Development of JARoW-II active robotic walker reflecting pelvic movements while walking
CN107536613A (en) Robot and its human body lower limbs Gait Recognition apparatus and method
CN111408042A (en) Functional electrical stimulation and lower limb exoskeleton intelligent distribution method, device, storage medium and system
CN105852866A (en) Wearable sensing system and measuring method for knee-joint adduction torque measurement
Dinovitzer et al. Accurate real-time joint torque estimation for dynamic prediction of human locomotion
CN114344093B (en) Lower limb rehabilitation robot follow-up control method based on deep reinforcement learning
Kantharaju et al. Framework for personalizing wearable devices using real-time physiological measures
CN114366557A (en) Man-machine interaction system and method for lower limb rehabilitation robot
Tiseo Modelling of bipedal locomotion for the development of a compliant pelvic interface between human and a balance assistant robot
Chen et al. Mobile robot assisted gait monitoring and dynamic margin of stability estimation
Kanjanapas et al. 7 degrees of freedom passive exoskeleton for human gait analysis: Human joint motion sensing and torque estimation during walking
Chen et al. Control strategies for lower limb rehabilitation robot
CN116531225A (en) Rehabilitation training device, rehabilitation training control method and storage medium
CN114831784A (en) Lower limb prosthesis terrain recognition system and method based on multi-source signals
Maddalena et al. An optimized design of a parallel robot for gait training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant