CN112057212A - Artificial limb system based on deep learning - Google Patents
Artificial limb system based on deep learning
- Publication number: CN112057212A
- Application number: CN202010769312.4A
- Authority
- CN
- China
- Prior art keywords
- deep learning
- module
- communication module
- control subsystem
- communication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/68—Operating or control means
- A61F2/70—Operating or control means electrical
- A61F2/72—Bioelectric control, e.g. myoelectric
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/68—Operating or control means
- A61F2/70—Operating or control means electrical
- A61F2002/704—Operating or control means electrical computer-controlled, e.g. robotic control
Abstract
An embodiment of the invention provides a deep-learning-based prosthesis system comprising an electromyography sensor, a control subsystem, a first communication module, a second communication module and a prosthesis assembly, the electromyography sensor being connected to the control subsystem. The control subsystem comprises an acquisition unit and an identification unit: the acquisition unit acquires the electromyographic signals collected by the electromyography sensor, and the identification unit identifies the electromyographic signals based on a deep learning model to obtain an identification result. The first communication module sends the identification result to the prosthesis assembly, and the second communication module communicatively connects the control subsystem with a terminal device. Embodiments of the invention can expand the application range of the prosthesis system and improve its operating effect.
Description
Technical Field
The invention relates to the technical field of artificial limbs, in particular to an artificial limb system based on deep learning.
Background
A prosthesis generally refers to an artificial limb specifically designed and manufactured to be fitted to an amputee, compensating for a fully or partially lost limb. As technology advances, prostheses can perform more and more actions according to the user's wishes. Taking the myoelectric prosthesis as an example, it identifies the electromyographic signals collected from the user and generates a corresponding identification result, which in turn drives an actuator to perform the corresponding action. In the prior art, the locally collected electromyographic signals must be sent to a cloud server for identification, after which the identification result is sent back; the operation of the myoelectric prosthesis is therefore easily affected by the condition of the communication network, and the operating effect is poor.
Disclosure of Invention
An embodiment of the invention provides a deep-learning-based prosthesis system, aiming to solve the problem that the operation of existing myoelectric prostheses is easily affected by communication network conditions and therefore yields a poor operating effect.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a prosthesis system based on deep learning, including an electromyography sensor, a control subsystem, a first communication module, a second communication module, and a prosthesis assembly;
the control subsystem comprises an acquisition unit and an identification unit, the acquisition unit is used for acquiring the electromyographic signals acquired by the electromyographic sensor, and the identification unit is used for identifying the electromyographic signals based on a deep learning model to obtain an identification result;
the first communication module is used for sending the identification result to the prosthesis assembly, and the second communication module is used for communicatively connecting the control subsystem with a terminal device.
The deep-learning-based prosthesis system provided by the embodiment of the invention comprises an electromyography sensor, a control subsystem, a first communication module, a second communication module and a prosthesis assembly. The control subsystem comprises an acquisition unit, which acquires the electromyographic signals collected by the electromyography sensor, and an identification unit, which identifies the electromyographic signals based on a deep learning model to obtain an identification result; the first communication module sends the identification result to the prosthesis assembly. Meanwhile, the second communication module communicatively connects the control subsystem with the terminal device, giving the deep-learning-based prosthesis system the capability of secondary development.
Drawings
FIG. 1 is a schematic structural diagram of a deep learning-based prosthesis system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of another deep learning-based prosthesis system provided by an embodiment of the invention;
FIG. 3 is a schematic structural diagram of a deep learning-based prosthesis system in a specific application scenario provided by an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments. In the following description, specific details such as specific configurations and components are provided only to help the full understanding of the embodiments of the present invention. Thus, it will be apparent to those skilled in the art that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in the present application does not denote any order, quantity, or importance; such terms are only used to distinguish one element from another. Likewise, the terms "a" or "an" and the like do not denote a limitation of quantity, but rather the presence of at least one.
As shown in fig. 1, the prosthesis system based on deep learning according to the embodiment of the present invention includes an electromyography sensor 110, a control subsystem 120, a first communication module 150, a second communication module 160, and a prosthesis assembly 130;
the control subsystem 120 comprises an acquisition unit 121 and an identification unit 122, wherein the acquisition unit 121 is used for acquiring the electromyographic signals acquired by the electromyographic sensor 110, and the identification unit 122 is used for identifying the electromyographic signals based on a deep learning model to obtain an identification result;
the first communication module 150 is configured to send the identification result to the prosthesis assembly 130, and the second communication module 160 is configured to communicatively connect the control subsystem 120 with a terminal device.
The electromyographic sensor 110 may be used to collect the electromyographic signals generated when the limb muscles contract to produce movement.
The control subsystem 120 includes an identification unit 122 configured to identify the electromyographic signal based on a deep learning model and obtain an identification result; in other words, the control subsystem 120 may include a deep learning model such as a support vector machine (SVM) or a convolutional neural network (CNN)-based recognition model. It is easy to understand that such a model is usually obtained by training an initial model on training samples; it can then recognize an input electromyographic signal, for example by classification, and finally obtain an identification result.
In a typical application scenario, the control subsystem 120 may recognize the action intention or gesture of the user wearing the electromyography sensor 110 and obtain a corresponding identification result.
The deep learning model used in the control subsystem 120 may be trained directly in the control subsystem 120, or trained on the terminal device and then written into the control subsystem 120 as kernel data or discriminator parameters; this is not limited here.
In this embodiment, the prosthesis assembly 130 is communicatively connected to the control subsystem 120 through the first communication module 150. In other words, the local control subsystem 120 obtains an identification result from the myoelectric signal and, according to that result, communicates directly with the prosthesis assembly 130 through the first communication module 150 to control it; there is no need to send the myoelectric signal to a cloud server for identification and then control the prosthesis assembly 130 through communication between the cloud server and the assembly. That is, compared with existing prostheses, the deep-learning-based prosthesis system of this embodiment can perform offline real-time identification, removing the limitation that identification must be assisted by a cloud server.
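To make the offline flow concrete, the following is a minimal sketch (not taken from the patent) of the acquire-identify-forward loop run by the local control subsystem; the callables read_emg_window, classify and send_to_prosthesis are hypothetical placeholders standing in for the sensor 110, the deep learning model and the first communication module 150.

```python
import statistics

def extract_features(window):
    """Compact per-window features; the description later mentions mean and variance."""
    return [statistics.mean(window), statistics.variance(window)]

def control_loop(read_emg_window, classify, send_to_prosthesis):
    """Acquire -> identify -> forward, entirely on-device (no cloud round trip)."""
    while True:
        window = read_emg_window()            # raw EMG samples from sensor 110
        features = extract_features(window)   # feature vector for the model
        result = classify(features)           # offline identification result
        send_to_prosthesis(result)            # via first communication module 150
```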
Of course, in this embodiment, the control subsystem 120 is further connected to the terminal device 200 through the second communication module 160, where the terminal device 200 may be at least one of a personal computer, a mobile terminal, or a cloud server.
As described above, the terminal device 200 is not necessary for recognizing the electromyographic signal. However, in some possible embodiments, if communication conditions allow and a more precise identification result is desired, the electromyographic signal may optionally be transmitted to the terminal device 200 for identification. In other possible embodiments, the control subsystem 120 may simply send data such as the electromyographic signals, the locally obtained identification results, or motion data collected from the prosthesis assembly 130 to the terminal device 200 for presentation; alternatively, the control subsystem 120 may receive update-package data and the like sent by the terminal device through the second communication module 160. In short, providing the second communication module 160 helps give the deep-learning-based prosthesis system the capability of secondary development.
In summary, the deep-learning-based prosthesis system provided by the embodiment of the invention comprises an electromyography sensor 110, a control subsystem 120, a first communication module 150, a second communication module 160 and a prosthesis assembly 130. The acquisition unit 121 of the control subsystem 120 acquires the electromyographic signals collected by the electromyography sensor 110, the identification unit 122 identifies them based on a deep learning model to obtain an identification result, and the first communication module 150 sends that result to the prosthesis assembly 130 to control it. Collection and identification of the electromyographic signal are thus completed locally instead of relying on a terminal device such as a cloud server, which enables offline identification, expands the application range of the prosthesis system and improves its operating effect. Meanwhile, the second communication module 160 communicatively connects the control subsystem 120 with the terminal device, helping give the system the capability of secondary development.
Optionally, the control subsystem 120 is a neural network processor (KPU). A KPU generally refers to a processor with an integrated artificial-neural-network accelerator; it may be used to accelerate the SVM recognition algorithm or a CNN-based recognition algorithm, shortening the processing and recognition time for the electromyographic signals and further improving the operating effect of the deep-learning-based prosthesis system.
In one example, the control subsystem 120 may be a K210 chip.
The K210 chip is a dual-core 64-bit RISC-V processor with a main frequency of 400 MHz (up to 600 MHz) and 8 MB of on-chip memory. Using the K210 as the neural network processor solves the problems of processing capacity and memory; it can also accelerate the SVM recognition algorithm or a neural-network-based recognition algorithm, shortening the time needed for data processing and gesture recognition.
In an example, the identification unit 122 is specifically configured to identify the electromyographic signals based on a support vector machine, so as to obtain an identification result.
Using the kernel trick and the maximum-margin principle, an SVM performs well on non-linear, high-dimensional tasks; identifying the electromyographic signal with an SVM in this example therefore yields a relatively accurate identification result.
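For reference, the standard soft-margin SVM formulation with an RBF kernel is shown below; this is a textbook formulation, not taken from the patent, with c the penalty coefficient and g the kernel-function parameter discussed later in the description.

```latex
% Soft-margin objective (c = penalty coefficient):
\min_{w,b,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + c\sum_{i=1}^{n}\xi_{i}
\quad\text{s.t.}\quad y_{i}\bigl(w^{\top}\phi(x_{i})+b\bigr)\ge 1-\xi_{i},\quad \xi_{i}\ge 0.

% RBF kernel (g = kernel-function parameter) and resulting decision function:
K(x_{i},x_{j}) = \exp\bigl(-g\,\lVert x_{i}-x_{j}\rVert^{2}\bigr),
\qquad
f(x) = \operatorname{sign}\Bigl(\textstyle\sum_{i\in\mathrm{SV}}\alpha_{i}y_{i}\,K(x_{i},x) + b\Bigr).
```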
Optionally, the control subsystem 120 further includes an updating unit 123, which updates the parameters of the deep learning model according to update data received by the second communication module 160 from the terminal device.
It is easy to understand that a terminal device such as a cloud server has, on the one hand, stronger computing capability and, on the other hand, may be connected to different control subsystems 120 and thus obtain more raw data. The terminal device can therefore train the SVM better, obtain the updated SVM parameters, generate update data based on those parameters, and send the update data to the control subsystem 120.
On the prosthesis-system side, the second communication module 160 receives the update data from the terminal device, and the control subsystem 120 may then update the parameters of the local SVM accordingly.
Generally, the main parameters of an SVM are the penalty coefficient c and the kernel-function parameter g. A neural network likewise has corresponding network parameters; when a neural-network-based recognition algorithm is written into the control subsystem 120, those parameters may also be updated according to the relevant update data from the terminal device, which is not repeated here.
In this embodiment, providing the updating unit helps the control subsystem 120 update the parameters of the deep learning model according to update data from the terminal device, improving its recognition accuracy for electromyographic signals. Moreover, the deep learning model can be optimized without being trained locally in the control subsystem 120, making the parameter-updating process more convenient.
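As a minimal sketch of how updating unit 123 might ingest such update data, the snippet below assumes a hypothetical JSON payload carrying the finished SVM parameters; the patent does not specify a format, and the field names are illustrative. The terminal does the training, so the device only swaps parameters in.

```python
import json

def apply_update(update_bytes):
    """Load terminal-trained SVM parameters from update data received over the
    second communication module. The payload layout is an assumption."""
    update = json.loads(update_bytes)
    return {
        "c": update["c"],                 # penalty coefficient
        "g": update["g"],                 # RBF kernel-function parameter
        "support_vectors": update["sv"],  # list of support-vector features
        "dual_coef": update["alpha_y"],   # alpha_i * y_i per support vector
        "bias": update["b"],              # decision-function bias
    }  # the returned model replaces the old local one
```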
In one example, the second communication module 160 includes at least one of a WiFi module, a 5G module, a 4G module, and a 3G module.
In some possible application examples, the second communication module 160 may use an ESP8285 WiFi chip to join a wireless network and access the Internet. The back end uses HTTP as the main communication protocol with WebSocket as an auxiliary protocol, adopts Java Spring as the back-end development framework and MongoDB as the database, periodically uploads data to the network side for storage, and exchanges information with an APP built in the QT development environment on the mobile terminal 200.
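The periodic upload could look like the following sketch, written with the Python standard library; the endpoint URL and JSON layout are assumptions for illustration, not the patent's actual back-end API.

```python
import json
import time
import urllib.request

def upload_periodically(get_batch, url="http://example.com/api/emg", period_s=60):
    """Periodically POST a batch of EMG data to the back end for storage."""
    while True:
        payload = json.dumps({"emg": get_batch()}).encode("utf-8")
        req = urllib.request.Request(
            url,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()   # the Java Spring back end stores the batch in MongoDB
        time.sleep(period_s)
```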
In one example, the first communication module 150 is at least one of a Bluetooth module and a ZigBee module. Of course, in some possible application scenarios, the first communication module 150 may also be another wireless communication module such as an LPWAN module, or a wired communication module such as a serial communication interface module, a parallel communication interface module or an optical-fiber communication module.
Optionally, the prosthesis assembly 130 includes a main control chip 131 and a memory chip 132, the main control chip 131 is electrically connected to the memory chip 132, and the main control chip 131 is further electrically connected to the first communication sub-module 151 included in the first communication module 150;
the second communication sub-module 152 included in the first communication module 150 is electrically connected to the control subsystem 120, and the second communication sub-module 152 is matched with the first communication sub-module 151.
In this embodiment, the main control chip 131 may communicate with the control subsystem 120, for example, in a possible application scenario, the main control chip 131 may generate an instruction for controlling an associated actuator in the prosthesis assembly 130 according to the received recognition result from the control subsystem 120.
The memory chip 132 may be used to store various types of motion data, such as motion data of the actuator.
In one example, the main control chip 131 may adopt an STM32F103RBT6 chip, externally connected to a memory chip 132 of model 25Q64.
As shown in fig. 3, in a specific application scenario, the first communication module 150 includes a first communication sub-module 151 and a second communication sub-module 152, electrically connected to the main control chip 131 and the control subsystem 120, respectively.
Taking the first communication module 150 as a bluetooth module as an example, the first communication sub-module 151 and the second communication sub-module 152 may be FSC-BT826EN bluetooth modules.
Optionally, as shown in fig. 2, the above deep-learning-based prosthesis system further includes an analog-to-digital converter 140, and the electromyography sensor 110 is electrically connected to the control subsystem 120 through the analog-to-digital converter 140.
Optionally, a plurality of electromyography sensors 110 are provided. In practical applications, two or more electromyography sensors 110 can be arranged on the user's limb so that the electromyographic signals are collected redundantly, enhancing the reliability of the deep-learning-based prosthesis system. Alternatively, sensors can be arranged at different positions and identification performed on the electromyographic signals from those positions, improving the accuracy of the identification result.
In a specific application scenario, as shown in fig. 2, the electromyography sensor 110 may be an OYMotion sensor used to collect the electromyographic signal of the surface of the user's forearm: two OYMotion sensors collect the forearm-surface signals, an AD7705 converts the analog signals into digital signals, and the data are sent over an SPI bus to the K210 processor for processing.
Optionally, the prosthesis assembly 130 includes a prosthesis body including a resin material portion and a metal material portion, the metal material portion including a movable joint member and a primary force receiving member.
In a specific application scenario, the prosthesis body may be a myoelectric prosthetic palm. Most of its mechanical structure is produced by 3D printing: parts with low strength requirements are built from resin, while the movable joints and load-bearing points are made of metal. This reduces the manufacturing cost of the prosthesis body while ensuring practicality.
With the hardware topology of the deep-learning-based prosthesis system provided by the embodiment of the invention, further refined in structure and method as above, the following working process can be realized:
the method comprises the steps of collecting a forearm surface electromyographic signal by using two OYMotion sensors, converting an analog signal into a digital signal through an AD7705, sending data to a K210 processor by adopting an SPI bus, eliminating power supply noise in original data by using a 50Hz common frequency trap in the K210, processing the data by using a 20Hz high-pass filter and a 150Hz low-pass filter, and extracting the electromyographic signal of 20Hz-150 Hz.
In the two cases, both the mean and the variance of the electromyographic signal differ. Each action comprises 30 groups of data, with one thousand feature vectors per line of data; part of the lines are extracted and combined as the training set, and the rest are used as the test set. The training set is normalized to the range [-1, 1] dimension by dimension and the normalization mapping is recorded; a radial basis function (RBF) kernel is selected, and a grid search is used to find the optimal parameters c (penalty coefficient) and g (kernel-function parameter). The libsvmtrain function then generates a classification model from the training set; once the target recognition rate is reached, the generated classification model is exported and transplanted into the K210 memory as a structure.
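A sketch of this training pipeline, substituting scikit-learn for the MATLAB-style libsvmtrain call (the grid values and cross-validation setup are illustrative assumptions):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def train_emg_classifier(X_train, y_train):
    """Normalize to [-1, 1] per dimension, grid-search c and g for an RBF
    SVM, and return the recorded normalization mapping plus the best model."""
    scaler = MinMaxScaler(feature_range=(-1, 1))
    Xs = scaler.fit_transform(X_train)
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={
            "C": [2 ** k for k in range(-5, 11, 2)],      # penalty coefficient c
            "gamma": [2 ** k for k in range(-15, 4, 2)],  # kernel parameter g
        },
        cv=5,
    )
    grid.fit(Xs, y_train)
    return scaler, grid.best_estimator_
```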
After feature extraction, the collected 20 Hz-150 Hz electromyographic signal is evaluated against the trained SVM kernel data written into the K210 memory, realizing offline real-time identification. An FSC-BT826EN Bluetooth module is connected to a serial port of the K210 with the baud rate set to 115200, realizing short-range communication between the main control and the mechanical palm.
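Evaluating the stored "kernel data" on-device amounts to computing the RBF decision function from the support vectors, dual coefficients and bias. A sketch follows, reusing the hypothetical model dictionary from the update-unit example above; the exact in-memory layout on the K210 is not given in the patent.

```python
import math

def rbf_decision(x, model):
    """Binary offline identification from stored SVM parameters."""
    s = model["bias"]
    for sv, ay in zip(model["support_vectors"], model["dual_coef"]):
        d2 = sum((xi - si) ** 2 for xi, si in zip(x, sv))
        s += ay * math.exp(-model["g"] * d2)   # alpha_i*y_i * K(sv_i, x)
    return 1 if s >= 0 else -1
```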
An ESP8285 WiFi chip is connected to a serial port of the K210. The computer, the mobile phone and the ESP8285 are set to the same router address, so that connecting the ESP8285 to this wireless network enables remote data interaction with the computer and mobile-phone ends. The back end uses HTTP as the main communication protocol with WebSocket as an auxiliary protocol, adopts Java Spring as the back-end development framework and MongoDB as the database, periodically uploads data to the network side for storage, and exchanges information with the APP built in the QT development environment.
The computer-side APP uses an initWidget() function to initialize the interface and the serial port, with several interface functions encapsulated inside for drawing the interface. A connect() call links the readyRead() signal of the serial-port class to the encapsulated readData() slot, which reads the incoming data and stores it in a list container. A chartInit() function initializes the chart interface, encapsulating the chart's interface functions and drawing the coordinate axes; the data in the container are then drawn on the chart with the append() function of the QCharts class, producing the data spectrogram. Data are transmitted with a writeData() function, which wraps the write() function of the QSerialPort class, sending the data to the serial port for the lower computer to receive. For uploading files to the server, the software uses the POST method with the Qt QNetworkAccessManager class: it first calls setServerAddr() to set the server address, then calls setPostFilePath() to set the location of the file to upload, and finally calls the start interface to begin uploading; if the server returns data, the replyFinished() function processes it.
The tool used for Android APP development on the mobile-phone end can be Android Studio, with Square's OkHttp library uploading the data acquired on the Android end to the cloud server. The OkHttp library has a simpler interface encapsulation than the traditionally used HttpURLConnection and HttpClient libraries. First, the OkHttp dependency is added to the dependencies closure (referenced with implementation on AS 3.0 and above, and with compile below AS 3.0). An OkHttpClient instance is then created in the activity, and a RequestBody object is built to hold the parameters to be submitted (such as pictures, or data in json, xml or txt format). A Request instance is then created: its url() method is called to set the address, the RequestBody object holding the parameters is passed to the post() method, and the data are finally sent to the server as a request.
The mechanical palm uses an STM32F103RBT6 chip as the main control. A 25Q64 memory chip is connected via SPI to expand storage for a large amount of action data. A serial port connects an FSC-BT826EN Bluetooth module with the baud rate set to 115200, and a communication protocol of the form {0xAA, 0xAA, data, ..., data} (two check bytes and a six-byte data field) is used to communicate with the main control. High-torque metal servos power the mechanical fingers, the finger-joint parts achieve multi-degree-of-freedom movement through link transmission, and an IIC interface and an analog-to-digital conversion interface are reserved for subsequent development.
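Framing and sending a command under this protocol might look like the sketch below (using pyserial); the six-byte data field follows the description above, but since the original protocol wording is ambiguous, treat the exact layout as an assumption.

```python
import serial  # pyserial

def send_to_palm(port, payload):
    """Send one {0xAA, 0xAA, data x 6} frame to the mechanical palm at 115200 baud."""
    assert len(payload) == 6, "protocol described above uses a six-byte data field"
    frame = bytes([0xAA, 0xAA]) + bytes(payload)   # two check bytes + data bytes
    with serial.Serial(port, baudrate=115200, timeout=1) as s:
        s.write(frame)

# Example with a hypothetical gesture code:
# send_to_palm("/dev/ttyUSB0", [1, 0, 0, 0, 0, 0])
```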
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (10)
1. A prosthesis system based on deep learning, characterized by comprising an electromyography sensor, a control subsystem, a first communication module, a second communication module and a prosthesis assembly;
the control subsystem comprises an acquisition unit and an identification unit, the acquisition unit is used for acquiring the electromyographic signals acquired by the electromyographic sensor, and the identification unit is used for identifying the electromyographic signals based on a deep learning model to obtain an identification result;
the first communication module is used for sending the identification result to the prosthesis assembly, and the second communication module is used for communicatively connecting the control subsystem with a terminal device.
2. The deep learning-based prosthetic system of claim 1, wherein the control subsystem is a neural network processor.
3. The deep learning-based prosthetic system of claim 2, wherein the neural network processor is a K210 chip.
4. The deep learning based prosthetic system of any one of claims 1-3, wherein the identification unit is specifically configured to identify the electromyographic signals based on a support vector machine, resulting in an identification result.
5. The deep learning-based prosthetic system of claim 1, wherein the control subsystem further comprises an update unit for updating parameters of the deep learning model based on update data received by the second communication module from the terminal device.
6. The deep learning-based prosthesis system according to claim 1, wherein the first communication module is at least one of a bluetooth module and a ZigBee module.
7. The deep learning-based prosthetic system of claim 1, wherein the second communication module comprises at least one of a WiFi module, a 5G module, a 4G module, and a 3G module.
8. The deep learning-based prosthetic system of claim 1, wherein the prosthetic component comprises a main control chip and a memory chip, the main control chip being electrically connected to the memory chip, the main control chip being further electrically connected to a first communication sub-module included in the first communication module;
the second communication sub-module included in the first communication module is electrically connected with the control subsystem, and the second communication sub-module is matched with the first communication sub-module.
9. The deep learning-based prosthetic system of claim 8, wherein the master control chip is an STM32F103RBT6 chip.
10. The deep learning-based prosthesis system of claim 1, wherein the prosthesis assembly includes a prosthesis body including a resin material portion and a metal material portion, the metal material portion including a movable joint member and a primary force receiving member.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010769312.4A CN112057212A (en) | 2020-08-03 | 2020-08-03 | Artificial limb system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010769312.4A CN112057212A (en) | 2020-08-03 | 2020-08-03 | Artificial limb system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112057212A (en) | 2020-12-11 |
Family
ID=73657658
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010769312.4A Pending CN112057212A (en) | 2020-08-03 | 2020-08-03 | Artificial limb system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112057212A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113730054A (en) * | 2021-09-13 | 2021-12-03 | 桂林电子科技大学 | Method for controlling gripping force of myoelectric artificial limb |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030071971A (en) * | 2002-03-05 | 2003-09-13 | 비티비엑세스코리아(주) | The internet transformation system which uses the carrying bluetooth terminal and the PC which are affixed and service method |
KR20030096146A (en) * | 2003-11-19 | 2003-12-24 | (주)주앤아이시스템 | The method of implementation and system of mobile collaboration using the wireless internet and personal digital assistant |
KR100778799B1 (en) * | 2007-03-28 | 2007-11-23 | 주식회사 데브구루 | Internet access method and system |
CN101987047A (en) * | 2009-08-03 | 2011-03-23 | 深圳先进技术研究院 | Artificial limb control system and method based on voice and myoelectricity information identification |
US20110320668A1 (en) * | 2010-06-28 | 2011-12-29 | Huawei Device Co., Ltd. | Wireless internet access device, sd control chip, and method for data communication |
CN202288542U (en) * | 2011-10-25 | 2012-07-04 | 中国科学院深圳先进技术研究院 | Artificial limb control device |
CN103892945A (en) * | 2012-12-27 | 2014-07-02 | 中国科学院深圳先进技术研究院 | Myoelectric prosthesis control system |
CN204086894U (en) * | 2014-09-23 | 2015-01-07 | 常州信息职业技术学院 | Based on the long-distance intelligent control device of Ethernet and ZigBee |
CN105943206A (en) * | 2016-06-01 | 2016-09-21 | 上海师范大学 | Prosthetic hand control method based on MYO armlet |
CN106236336A (en) * | 2016-08-15 | 2016-12-21 | 中国科学院重庆绿色智能技术研究院 | A kind of myoelectric limb gesture and dynamics control method |
CN108919711A (en) * | 2018-07-11 | 2018-11-30 | 燕山大学 | A kind of remote information interactive system based on built-in Linux |
CN110763927A (en) * | 2018-07-25 | 2020-02-07 | 南京市嘉隆电气科技有限公司 | In-situ equipment debugging method based on wireless communication |
CN111317600A (en) * | 2018-12-13 | 2020-06-23 | 深圳先进技术研究院 | Artificial limb control method, device, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201211 |