
CN113576527A - Method for judging ultrasonic input by using voice control - Google Patents

Method for judging ultrasonic input by using voice control

Info

Publication number
CN113576527A
Authority
CN
China
Prior art keywords
sound
input
voice
fingerprint
useful signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110994457.9A
Other languages
Chinese (zh)
Inventor
余锦华
汪源源
邓寅晖
童宇宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202110994457.9A
Publication of CN113576527A
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/46 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/467 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention provides a method for judging ultrasonic input by using voice control, which comprises the following steps: S1, an operator holds an ultrasonic probe carrying a plurality of microphone devices, which receive sound input in real time to obtain a plurality of voice signals; whether a sound input exists is then judged, and when no sound input exists the microphones continue to receive sound input, while when a sound input exists the next step is carried out; S2, ICA analysis is carried out on the plurality of voice signals to extract the useful signal; S3, the sound source of the extracted useful signal is located and its position is judged; when the position is judged not to be within the preset range the method returns to S1, and when it is judged to be within the preset range the next step is carried out; S4, the sound fingerprint of the useful signal is judged; when it is judged not to be the sound fingerprint of the operator the method returns to S1, and when it is judged to be the sound fingerprint of the operator the next step is carried out; and S5, the voice content of the useful signal is recognized, and the ultrasonic judgment operation required by the voice content is completed.

Description

Method for judging ultrasonic input by using voice control
Technical Field
The invention relates to an ultrasonic input judgment method, in particular to a method for judging ultrasonic input by using voice control.
Background
Ultrasound imaging is currently the most common non-invasive method for medical image-based judgment. Because ultrasound equipment is relatively inexpensive (compared with magnetic resonance or CT) and easy to operate, it plays a very important role in an increasing number of medical scenarios. Conventionally, an ultrasound apparatus is a large desktop device consisting of two parts, a host and an ultrasonic probe. On the host, the user often needs to operate the keyboard and enter input frequently while simultaneously scanning with the ultrasonic probe, and in this way completes the required ultrasonic judgment.
With the increasing popularity of ultrasound devices and the development of the technology, handheld ultrasound is gradually being applied in many medical settings because of its more convenient size. It generally consists of an ultrasonic probe and a smart device connected to it. When using handheld ultrasound, the operator must hold the ultrasonic probe in one hand and the smart device in the other. This often prevents complex ultrasound operations: some basic judgment operations cannot be completed as conveniently as on the operation panel of a traditional console.
To increase the operability of handheld devices, some ultrasound manufacturers place keys on the probe. Although this increases the number of available inputs, the effect of pressing the keys on the ultrasonic probe cannot be avoided. Ultrasound imaging often works at a scale of hundreds of microns, and the slight shaking caused by pressing a key can introduce large deviations in the ultrasonic image data, affecting both the judgment and the performance of clinical algorithms based on those data. Furthermore, when a handheld device is used, both of the operator's hands are already occupied by the smart device and the ultrasonic probe. Other input modes therefore need to be introduced to control the ultrasound equipment during use. Ultrasonic diagnosis is usually performed in a relatively dark environment, so methods based on optical recognition are not suitable. A clinically practical way is needed to increase the operational input of a handheld ultrasound device without creating potential disturbance.
Disclosure of Invention
The present invention was made to solve the above problems, and its object is to provide a method for judging ultrasonic input by using voice control.
The invention provides a method for judging ultrasonic input by using voice control, which is characterized by comprising the following steps: step S1, an operator holds an ultrasonic probe carrying a plurality of microphone devices, which receive sound input in real time to obtain a plurality of voice signals; whether a sound input exists is then judged, and when no sound input exists the microphones continue to receive sound input, while when a sound input exists the next step is carried out; step S2, ICA analysis is carried out on the plurality of voice signals to extract the useful signal; step S3, the sound source of the extracted useful signal is located and its position is judged; when the position is judged not to be within the preset range the method returns to step S1, and when it is judged to be within the preset range the next step is carried out; step S4, the sound fingerprint of the useful signal within the preset range is judged; when it is judged not to be the sound fingerprint of the operator the method returns to step S1, and when it is judged to be the sound fingerprint of the operator the next step is carried out; step S5, the voice content of the useful signal bearing the operator's sound fingerprint is recognized, and the ultrasonic judgment operation required by the voice content is completed.
The method for judging ultrasonic input by using voice control provided by the invention can also have the following characteristics: in step S1, the number of microphones is equal to or greater than 3.
The method for judging ultrasonic input by using voice control provided by the invention can also have the following characteristics: in step S2, the useful signal is Gaussian distributed; the useful signal is the sound signal, and the non-useful signal is the background noise signal.
The method for judging ultrasonic input by using voice control provided by the invention can also have the following characteristics: in step S3, positioning in three-dimensional space is performed using the time differences with which the useful signal arrives at the different sound input devices, and the preset range is a distance of less than 2 meters from the operator and an angle of 60° or less relative to the operator.
The method for judging ultrasonic input by using voice control provided by the invention can also have the following characteristics: in step S4, the voice fingerprint is determined using a machine learning algorithm.
The method for judging ultrasonic input by using voice control provided by the invention can also have the following characteristics: in step S5, the speech content is recognized by a speech recognition algorithm.
Action and Effect of the Invention
According to the method for judging ultrasonic input by using voice control, the influence of other background noise can be avoided; and because the method confirms that it was the operator who uttered the correct voice input, that input does not affect other handheld ultrasound devices that may be nearby, and interference from other sounds is also overcome.
Therefore, the method for judging ultrasonic input by using voice control is convenient to operate and use, improves working efficiency, fits the actual clinical workflow, and avoids disturbance. In addition, with this method a user can define various voice control instructions that do not require the use of both hands, and the corresponding instructions can be executed correctly under practical conditions, effectively supporting the use of handheld ultrasound devices.
Drawings
FIG. 1 is a flow chart of a method for determining ultrasonic input using voice control in an embodiment of the present invention.
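For illustration only, and not as part of the patent, the flow of FIG. 1 can be sketched as the following Python control loop; every name passed in (read_frames, has_sound_input, separate_speech, locate_source, is_operator, recognize, execute) is a hypothetical placeholder for one of steps S1 to S5, which are described in detail below.

```python
def control_loop(read_frames, has_sound_input, separate_speech,
                 locate_source, is_operator, recognize, execute):
    """Each argument is a callable standing in for one step of the method."""
    while True:
        frames = read_frames()                          # S1: real-time multi-microphone capture
        if not has_sound_input(frames):
            continue                                    # no sound input: keep listening (S1)
        useful = separate_speech(frames)                # S2: ICA extraction of the useful signal
        angle_deg, distance_m = locate_source(frames)   # S3: sound source localization
        if not (distance_m < 2.0 and abs(angle_deg) <= 60.0):
            continue                                    # outside the preset range: back to S1
        if not is_operator(useful):                     # S4: sound fingerprint check
            continue                                    # not the operator: back to S1
        execute(recognize(useful))                      # S5: recognize the content, run the operation
```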
Detailed Description
In order to make the technical means and functions of the present invention easy to understand, the present invention is specifically described below with reference to the embodiments and the accompanying drawings.
Embodiment:
As shown in FIG. 1, the present invention provides a method for judging ultrasonic input by using voice control, comprising the following steps:
Step S1: the operator holds an ultrasonic probe carrying a plurality of microphone devices, which receive sound input in real time to obtain a plurality of voice signals; it is then judged whether a sound input exists. If no sound input exists, the microphones continue to receive sound; if a sound input exists, the method proceeds to the next step.
In this embodiment, the number of microphones is greater than or equal to 3, and they may be implemented with an external device. A sound input is considered to exist when the amplitude of the voice signal reaches a certain threshold; the threshold is set according to the specific conditions.
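As a minimal sketch of this amplitude check (the RMS measure, the 0.02 threshold, the sampling rate, and the test signals below are illustrative assumptions, not values given by the patent):

```python
import numpy as np

def has_sound_input(frames: np.ndarray, threshold: float = 0.02) -> bool:
    """frames: (n_mics, n_samples) real-time audio, values in [-1, 1]."""
    # Trigger if any microphone's RMS amplitude exceeds the threshold,
    # since the operator may be closer to one microphone than the others.
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return bool(np.any(rms > threshold))

# Example with three microphones and 0.1 s of audio at 16 kHz.
rng = np.random.default_rng(0)
noise_only = 0.001 * rng.standard_normal((3, 1600))
with_voice = noise_only.copy()
with_voice[1] += 0.1 * np.sin(2 * np.pi * 200.0 * np.arange(1600) / 16000.0)
print(has_sound_input(noise_only))   # False -> keep listening in step S1
print(has_sound_input(with_voice))   # True  -> proceed to step S2
```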
In step S2, ICA analysis is performed on the plurality of speech signals to extract useful signals.
Specifically, this embodiment uses a plurality of sound input devices simultaneously, and each microphone device independently acquires sound input data, so multiple streams of real-time sound data are obtained at the same time. Based on such data, an independent component analysis (ICA) algorithm is applied; using the condition that the useful signal is Gaussian distributed while the non-useful signal is non-Gaussian distributed, the sound signal and the background noise signal are separated, and subsequent processing is performed on the sound signal only, which solves the problem of background noise.
In this embodiment, the useful signal is Gaussian distributed; the useful signal is the sound signal, and the non-useful signal is the background noise signal.
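A minimal sketch of this separation step using FastICA from scikit-learn is given below; the two-component setting and the kurtosis-based selection of the speech component are illustrative assumptions rather than the patent's prescription (note that FastICA itself recovers components by maximizing their non-Gaussianity).

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def separate_sources(frames: np.ndarray, n_sources: int = 2) -> np.ndarray:
    """frames: (n_mics, n_samples) observed mixtures -> (n_sources, n_samples)."""
    ica = FastICA(n_components=n_sources, random_state=0)
    # scikit-learn expects (n_samples, n_features), hence the transposes.
    return ica.fit_transform(frames.T).T

def pick_speech_component(sources: np.ndarray) -> np.ndarray:
    # Illustrative selection rule (an assumption): speech tends to be more
    # heavy-tailed than stationary background noise, so keep the component
    # with the largest absolute excess kurtosis.
    idx = int(np.argmax(np.abs(kurtosis(sources, axis=1))))
    return sources[idx]
```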
Step S3: the sound source of the extracted useful signal is located and its position is judged. If the position is determined not to be within the preset range, the process returns to step S1; if it is determined to be within the preset range, the process proceeds to the next step.
Specifically, because the invention collects the sound signals with a plurality of sound input devices, the direction of the effective sound signal can be determined. For example, when the smart device always faces the operator, the source of the effective sound can only lie in the direction the smart device faces, and subsequent processing is applied only to sound signals arriving from that direction, which avoids interference from correct voice inputs made to other nearby devices.
In step S3, positioning in three-dimensional space is performed using the time differences with which the useful signal arrives at the different sound input devices, and the preset range is a distance of less than 2 meters from the operator and an angle of 60° or less relative to the operator.
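A minimal sketch of a time-difference-of-arrival estimate for a single microphone pair follows; the far-field model, the microphone spacing parameter, and the plain cross-correlation estimator are assumptions, and full three-dimensional localization with the 2-meter distance check would need at least three non-collinear microphones.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def tdoa_seconds(sig_a: np.ndarray, sig_b: np.ndarray, fs: int) -> float:
    """Arrival delay of sig_a relative to sig_b, estimated by cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / float(fs)

def doa_angle_deg(delay_s: float, mic_spacing_m: float) -> float:
    """Direction-of-arrival angle from broadside of a two-microphone pair (far field)."""
    x = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(x)))

def within_preset_range(angle_deg: float, distance_m: float) -> bool:
    # Preset range stated in the patent: within 2 meters and at most 60 degrees.
    return distance_m < 2.0 and abs(angle_deg) <= 60.0
```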
In step S4, the sound fingerprint of the useful signal within the preset range is judged; if it is determined not to be the sound fingerprint of the operator, the process returns to step S1, and if it is determined to be the sound fingerprint of the operator, the process proceeds to the next step.
In this embodiment, a machine learning algorithm (such as a neural network, sparse representation, or another algorithm) is used to determine the sound fingerprint. The algorithm operates on the time-series signal and may be implemented with machine learning methods such as deep learning. Specifically, from the voices of multiple operators, the algorithm must learn to distinguish speakers and to find a way of extracting voice features; this extraction scheme is established automatically during learning and requires no manual intervention. Then, for each possible operator, the learned feature-extraction scheme is used to acquire and record that operator's voice features. During use, the input sound signal and its features are discriminated against the recorded ones.
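As a minimal sketch of such a sound-fingerprint check (the mean-MFCC embedding, the cosine-similarity threshold, and the use of librosa are illustrative assumptions; the patent leaves the concrete machine learning model open):

```python
import numpy as np
import librosa

def voice_embedding(signal: np.ndarray, fs: int) -> np.ndarray:
    # A crude fixed-length "sound fingerprint": the mean MFCC vector.
    mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=fs, n_mfcc=20)
    return mfcc.mean(axis=1)

def is_operator(signal: np.ndarray, fs: int,
                enrolled: np.ndarray, threshold: float = 0.85) -> bool:
    """Compare the live signal against the operator's enrolled embedding."""
    emb = voice_embedding(signal, fs)
    cos = float(np.dot(emb, enrolled)
                / (np.linalg.norm(emb) * np.linalg.norm(enrolled) + 1e-12))
    return cos >= threshold  # below threshold: not the operator, return to S1
```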
Step S5: the voice content of the useful signal bearing the operator's sound fingerprint is recognized, and the ultrasonic judgment operation required by the voice content is completed.
In this embodiment, a speech recognition algorithm is used to recognize the voice content, for example natural language processing or time-domain neural network algorithms.
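For illustration, once a transcript has been produced by any speech recognition engine, the dispatch of the ultrasonic operation could look like the following sketch; the command names and keyword matching are assumptions, not taken from the patent.

```python
from typing import Optional

# Hypothetical mapping from spoken keywords to ultrasound operations.
COMMANDS = {
    "freeze": "freeze_image",
    "unfreeze": "resume_imaging",
    "save": "store_frame",
    "measure": "start_measurement",
}

def dispatch(transcript: str) -> Optional[str]:
    text = transcript.lower()
    for keyword, operation in COMMANDS.items():
        if keyword in text:
            return operation  # hand off to the ultrasound control layer
    return None  # unrecognized content: ignore and return to step S1

print(dispatch("please freeze the image"))  # -> "freeze_image"
```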
Action and Effect of the Embodiments
In the method for judging ultrasonic input by using voice control according to this embodiment, the influence of other background noise can be avoided; and because the method confirms that it was the operator who uttered the correct voice input, that input does not affect other handheld ultrasound devices that may be nearby, and interference from other sounds is also overcome.
Therefore, the method for judging ultrasonic input by using voice control is convenient to operate and use, improves working efficiency, fits the actual clinical workflow, and avoids disturbance. In addition, with the method of this embodiment a user can define various voice control instructions that do not require the use of both hands, and the corresponding instructions can be executed correctly under practical conditions, effectively supporting the use of handheld ultrasound devices.
The above embodiments are preferred examples of the present invention, and are not intended to limit the scope of the present invention.

Claims (6)

1. A method for judging ultrasonic input by using voice control is characterized by comprising the following steps:
step S1, an operator holds an ultrasonic probe carrying a plurality of microphone devices, which receive sound input in real time to obtain a plurality of voice signals; whether a sound input exists is then judged; when no sound input exists, the microphones continue to receive sound input, and when a sound input exists, the next step is carried out;
step S2, ICA analysis is carried out on the voice signals, and useful signals are extracted;
step S3, locating the sound source of the extracted useful signal and judging its position, returning to step S1 when the position is judged not to be within the preset range, and proceeding to the next step when it is judged to be within the preset range;
step S4, judging the sound fingerprint of the useful signal in the preset range, returning to step S1 when judging that the sound fingerprint is not the sound fingerprint of the operator, and going to the next step when judging that the sound fingerprint is the sound fingerprint of the operator;
and step S5, recognizing the voice content of the useful signal bearing the sound fingerprint of the operator, and completing the ultrasonic judgment operation required by the voice content.
2. The method for determining ultrasonic input using voice control of claim 1, wherein:
in step S1, the number of microphones is equal to or greater than 3.
3. The method for determining ultrasonic input using voice control of claim 1, wherein:
in step S2, the useful signal is Gaussian distributed; the useful signal is a sound signal, and the non-useful signal is a background noise signal.
4. The method for determining ultrasonic input using voice control of claim 1, wherein:
wherein, in step S3, the positioning in three-dimensional space is performed using the time differences with which the useful signal arrives at the different sound input devices,
and the preset range is a distance of less than 2 meters from the operator and an angle of 60° or less relative to the operator.
5. The method for determining ultrasonic input using voice control of claim 1, wherein:
in step S4, the voice fingerprint is learned and determined by using a machine learning algorithm.
6. The method for determining ultrasonic input using voice control of claim 1, wherein:
in step S5, the speech content is recognized by using a voice recognition algorithm.
CN202110994457.9A 2021-08-27 2021-08-27 Method for judging ultrasonic input by using voice control Pending CN113576527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110994457.9A CN113576527A (en) 2021-08-27 2021-08-27 Method for judging ultrasonic input by using voice control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110994457.9A CN113576527A (en) 2021-08-27 2021-08-27 Method for judging ultrasonic input by using voice control

Publications (1)

Publication Number Publication Date
CN113576527A true CN113576527A (en) 2021-11-02

Family

ID=78240120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110994457.9A Pending CN113576527A (en) 2021-08-27 2021-08-27 Method for judging ultrasonic input by using voice control

Country Status (1)

Country Link
CN (1) CN113576527A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544654A (en) * 1995-06-06 1996-08-13 Acuson Corporation Voice control of a medical ultrasound scanning machine
US20030139664A1 (en) * 2002-01-17 2003-07-24 Siemens Medical Solutions Usa, Inc. Segmented handheld medical ultrasound system and method
WO2005048239A1 (en) * 2003-11-12 2005-05-26 Honda Motor Co., Ltd. Speech recognition device
US20070100615A1 (en) * 2003-09-17 2007-05-03 Hiromu Gotanda Method for recovering target speech based on amplitude distributions of separated signals
CN101427154A (en) * 2005-09-21 2009-05-06 皇家飞利浦电子股份有限公司 Ultrasound imaging system with voice activated controls using remotely positioned microphone
US20110029309A1 (en) * 2008-03-11 2011-02-03 Toyota Jidosha Kabushiki Kaisha Signal separating apparatus and signal separating method
US20150065881A1 (en) * 2013-08-29 2015-03-05 Samsung Medison Co., Ltd. Ultrasound diagnostic apparatus and method of operating the same
US20150327841A1 (en) * 2014-05-13 2015-11-19 Kabushiki Kaisha Toshiba Tracking in ultrasound for imaging and user interface
US20170071573A1 (en) * 2014-05-12 2017-03-16 Toshiba Medical Systems Corporation Ultrasound diagnostic apparatus and control method thereof
CN107862060A (en) * 2017-11-15 2018-03-30 吉林大学 A kind of semantic recognition device for following the trail of target person and recognition methods
US20180350370A1 (en) * 2017-06-01 2018-12-06 Kabushiki Kaisha Toshiba Voice processing device, voice processing method, and computer program product
US20190214011A1 (en) * 2016-10-14 2019-07-11 Samsung Electronics Co., Ltd. Electronic device and method for processing audio signal by electronic device
CN110767226A (en) * 2019-10-30 2020-02-07 山西见声科技有限公司 Sound source positioning method and device with high accuracy, voice recognition method and system, storage equipment and terminal
CN111640433A (en) * 2020-06-01 2020-09-08 珠海格力电器股份有限公司 Voice interaction method, storage medium, electronic equipment and intelligent home system
CN111839585A (en) * 2020-07-10 2020-10-30 哈尔滨理工大学 Ultrasonic probe voice control method and system for prostate intervention treatment
US20210041558A1 (en) * 2018-03-05 2021-02-11 Exo Imaging, Inc. Thumb-dominant ultrasound imaging system

Similar Documents

Publication Publication Date Title
CN107799126B (en) Voice endpoint detection method and device based on supervised machine learning
Kim et al. Audio classification based on MPEG-7 spectral basis representations
US8473099B2 (en) Information processing system, method of processing information, and program for processing information
CN110503969A (en) A kind of audio data processing method, device and storage medium
CN110600017A (en) Training method of voice processing model, voice recognition method, system and device
CN112949708B (en) Emotion recognition method, emotion recognition device, computer equipment and storage medium
CN110780741B (en) Model training method, application running method, device, medium and electronic equipment
CN104361896B (en) Voice quality assessment equipment, method and system
Wang et al. Dynamic speed warping: Similarity-based one-shot learning for device-free gesture signals
WO2023141565A1 (en) Data augmentation system and method for multi-microphone systems
CN112562723A (en) Pronunciation accuracy determination method and device, storage medium and electronic equipment
CN117116290A (en) Method and related equipment for positioning defects of numerical control machine tool parts based on multidimensional characteristics
CN116868265A (en) System and method for data enhancement and speech processing in dynamic acoustic environments
Savchenko et al. Scale-invariant modification of COSH distance for measuring speech signal distortions in real-time mode
CN113576527A (en) Method for judging ultrasonic input by using voice control
Wu et al. DMHC: Device-free multi-modal handwritten character recognition system with acoustic signal
CN116561533A (en) Emotion evolution method and terminal for virtual avatar in educational element universe
CN106098080A (en) Method and device for determining speech recognition threshold in noise environment
Ahmed et al. Detecting Replay Attack on Voice-Controlled Systems using Small Neural Networks
Dov et al. Multimodal kernel method for activity detection of sound sources
WO2022178162A1 (en) System and method for data augmentation and speech processing in dynamic acoustic environments
CN114220177A (en) Lip syllable recognition method, device, equipment and medium
Nath et al. Separation of Overlapping Audio Signals: A Review on Current Trends and Evolving Approaches
bin Sham et al. Voice Pathology Detection System Using Machine Learning Based on Internet of Things
US11367421B2 (en) Autonomous tuner for stringed instruments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211102