CN109887077B - Method and apparatus for generating three-dimensional model - Google Patents
Method and apparatus for generating three-dimensional model
- Publication number: CN109887077B
- Application number: CN201910171928.9A
- Authority
- CN
- China
- Prior art keywords
- modeled
- skeleton
- model
- medical image
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The embodiments of the present application disclose a method and apparatus for generating a three-dimensional model. One embodiment of the method comprises: acquiring a medical image to be modeled, obtained by shooting an object to be modeled; inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating skeleton parameters of an object in a medical image; and adjusting a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled. In this embodiment, the skeleton parameters are generated by the skeleton parameter generation model and the standard three-dimensional skeleton model is adjusted based on those parameters, so the three-dimensional skeleton model can be obtained quickly. Moreover, generating a three-dimensional skeleton model from the medical image for three-dimensional display makes the result more intuitive and easier to understand.
Description
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a three-dimensional model.
Background
The Gray filter converts a picture into a grayscale image; the Invert filter inverts all visual attributes of an object, including color, saturation, and brightness; the Xray filter renders an object as its highlighted outline, producing a so-called X-ray film effect.
Currently, X-ray film examination is the most common medical imaging examination. A patient's X-ray film can reflect the patient's condition. However, interpreting an X-ray film requires professional medical knowledge (for example, that of a doctor); most patients have limited three-dimensional perception and cannot determine their condition by directly observing the film. A doctor is therefore needed to examine the patient's X-ray film and explain the condition to the patient.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating a three-dimensional model.
In a first aspect, an embodiment of the present application provides a method for generating a three-dimensional model, including: acquiring a medical image to be modeled, obtained by shooting an object to be modeled; inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating skeleton parameters of an object in a medical image; and adjusting a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeletal parameter generation model includes a feature extraction network and a fitting network.
In some embodiments, inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, including: inputting the medical image to be modeled into a feature extraction network to obtain the skeleton feature of the object to be modeled; and inputting the skeleton characteristics of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
In some embodiments, the method further comprises: analyzing the medical image to be modeled, and determining the position of the abnormal part of the object to be modeled in the medical image to be modeled; performing projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and marking the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, analyzing the medical image to be modeled to determine the position of the abnormal part of the object to be modeled in the medical image to be modeled includes: inputting the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
In some embodiments, after labeling the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled, the method further includes: querying abnormality-related information corresponding to the abnormality category of the abnormal part from a pre-stored abnormality-related information set, wherein the abnormality-related information in the set comprises introduction information and improvement information for the abnormality category; and associating the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model is trained by: acquiring a training sample, wherein the training sample comprises a sample medical image and corresponding sample skeleton parameters; and taking the sample medical image as input, taking the sample skeleton parameter as output, and training to obtain a skeleton parameter generation model.
In a second aspect, an embodiment of the present application provides an apparatus for generating a three-dimensional model, including: an acquisition unit configured to acquire a medical image to be modeled obtained by shooting an object to be modeled; a generating unit configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating skeleton parameters of an object in a medical image; and an adjusting unit configured to adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeletal parameter generation model includes a feature extraction network and a fitting network.
In some embodiments, the generating unit comprises: the extraction subunit is configured to input the medical image to be modeled into the feature extraction network to obtain the skeleton feature of the object to be modeled; and the fitting subunit is configured to input the skeleton characteristics of the object to be modeled into the fitting network to obtain skeleton parameters of the object to be modeled.
In some embodiments, the apparatus further comprises: the determining unit is configured to analyze the medical image to be modeled and determine the position of the abnormal part of the object to be modeled in the medical image to be modeled; the transformation unit is configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and the marking unit is configured to mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, the determining unit is further configured to: input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
In some embodiments, the apparatus further comprises: the device comprises an inquiring unit, a judging unit and a judging unit, wherein the inquiring unit is configured to inquire abnormality related information corresponding to the abnormality type of an abnormal part from a prestored abnormality related information set, and the abnormality related information in the abnormality related information set comprises introduction information and improvement information of the abnormality type; and the association unit is configured to associate the abnormality related information corresponding to the abnormality type of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model is obtained by training: acquiring a training sample, wherein the training sample comprises a sample medical image and corresponding sample skeleton parameters; and taking the sample medical image as input, taking the sample skeleton parameter as output, and training to obtain a skeleton parameter generation model.
In a third aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for generating a three-dimensional model provided by the embodiments of the present application, after the medical image to be modeled, obtained by shooting the object to be modeled, is acquired, the medical image to be modeled is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; then, the standard three-dimensional skeleton model is adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled. Because the skeleton parameters are generated by the skeleton parameter generation model and the standard three-dimensional skeleton model is adjusted based on them, the three-dimensional skeleton model can be obtained quickly. Moreover, generating a three-dimensional skeleton model from the medical image for three-dimensional display makes the result more intuitive and easier to understand.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating a three-dimensional model according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of a method for generating a three-dimensional model according to the present application;
FIG. 4 is a schematic illustration of an application scenario of the method for generating a three-dimensional model shown in FIG. 3;
FIG. 5 is a schematic diagram illustrating the structure of one embodiment of an apparatus for generating a three-dimensional model according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating a three-dimensional model or the apparatus for generating a three-dimensional model of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include a terminal device 101, a network 102, and a server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. Various client software, such as image processing software and the like, may be installed on the terminal apparatus 101.
The terminal apparatus 101 may be hardware or software. When the terminal device 101 is hardware, it may be any of various electronic devices having a display screen and supporting three-dimensional model presentation, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal device 101 is software, it can be installed in the electronic devices described above. It may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. It is not specifically limited herein.
The server 103 may be a server that provides various services. Such as an image processing server. The image processing server may perform processing such as analysis on the acquired data of the medical image to be modeled, generate a processing result (for example, a three-dimensional skeleton model of the object to be modeled), and push the processing result to the terminal device 101.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for generating a three-dimensional model provided in the embodiment of the present application is generally executed by the server 103, and accordingly, the apparatus for generating a three-dimensional model is generally disposed in the server 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating a three-dimensional model according to the present application is shown. The method for generating a three-dimensional model comprises the following steps:

Step 201: acquire a medical image to be modeled obtained by shooting an object to be modeled.
In the present embodiment, an execution subject of the method for generating a three-dimensional model (for example, the server 103 shown in fig. 1) may acquire a medical image to be modeled obtained by shooting an object to be modeled. Generally, a medical imaging apparatus may shoot the object to be modeled to obtain the medical image to be modeled. The executing subject may acquire the medical image to be modeled from the medical imaging apparatus, or from a terminal device (for example, the terminal device 101 shown in fig. 1) that stores the medical image to be modeled captured by that apparatus. The object to be modeled may include, but is not limited to, a part of a human, a part of an animal, and the like. The human part may be the whole body of the human or a local part of the human; a local part may include, but is not limited to, the head, chest, legs, feet, shoulders, hands, elbows, and the like. Similarly, the animal part may be the whole body of the animal or a local part of the animal; a local part may include, but is not limited to, the animal's head, legs, claws, hooves, and the like. The medical image to be modeled may be, for example, an X-ray image, a magnetic resonance image, an ultrasound image, or another two-dimensional image. In addition, the medical image to be modeled may include, but is not limited to, a frontal (orthostatic), lateral, or oblique medical image of the object to be modeled. In the general case, the medical image to be modeled here is a frontal medical image of the object to be modeled.
Step 202: input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled.

In this embodiment, the executing subject may input the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled.
Here, the skeleton parameter generation model may be used to generate skeleton parameters of an object in a medical image. The skeleton parameters may be stereoscopic description data of the parts of the object to be modeled, including but not limited to length, width, and height data of those parts. For example, where the object to be modeled is the whole body of a human, the skeleton parameters may include, but are not limited to, length, width, and height data of the head, chest, legs, feet, shoulders, hands, elbows, and so on.
In some optional implementations of this embodiment, the executing body may collect in advance a large number of medical images and skeleton parameters of objects of the same category as the object to be modeled, and store them correspondingly to generate a correspondence table serving as the skeleton parameter generation model. After the medical image to be modeled is acquired, the executing body may first calculate the similarity between the medical image to be modeled and each medical image in the correspondence table, and then, based on the calculated similarities, find the skeleton parameters of the object to be modeled from the table. For example, the executing body may take, as the skeleton parameters of the object to be modeled, the skeleton parameters of the object whose medical image has the highest similarity to the medical image to be modeled.
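The correspondence-table lookup described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the names (`cosine_similarity`, `lookup_skeleton_params`, `reference_table`) and the use of cosine similarity over feature vectors are assumptions.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def lookup_skeleton_params(image_features, reference_table):
    """Return the skeleton parameters of the most similar reference image.

    reference_table: list of (feature_vector, skeleton_params) pairs,
    collected in advance from objects of the same category.
    """
    best = max(reference_table,
               key=lambda row: cosine_similarity(image_features, row[0]))
    return best[1]

# Toy table with two reference images and their stored skeleton parameters.
table = [
    ([1.0, 0.0, 0.0], {"shoulder_width": 38.0}),
    ([0.0, 1.0, 0.0], {"shoulder_width": 42.5}),
]
params = lookup_skeleton_params([0.1, 0.9, 0.0], table)
```

A production system would compare images with a learned embedding rather than raw vectors, but the table-scan-plus-argmax structure is the same.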
In some optional implementations of the present embodiment, the skeleton parameter generation model may be obtained by performing supervised training on an existing machine learning model (e.g., various artificial neural networks, etc.) by using various machine learning methods and training samples. Specifically, the executing agent may train the skeleton parameter generation model by:
first, training samples are obtained.
Here, each training sample may include a sample medical image and corresponding sample skeleton parameters. The sample medical image is a medical image obtained by shooting a sample object. The sample skeleton parameters are stereo description data of the part of the sample object.
Then, the medical image of the sample is used as input, the skeleton parameter of the sample is used as output, and a skeleton parameter generation model is obtained through training.
Here, the execution subject may input the sample medical image at the input side of the initial skeleton parameter generation model and, after processing by the initial model, output the skeleton parameters of the sample object in the sample medical image at the output side. The execution subject may then calculate the generation accuracy of the initial model from the output skeleton parameters and the sample skeleton parameters. If the generation accuracy does not meet a preset constraint, the parameters of the initial model are adjusted and sample medical images continue to be input for training. If the generation accuracy meets the preset constraint, model training is complete, and the initial model at that point is the skeleton parameter generation model. The initial skeleton parameter generation model may be any of various parameter generation models with initialized parameters, such as a model combining a feature extraction network and a fitting network; the initialized parameters may be distinct small random numbers.
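The train-until-the-constraint-is-met loop above can be sketched with a deliberately tiny stand-in model. This is an assumption-heavy illustration: the real model is a neural network, while here a scalar linear map is trained by stochastic gradient descent until the mean squared error drops below a preset threshold.

```python
def train_skeleton_param_model(samples, lr=0.01, tol=1e-3, max_steps=10000):
    """Fit a toy 1-D 'skeleton parameter generation model' y = w*x + b.

    samples: list of (image_feature, target_skeleton_param) scalar pairs.
    Training stops once the per-epoch mean squared error meets the
    preset constraint (tol), mirroring the accuracy check in the text.
    """
    w, b = 0.0, 0.0                       # initialized small parameters
    for _ in range(max_steps):
        err = 0.0
        for x, y in samples:
            pred = w * x + b              # forward pass
            grad = pred - y               # gradient of squared error
            w -= lr * grad * x            # adjust model parameters
            b -= lr * grad
            err += grad * grad
        if err / len(samples) < tol:      # preset accuracy constraint met
            break
    return w, b

# Toy "sample medical image feature / sample skeleton parameter" pairs
# generated from y = 2x + 1.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train_skeleton_param_model(samples)
```

The stopping rule, not the model, is the point: training continues while the generation accuracy fails the constraint and halts as soon as it passes.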
In some optional implementations of this embodiment, the skeletal parameter generation model may include a feature extraction network and a fitting network. At this time, the execution subject may input the medical image to be modeled to the feature extraction network to obtain the skeleton feature of the object to be modeled; and then inputting the skeleton characteristics of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled. The feature extraction network may be, for example, a VGG16 model, and is used to extract skeleton features of an object to be modeled. Skeletal features may be information describing the skeleton of the object to be modeled, including, but not limited to, various primitives associated with the skeleton (e.g., skeleton actions, skeleton contours, skeleton positions, skeleton textures, etc.). In general, skeletal features may be represented by multidimensional vectors. The fitting network may be comprised of a plurality of convolutional layers and a plurality of fully-connected layers for fitting the skeletal parameters.
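The two-stage data flow above (feature extraction network, then fitting network) can be mirrored with stub functions. These stubs are illustrative assumptions: the patent names VGG16 and convolutional/fully-connected layers, whereas here each stage is a deterministic toy function that only reproduces the pipeline shape.

```python
def feature_extraction_network(image):
    """Stage 1 stub: map a 2-D image (list of rows) to a flat
    skeleton-feature vector, here by normalizing and flattening pixels."""
    return [pixel / 255.0 for row in image for pixel in row]

def fitting_network(features, weights):
    """Stage 2 stub: fit skeleton parameters from the feature vector,
    here as one weighted sum per named parameter."""
    return {name: sum(w * f for w, f in zip(ws, features))
            for name, ws in weights.items()}

def generate_skeleton_params(image, weights):
    features = feature_extraction_network(image)   # skeleton features
    return fitting_network(features, weights)      # skeleton parameters

image = [[0, 255], [255, 0]]                       # 2x2 toy "medical image"
weights = {"head_height": [1.0, 1.0, 1.0, 1.0]}    # hypothetical fit weights
params = generate_skeleton_params(image, weights)
```

Swapping the stubs for a real backbone and regression head leaves `generate_skeleton_params` unchanged, which is the modularity the two-network split buys.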
Step 203: adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the executing body may adjust the standard three-dimensional skeleton model based on skeleton parameters of the object to be modeled, so as to obtain a three-dimensional skeleton model of the object to be modeled. The standard three-dimensional skeleton model can be a three-dimensional skeleton model obtained by fusing three-dimensional skeleton models of a large number of objects of the same category as the object to be modeled. Here, the executing body may adjust the standard three-dimensional skeleton model based on skeleton parameters of the object to be modeled by using, for example, a Morph machine learning algorithm, so as to obtain the three-dimensional skeleton model of the object to be modeled.
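One way to read the adjustment step is morph-target (blend-shape) deformation of the standard model. The patent only names a "Morph machine learning algorithm", so this linear blend-shape formulation, and every name in it, is an assumption.

```python
def morph_skeleton(standard_vertices, morph_targets, params):
    """adjusted = standard + sum_i(param_i * delta_i), per vertex.

    standard_vertices: list of (x, y, z) tuples of the standard model.
    morph_targets: {param_name: per-vertex (dx, dy, dz) deltas}.
    params: {param_name: weight} from the skeleton parameter model.
    """
    adjusted = [list(v) for v in standard_vertices]
    for name, weight in params.items():
        for vertex, delta in zip(adjusted, morph_targets[name]):
            for axis in range(3):
                vertex[axis] += weight * delta[axis]
    return [tuple(v) for v in adjusted]

# Two-vertex toy skeleton: one morph target lengthens the "spine".
standard = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
targets = {"spine_length": [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]}
model = morph_skeleton(standard, targets, {"spine_length": 0.5})
```

Because each parameter only scales a precomputed delta, adjusting the standard model is a cheap linear pass, consistent with the claim that the three-dimensional skeleton model can be obtained quickly.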
In some optional implementation manners of this embodiment, in a case that the object to be modeled is a part of a human, the executing body may further send the three-dimensional skeleton model of the object to be modeled to the terminal device of the object to be modeled. The terminal equipment can carry out three-dimensional display on the three-dimensional skeleton model of the object to be modeled so as to be viewed by the object to be modeled.
In some optional implementations of the embodiment, in a case that the object to be modeled is a part of an animal, the executing body may further send the three-dimensional skeleton model of the object to be modeled to a terminal device of a host of the object to be modeled. The terminal equipment can carry out three-dimensional display on the three-dimensional skeleton model of the object to be modeled so as to be viewed by a host of the object to be modeled.
According to the method for generating a three-dimensional model provided by the above embodiment of the present application, after the medical image to be modeled, obtained by shooting the object to be modeled, is acquired, the medical image to be modeled is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; then, the standard three-dimensional skeleton model is adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled. Because the skeleton parameters are generated by the skeleton parameter generation model and the standard three-dimensional skeleton model is adjusted based on them, the three-dimensional skeleton model can be obtained quickly. Moreover, generating a three-dimensional skeleton model from the medical image for three-dimensional display makes the result more intuitive and easier to understand.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a method for generating a three-dimensional model according to the present application is shown. The method for generating a three-dimensional model comprises the following steps:
Step 303: adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
In the present embodiment, the specific operations of steps 301-303 are substantially the same as those of steps 201-203 in the embodiment shown in fig. 2, and are not described again here.
Step 304: analyze the medical image to be modeled and determine the position of the abnormal part of the object to be modeled in the medical image to be modeled.
In the present embodiment, an execution subject of the method for generating a three-dimensional model (e.g., the server 103 shown in fig. 1) may analyze the medical image to be modeled to determine the position of an abnormal part of the object to be modeled in the medical image to be modeled. Generally, the executing subject may analyze the skeleton parameters of each part of the object to be modeled in the medical image to be modeled; if the skeleton parameters of a part are abnormal, that part is an abnormal part. The abnormal part may be a part where a lesion occurs.
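The rule-based check just described, flagging a part when its skeleton parameter falls outside an expected range, can be sketched as follows. The part names and numeric ranges are hypothetical placeholders, not values from the patent.

```python
# Hypothetical normal ranges (arbitrary units) per skeleton parameter.
NORMAL_RANGES = {
    "shoulder_width": (35.0, 45.0),
    "femur_length": (40.0, 50.0),
}

def find_abnormal_parts(part_params):
    """Return the names of parts whose skeleton parameters are abnormal,
    i.e. fall outside the stored normal range for that part."""
    abnormal = []
    for part, value in part_params.items():
        lo, hi = NORMAL_RANGES[part]
        if not (lo <= value <= hi):
            abnormal.append(part)
    return abnormal

abnormal = find_abnormal_parts({"shoulder_width": 52.0, "femur_length": 44.0})
```

The classification-network variant described next replaces this fixed-threshold rule with learned decision boundaries.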
In some optional implementations of this embodiment, the executing entity may input the medical image to be modeled into a classification network trained in advance to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part. The classification network may be used to identify the abnormal part. In general, a classification network may be obtained by supervised training of an existing machine learning model (e.g., various artificial neural networks) using various machine learning methods and training samples. For example, a classification network may consist of three convolutional layers and two fully-connected layers. From front to back, the three convolutional layers have 32, 64, and 128 feature channels, and the feature map resolutions are 64, 32, and 16. The first fully-connected layer may output a 256-dimensional vector, and the second fully-connected layer may output a vector whose length is the number of abnormality categories for the part plus one (the extra node outputs the confidence that no abnormality is present). An abnormality category may be a type of lesion occurring at a part. For example, for a shoulder, the abnormality categories may include, but are not limited to, scapulohumeral periarthritis, shoulder dislocation, and the like.
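The output head described above, N abnormality categories plus one "no abnormality" node, can be illustrated with a softmax over raw scores. The convolutional layers are not modeled here; the logit values are made up for the example.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Two shoulder abnormality categories plus the extra "no abnormality" node.
CATEGORIES = ["scapulohumeral periarthritis", "shoulder dislocation",
              "no abnormality"]

def classify_shoulder(logits):
    """logits: the (num_categories + 1)-dimensional final-layer output."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CATEGORIES[best], probs[best]

label, confidence = classify_shoulder([0.2, 3.1, 0.4])
```

Reserving a dedicated "no abnormality" output lets the network decline to flag a lesion instead of being forced to pick one of the abnormality categories.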
Generally, for each part of the object to be modeled, a sub-image of the part needs to be extracted from the medical image to be modeled and input into the classification network for abnormality identification. For example, to determine whether the shoulder is abnormal, a square image of the shoulder is cut out of the medical image to be modeled, scaled to a predetermined size (e.g., 128 × 128 resolution), and input into the classification network, which outputs the probabilities of various lesions occurring in the shoulder.
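The crop-and-scale preprocessing can be sketched as below. The `center` and `half_size` parameters are hypothetical names introduced for illustration, and nearest-neighbour scaling stands in for whatever interpolation a real pipeline would use:

```python
import numpy as np

def crop_and_resize(image, center, half_size, out_size=128):
    """Cut a square sub-image around a part and scale it to a fixed
    resolution (e.g. 128x128) for the classification network.
    Nearest-neighbour scaling is used for simplicity."""
    cy, cx = center
    y0, y1 = max(cy - half_size, 0), min(cy + half_size, image.shape[0])
    x0, x1 = max(cx - half_size, 0), min(cx + half_size, image.shape[1])
    patch = image[y0:y1, x0:x1]
    # nearest-neighbour index maps from output pixels to patch pixels
    ys = (np.arange(out_size) * patch.shape[0] // out_size).astype(int)
    xs = (np.arange(out_size) * patch.shape[1] // out_size).astype(int)
    return patch[np.ix_(ys, xs)]

xray = np.random.rand(512, 512)   # stand-in for the medical image to be modeled
shoulder_patch = crop_and_resize(xray, center=(100, 380), half_size=64)
```

The resulting 128 × 128 patch is what would be fed to the classification network for abnormality identification.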
Step 305: perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the executing body may perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled. The projection transformation maps the coordinates of the abnormal part in the medical image to be modeled into its coordinates in the three-dimensional skeleton model of the object to be modeled.
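The patent does not specify the projection transform itself. One plausible realisation, sketched here as an assumption, is to take a known 3 × 4 projection matrix that maps model coordinates into the image plane, project every candidate part of the three-dimensional skeleton model, and pick the part whose projection falls closest to the detected 2-D position:

```python
import numpy as np

def locate_part_in_model(pos_2d, model_parts, proj):
    """Nearest-projection matching: project each 3-D part position with
    `proj` (homogeneous 3x4 matrix, an assumed input) and return the
    part whose image-plane projection is closest to `pos_2d`."""
    best, best_d = None, np.inf
    for name, p3 in model_parts.items():
        ph = proj @ np.append(p3, 1.0)   # homogeneous projection
        uv = ph[:2] / ph[2]
        d = np.linalg.norm(uv - pos_2d)
        if d < best_d:
            best, best_d = name, d
    return best, model_parts[best]

# toy example: an orthographic-like projection that drops the z axis
proj = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.]])
parts = {"shoulder": np.array([0.4, 1.5, 0.2]),
         "hip":      np.array([0.3, 0.9, 0.1])}
name, pos3d = locate_part_in_model(np.array([0.41, 1.49]), parts, proj)
```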
Step 306: mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the executing body may first find the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model, and then mark the found abnormal part so as to distinguish it from the normal parts. For example, the executing body may mark the abnormal part with a color different from that of the normal parts.
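A minimal sketch of this color-based marking, assuming the three-dimensional skeleton model is represented as a dict of named parts (the colors and the model representation are illustrative assumptions, not the patent's data structures):

```python
# Illustrative colors: grey for normal parts, red for abnormal parts.
NORMAL_RGB, ABNORMAL_RGB = (200, 200, 200), (255, 0, 0)

def label_abnormal_parts(model_parts, abnormal_parts):
    """Return a labeled copy of the model in which each abnormal part
    gets a color distinct from the normal parts."""
    return {name: {"position": pos,
                   "color": ABNORMAL_RGB if name in abnormal_parts else NORMAL_RGB}
            for name, pos in model_parts.items()}

model = {"shoulder": (0.4, 1.5, 0.2), "hip": (0.3, 0.9, 0.1)}
labeled = label_abnormal_parts(model, {"shoulder"})
```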
Step 307: retrieve the abnormality-related information corresponding to the abnormality category of the abnormal part from a pre-stored abnormality-related information set.
In this embodiment, the executing body may retrieve, from the pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part. The abnormality-related information in the set may include introduction information and improvement information for the abnormality category.
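The look-up in step 307 can be sketched as a dict keyed by abnormality category, each entry holding introduction and improvement information. All entries shown are illustrative placeholders, not medical content from the patent:

```python
# Stand-in for the pre-stored abnormality-related information set.
ABNORMALITY_INFO = {
    "scapulohumeral periarthritis": {
        "introduction": "Inflammation of tissues around the shoulder joint.",
        "improvement": "Physiotherapy and guided range-of-motion exercises.",
    },
    "shoulder dislocation": {
        "introduction": "Displacement of the humeral head from its socket.",
        "improvement": "Clinical reduction followed by immobilisation.",
    },
}

def find_abnormality_info(category):
    """Retrieve the entry for an abnormality category, or None if the
    category is not in the stored set."""
    return ABNORMALITY_INFO.get(category)

info = find_abnormality_info("scapulohumeral periarthritis")
```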
Step 308: associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the executing body may associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled. For example, as long as no click operation has been performed on an abnormal part in the three-dimensional skeleton model, the abnormality-related information corresponding to its abnormality category remains hidden; when a click operation is performed on the abnormal part, that information is displayed.
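A minimal sketch of this association and click-to-reveal behaviour, with a plain method standing in for a real UI click callback (the class and its interface are assumptions for illustration):

```python
class AnnotatedSkeletonModel:
    """Each abnormal part carries its abnormality-related information,
    which stays hidden until the part is clicked; a second click hides
    it again."""

    def __init__(self):
        self.info = {}      # part name -> abnormality-related information
        self.visible = {}   # part name -> whether the info is currently shown

    def associate(self, part, related_info):
        self.info[part] = related_info
        self.visible[part] = False          # hidden until a click occurs

    def click(self, part):
        if part not in self.info:
            return None
        self.visible[part] = not self.visible[part]
        return self.info[part] if self.visible[part] else None

model = AnnotatedSkeletonModel()
model.associate("shoulder", "introduction and improvement information")
shown = model.click("shoulder")    # first click reveals the information
hidden = model.click("shoulder")   # second click hides it again
```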
With continued reference to FIG. 4, FIG. 4 is a schematic illustration of an application scenario of the method for generating a three-dimensional model shown in FIG. 3. In the application scenario shown in fig. 4, the X-ray device 410 captures an X-ray film 401 of a patient and sends it to the patient's mobile phone 420. The patient opens the image processing software on the phone 420, selects the X-ray film 401, and clicks the upload button. The phone 420 then sends the X-ray film 401 to the server 430. After receiving the X-ray film 401, the server 430 may first input the X-ray film 401 into the skeleton parameter generation model 402 to obtain the patient's skeleton parameters 403; then adjust the standard three-dimensional skeleton model 404 based on the skeleton parameters 403 to obtain the patient's three-dimensional skeleton model 405; then analyze the three-dimensional skeleton model 405 and determine that the patient suffers from scapulohumeral periarthritis; then perform projection transformation based on the position of the shoulder in the X-ray film 401 to obtain the position of the shoulder in the three-dimensional skeleton model 405, and label the shoulder in the three-dimensional skeleton model 405; then retrieve the disease information and treatment information 407 for scapulohumeral periarthritis from the shoulder-lesion-related information set 406 and associate them with the shoulder in the three-dimensional skeleton model 405; and finally send the three-dimensional skeleton model 405 to the patient's phone 420. In this way, the patient can view his or her three-dimensional skeleton model 405 on the phone 420. When the patient clicks the shoulder in the three-dimensional skeleton model 405, the disease information and treatment information 407 for scapulohumeral periarthritis are displayed.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the method for generating a three-dimensional model in this embodiment has the additional steps 304-308. The scheme described in this embodiment can therefore mark the abnormal part in the three-dimensional skeleton model of the object to be modeled, making the abnormal part easy to locate. At the same time, associating the corresponding abnormality-related information with the abnormal part makes it easy to learn the abnormality category of the abnormal part and how to improve it.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating a three-dimensional model, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for generating a three-dimensional model of the present embodiment may include: an acquisition unit 501, a generation unit 502, and an adjustment unit 503. The acquiring unit 501 is configured to acquire a medical image to be modeled, which is obtained by shooting an object to be modeled; a generating unit 502 configured to input the medical image to be modeled into a pre-trained skeleton parameter generating model, to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generating model is used to generate skeleton parameters of the object in the medical image; the adjusting unit 503 is configured to adjust the standard three-dimensional skeleton model based on skeleton parameters of the object to be modeled, so as to obtain a three-dimensional skeleton model of the object to be modeled.
In the apparatus 500 for generating a three-dimensional model of this embodiment: for the specific processing of the acquiring unit 501, the generating unit 502, and the adjusting unit 503 and the technical effects it brings, reference may be made to the descriptions of steps 201, 202, and 203 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the skeletal parameter generation model includes a feature extraction network and a fitting network.
In some optional implementations of this embodiment, the generating unit 502 includes: an extraction subunit (not shown in the figure) configured to input the medical image to be modeled into the feature extraction network, so as to obtain a skeleton feature of the object to be modeled; and the fitting subunit (not shown in the figure) is configured to input the skeleton characteristics of the object to be modeled into the fitting network, so as to obtain skeleton parameters of the object to be modeled.
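The extraction and fitting subunits can be sketched together as a two-stage model: a feature extraction network that turns the medical image into skeleton features, followed by a fitting network that regresses the skeleton parameters from those features. Only the two-stage structure comes from the text; every layer size below is an assumption, with PyTorch used as one possible framework:

```python
import torch
import torch.nn as nn

class SkeletonParameterModel(nn.Module):
    """Hedged sketch of the skeleton parameter generation model: a
    convolutional feature extraction network followed by a fully
    connected fitting network. All layer sizes are illustrative."""

    def __init__(self, num_params: int):
        super().__init__()
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> skeleton features
        )
        self.fitting = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, num_params),  # e.g. length/width/height data per part
        )

    def forward(self, image):
        return self.fitting(self.feature_extraction(image))

model = SkeletonParameterModel(num_params=12)
params = model(torch.randn(2, 1, 128, 128))   # a batch of two images
```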
In some optional implementations of the present embodiment, the apparatus 500 for generating a three-dimensional model further includes: a determining unit (not shown in the figure) configured to analyze the medical image to be modeled, and determine a position of an abnormal portion of the object to be modeled in the medical image to be modeled; a transformation unit (not shown in the figure) configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and the labeling unit (not shown in the figure) is configured to label the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some optional implementations of this embodiment, the determining unit is further configured to: and inputting the medical image to be modeled into a classification network trained in advance to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormal category of the abnormal part.
In some optional implementations of the present embodiment, the apparatus 500 for generating a three-dimensional model further includes: an inquiring unit (not shown in the figure) configured to inquire out abnormality related information corresponding to an abnormality type of an abnormal part from a prestored abnormality related information set, wherein the abnormality related information in the abnormality related information set comprises introduction information and improvement information of the abnormality type; and an associating unit (not shown in the figure) configured to associate the abnormality-related information corresponding to the abnormality type of the abnormal portion with the abnormal portion in the three-dimensional skeleton model of the object to be modeled.
In some optional implementations of this embodiment, the skeleton parameter generation model is obtained by training through the following steps: acquiring a training sample, wherein the training sample comprises a sample medical image and corresponding sample skeleton parameters; and taking the sample medical image as input, taking the sample skeleton parameter as output, and training to obtain a skeleton parameter generation model.
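The training steps above (sample medical images as input, sample skeleton parameters as target output) amount to supervised regression. The following sketch uses synthetic data and a tiny linear model as stand-ins for the real training samples and skeleton parameter generation model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
images = torch.randn(64, 256)                    # flattened sample medical images
skeleton_params = images @ torch.randn(256, 12)  # synthetic sample skeleton parameters

model = nn.Linear(256, 12)   # stand-in for the skeleton parameter generation model
loss_fn = nn.MSELoss()
initial_loss = float(loss_fn(model(images), skeleton_params))

# Train: sample medical image as input, sample skeleton parameters as output.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(images), skeleton_params)
    loss.backward()
    optimizer.step()

final_loss = float(loss_fn(model(images), skeleton_params))
```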
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing a server (e.g., server 103 shown in FIG. 1) according to embodiments of the present application is shown. The server shown in fig. 6 is only an example, and should not limit the functions or the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a generation unit, and an adjustment unit. The names of the units do not in some cases form a limitation on the units themselves, and for example, the acquiring unit may also be described as a "unit that acquires a medical image to be modeled obtained by photographing an object to be modeled".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the server described in the above embodiments, or may exist separately without being assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: acquire a medical image to be modeled, obtained by shooting an object to be modeled; input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating the skeleton parameters of the object in the medical image; and adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
The foregoing description is only exemplary of the preferred embodiments of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (12)
1. A method for generating a three-dimensional model, comprising:
acquiring a medical image to be modeled, which is obtained by shooting an object to be modeled;
inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating skeleton parameters of the object in the medical image, and the skeleton parameters comprise length, width and height data of the parts of the object to be modeled;
adjusting a standard three-dimensional skeleton model based on skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled, wherein the standard three-dimensional skeleton model is a three-dimensional skeleton model obtained by fusing three-dimensional skeleton models of a plurality of objects which are the same as the object to be modeled;
the method for generating the skeleton parameter comprises the following steps of inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain the skeleton parameter of the object to be modeled, wherein the skeleton parameter generation model comprises a feature extraction network and a fitting network, and the method comprises the following steps:
inputting the medical image to be modeled into the feature extraction network to obtain the skeleton feature of the object to be modeled;
and inputting the skeleton characteristics of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
2. The method of claim 1, wherein the method further comprises:
analyzing the medical image to be modeled, and determining the position of the abnormal part of the object to be modeled in the medical image to be modeled;
performing projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled;
and marking the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
3. The method according to claim 2, wherein the analyzing the medical image to be modeled to determine a position of an abnormal part of the object to be modeled in the medical image to be modeled comprises:
and inputting the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormal category of the abnormal part.
4. The method according to claim 3, wherein after labeling the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled, the method further comprises:
querying abnormality-related information corresponding to the abnormality category of the abnormal part from a pre-stored abnormality-related information set, wherein the abnormality-related information in the abnormality-related information set comprises introduction information and improvement information of the abnormality category;
and associating the abnormality related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
5. The method of claim 1, wherein the skeletal parameter generation model is trained by:
acquiring a training sample, wherein the training sample comprises a sample medical image and corresponding sample skeleton parameters;
and taking the sample medical image as input, taking the sample skeleton parameter as output, and training to obtain the skeleton parameter generation model.
6. An apparatus for generating a three-dimensional model, comprising:
the device comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is configured to acquire a medical image to be modeled obtained by shooting an object to be modeled;
the generation unit is configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used for generating skeleton parameters of the object in the medical image, and the skeleton parameters comprise length, width and height data of the part of the object to be modeled;
the adjusting unit is configured to adjust a standard three-dimensional skeleton model based on skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled, wherein the standard three-dimensional skeleton model is a three-dimensional skeleton model obtained by fusing three-dimensional skeleton models of a plurality of objects of the same type as the object to be modeled;
wherein, the skeleton parameter generation model comprises a feature extraction network and a fitting network, and the generation unit comprises:
an extraction subunit configured to input the medical image to be modeled into the feature extraction network to obtain the skeleton feature of the object to be modeled;
a fitting subunit configured to input the skeleton feature of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
7. The apparatus of claim 6, wherein the apparatus further comprises:
a determining unit configured to analyze the medical image to be modeled and determine the position of an abnormal part of the object to be modeled in the medical image to be modeled;
a transformation unit configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled;
a labeling unit configured to label the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
8. The apparatus of claim 7, wherein the determination unit is further configured to:
and inputting the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormal category of the abnormal part.
9. The apparatus of claim 8, wherein the apparatus further comprises:
a query unit configured to query abnormality-related information corresponding to the abnormality category of the abnormal part from a pre-stored abnormality-related information set, wherein the abnormality-related information in the set comprises introduction information and improvement information of the abnormality category;
an association unit configured to associate abnormality-related information corresponding to an abnormality category of the abnormal part with the abnormal part in a three-dimensional skeleton model of the object to be modeled.
10. The apparatus of claim 6, wherein the skeletal parameter generative model is trained by:
acquiring a training sample, wherein the training sample comprises a sample medical image and corresponding sample skeleton parameters;
and taking the sample medical image as input, taking the sample skeleton parameter as output, and training to obtain the skeleton parameter generation model.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910171928.9A CN109887077B (en) | 2019-03-07 | 2019-03-07 | Method and apparatus for generating three-dimensional model |
PCT/CN2019/113902 WO2020177348A1 (en) | 2019-03-07 | 2019-10-29 | Method and apparatus for generating three-dimensional model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910171928.9A CN109887077B (en) | 2019-03-07 | 2019-03-07 | Method and apparatus for generating three-dimensional model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109887077A CN109887077A (en) | 2019-06-14 |
CN109887077B true CN109887077B (en) | 2022-06-03 |
Family
ID=66931191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910171928.9A Active CN109887077B (en) | 2019-03-07 | 2019-03-07 | Method and apparatus for generating three-dimensional model |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109887077B (en) |
WO (1) | WO2020177348A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109887077B (en) * | 2019-03-07 | 2022-06-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating three-dimensional model |
US11576794B2 (en) | 2019-07-02 | 2023-02-14 | Wuhan United Imaging Healthcare Co., Ltd. | Systems and methods for orthosis design |
CN110327146A (en) * | 2019-07-02 | 2019-10-15 | 武汉联影医疗科技有限公司 | A kind of orthoses design method, device and server |
CN111882544B (en) * | 2020-07-30 | 2024-05-14 | 深圳平安智慧医健科技有限公司 | Medical image display method and related device based on artificial intelligence |
CN112734901B (en) * | 2020-12-01 | 2023-08-18 | 深圳市人工智能与机器人研究院 | 3D instruction book generation method and related equipment |
CN113012282B (en) * | 2021-03-31 | 2023-05-19 | 深圳市慧鲤科技有限公司 | Three-dimensional human body reconstruction method, device, equipment and storage medium |
CN117437365B (en) * | 2023-12-20 | 2024-04-12 | 中国科学院深圳先进技术研究院 | Medical three-dimensional model generation method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105608737A (en) * | 2016-02-01 | 2016-05-25 | 成都通甲优博科技有限责任公司 | Human foot three-dimensional reconstruction method based on machine learning |
CN107808377A (en) * | 2017-10-31 | 2018-03-16 | 北京青燕祥云科技有限公司 | The localization method and device of focus in a kind of lobe of the lung |
CN107993277A (en) * | 2017-11-28 | 2018-05-04 | 河海大学常州校区 | Damage location artificial skelecton patch formation model method for reconstructing based on priori |
CN108053283A (en) * | 2017-12-15 | 2018-05-18 | 北京中睿华信信息技术有限公司 | A kind of custom made clothing method based on 3D modeling |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4474546B2 (en) * | 2004-10-05 | 2010-06-09 | 国立大学法人東京農工大学 | Face shape modeling system and face shape modeling method |
CN103247073B (en) * | 2013-04-18 | 2016-08-10 | 北京师范大学 | Three-dimensional brain blood vessel model construction method based on tree structure |
US9984311B2 (en) * | 2015-04-11 | 2018-05-29 | Peter Yim | Method and system for image segmentation using a directed graph |
US10169871B2 (en) * | 2016-01-21 | 2019-01-01 | Elekta, Inc. | Systems and methods for segmentation of intra-patient medical images |
CN105963005A (en) * | 2016-04-25 | 2016-09-28 | 华南理工大学 | Method for producing funnel chest correction plate |
CN108876893A (en) * | 2017-12-14 | 2018-11-23 | 北京旷视科技有限公司 | Method, apparatus, system and the computer storage medium of three-dimensional facial reconstruction |
CN108460364B (en) * | 2018-03-27 | 2022-03-11 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN109308488B (en) * | 2018-08-30 | 2022-05-03 | 深圳大学 | Mammary gland ultrasonic image processing device, method, computer equipment and storage medium |
CN109887077B (en) * | 2019-03-07 | 2022-06-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating three-dimensional model |
Also Published As
Publication number | Publication date |
---|---|
WO2020177348A1 (en) | 2020-09-10 |
CN109887077A (en) | 2019-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109887077B (en) | Method and apparatus for generating three-dimensional model | |
JP7075085B2 (en) | Systems and methods for whole body measurement extraction | |
CN107680684B (en) | Method and device for acquiring information | |
US10755411B2 (en) | Method and apparatus for annotating medical image | |
US20200234444A1 (en) | Systems and methods for the analysis of skin conditions | |
US11900594B2 (en) | Methods and systems for displaying a region of interest of a medical image | |
CN112712906B (en) | Video image processing method, device, electronic equipment and storage medium | |
US12118739B2 (en) | Medical image processing method, apparatus, and device, medium, and endoscope | |
CN108388889B (en) | Method and device for analyzing face image | |
CN111067531A (en) | Wound measuring method and device and storage medium | |
EP3574837A1 (en) | Medical information virtual reality server system, medical information virtual reality program, medical information virtual reality system, method of creating medical information virtual reality data, and medical information virtual reality data | |
WO2019146358A1 (en) | Learning system, method, and program | |
CN115601811B (en) | Face acne detection method and device | |
CN113610826A (en) | Puncture positioning method and device, electronic device and storage medium | |
Golomingi et al. | Augmented reality in forensics and forensic medicine: current status and future prospects |
CN112331329A (en) | System and method for instant hand bone age assessment using a personal device |
CN112862955B (en) | Method, apparatus, device, storage medium and program product for establishing three-dimensional model | |
US11776688B2 (en) | Capturing user constructed map of bodily region of interest for remote telemedicine navigation | |
CN109934798A (en) | Internal object information labeling method and device, electronic equipment, storage medium | |
CN115736939A (en) | Atrial fibrillation disease probability generation method and device, electronic equipment and storage medium | |
CN109118538A (en) | Image presentation method, system, electronic equipment and computer readable storage medium | |
CN109635696A (en) | Biological information detection method and device | |
CN113243932A (en) | Oral health detection system, related method, device and equipment | |
CN115546174B (en) | Image processing method, device, computing equipment and storage medium | |
CN115517686A (en) | Family environment electrocardiogram image analysis method, device, equipment, medium and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||