CN106709917B - Neural network model training method, device and system - Google Patents
- Publication number
- CN106709917B CN106709917B CN201710002230.5A CN201710002230A CN106709917B CN 106709917 B CN106709917 B CN 106709917B CN 201710002230 A CN201710002230 A CN 201710002230A CN 106709917 B CN106709917 B CN 106709917B
- Authority
- CN
- China
- Prior art keywords
- neural network
- network model
- image
- training
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a neural network model training method, device and system, belonging to the field of image processing. The method includes: receiving medical samples sent by a plurality of clients, wherein the medical sample sent by a first client comprises a plurality of CT images and a first label image corresponding to each CT image, the first label image identifying a designated organ contained in the CT image; the first label image is obtained by the first client by segmenting the CT image with a local neural network model, and the first client is any one of the plurality of clients; and training a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the latest version of the neural network model in the server. The invention shortens the training time of the neural network model and effectively improves the accuracy of neural network model training. The method is used for training neural network models.
Description
Technical Field
The invention relates to the field of image processing, in particular to a neural network model training method, device and system.
Background
Image segmentation is a technique of dividing an image into several specific regions with unique properties and extracting an object of interest. It is a key step from image processing to image analysis.
Currently, in the medical field, a neural network model may be used to segment a medical image. The neural network model is obtained by training an initial neural network model with predetermined training samples (for example, the initial neural network model may be trained using the steepest descent method). Specifically, the server collects a large number of training samples offline in advance, each training sample including an original image and a segmentation result of the original image. The initial neural network model is trained with these samples to obtain a trained neural network model, and after training succeeds, a new neural network model version may be released for download by the clients.
However, in the medical field, when training a neural network model, an initial neural network model needs to be established first and training samples need to be collected offline, so the training time of the neural network model is long and the training accuracy is low.
Disclosure of Invention
In order to solve the problems of long training time and low training accuracy of a neural network model in the prior art, the embodiment of the invention provides a neural network model training method, device and system. The technical scheme is as follows:
in a first aspect, a neural network model training method is provided, which is applied to a server of a medical image segmentation system, and the method includes:
receiving medical samples sent by a plurality of clients, wherein the medical sample sent by a first client comprises a plurality of CT images and a first label image corresponding to each CT image, the first label image being used for identifying a designated organ contained in the CT image, the first label image being obtained by the first client by segmenting the CT image with a local neural network model, and the first client being any one of the plurality of clients;
training a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the latest version of neural network model in the server.
Optionally, the training a first neural network model in the server according to the medical samples sent by the plurality of clients includes:
deleting inaccurate medical samples from the medical samples sent by the plurality of clients to obtain training samples;
training the first neural network model in the server using the training samples.
Optionally, the deleting inaccurate medical samples from the medical samples sent by the plurality of clients to obtain training samples includes:
sequentially displaying a plurality of mask images of a first sample, wherein each mask image is formed by superposing one label image in the first sample on a corresponding CT image, and the first sample is any one of medical samples sent by a plurality of clients;
receiving a deletion operation of the first sample triggered manually on an interface where any mask image is located or an interface where the first sample is located;
and deleting the first sample as an inaccurate sample according to the deleting operation.
Optionally, the deleting inaccurate medical samples from the medical samples sent by the plurality of clients to obtain training samples includes:
segmenting each CT image in the first sample by adopting a standard neural network model preset in the server to obtain a second label image corresponding to each CT image, wherein the second label image is used for identifying a designated organ contained in the CT image, and the first sample is any one of medical samples sent by the plurality of clients;
judging whether a segmentation image difference value of the CT image is larger than a preset difference threshold value, wherein the segmentation image difference value is an image difference value of the first label image and the second label image corresponding to the same CT image;
when the proportion of the CT image with the segmentation image difference value larger than a preset difference value threshold value in the first sample is larger than a preset ratio, storing the first sample into a manual confirmation database, wherein the sample in the manual confirmation database is used for manually confirming whether to delete or not;
when the proportion of the CT image with the segmentation image difference value larger than a preset difference value threshold value in the first sample is not larger than a preset ratio, storing the first sample to a training sample database, wherein the samples in the training sample database are used for training a neural network model;
and determining samples which are confirmed not to be deleted in the manual confirmation database and/or samples in the training sample database as the training samples.
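The screening flow in the steps above can be sketched as follows. This is a minimal sketch: the pixel-disagreement metric used as the "segmentation image difference value", the threshold values, and the returned destination labels are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def image_difference(label_a, label_b):
    """Fraction of pixels on which two binary label images disagree."""
    return float(np.mean(label_a != label_b))

def screen_sample(first_labels, second_labels,
                  diff_threshold=0.1, ratio_threshold=0.3):
    """Route one sample to the training-sample database or the manual
    confirmation database.

    first_labels  : per-CT-image label images from the client's local model
    second_labels : per-CT-image label images from the server's standard model
    """
    # Count CT images whose segmentation-image difference exceeds the threshold.
    n_bad = sum(
        image_difference(a, b) > diff_threshold
        for a, b in zip(first_labels, second_labels)
    )
    bad_ratio = n_bad / len(first_labels)
    # A sample with too many divergent images goes to manual confirmation;
    # otherwise it is accepted directly as a training sample.
    return "manual_confirmation" if bad_ratio > ratio_threshold else "training"
```

For example, a sample whose client-side labels match the standard model's labels on every image is routed straight to the training database.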
Optionally, after the training of the first neural network model in the server according to the medical samples sent by the plurality of clients, the method further comprises:
sending a second neural network model to a plurality of test clients, wherein the second neural network model is obtained by training a first neural network model in the server according to medical samples sent by the plurality of clients;
receiving a score for the second neural network model by the plurality of test clients;
determining a test score according to the scores of the plurality of test clients on the second neural network model;
judging whether the test score is larger than a preset passing score or not;
when the test score is larger than a preset passing score, determining the second neural network model as a neural network model of the latest version;
and when the test score is not greater than a preset passing score, determining the first neural network model as the latest version of neural network model.
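The version-promotion decision in these steps can be sketched minimally. Taking the test score as the mean of the client scores, and the default passing score of 80, are illustrative assumptions; the patent does not fix either.

```python
def select_latest_model(first_model, second_model, client_scores,
                        passing_score=80.0):
    """Decide which model becomes the latest version after a test round.

    client_scores : scores for the second (newly trained) model reported
                    by the test clients; the test score is their mean.
    """
    test_score = sum(client_scores) / len(client_scores)
    # Promote the new model only when the test score exceeds the passing score;
    # otherwise the first (current) model remains the latest version.
    return second_model if test_score > passing_score else first_model
```

So a new model scoring 90 and 85 from two test clients would be promoted, while one scoring 50 and 60 would be discarded in favor of the current model.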
In a second aspect, a neural network model training method is provided, which is applied to a first client of a medical image segmentation system, and the method includes:
acquiring a sample to be segmented comprising a plurality of computed tomography (CT) images, wherein the CT images contain images of a designated organ;
segmenting each CT image by adopting a local neural network model in the first client to obtain a first label image corresponding to each CT image, wherein the first label image is used for identifying a designated organ contained in the CT image;
sending a medical sample to the server so that the server trains a first neural network model in the server according to the medical sample, wherein the first neural network model is a neural network model of a latest version in the server, and the medical sample comprises: each CT image and the first label image corresponding to each CT image.
In a third aspect, an apparatus for training a neural network model is provided, which is applied to a server of a medical image segmentation system, and the apparatus includes:
the first receiving module is used for receiving medical samples sent by a plurality of clients, wherein the medical sample sent by a first client comprises a plurality of CT images and a first label image corresponding to each CT image, the first label image being used for identifying a designated organ contained in the CT image, the first label image being obtained by the first client by segmenting the CT image with a local neural network model, and the first client being any one of the plurality of clients;
and the training module is used for training a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the neural network model of the latest version in the server.
Optionally, the training module comprises:
the deleting submodule is used for deleting inaccurate medical samples in the medical samples sent by the plurality of clients to obtain training samples;
and the training submodule is used for training the first neural network model in the server by adopting the training sample.
In a fourth aspect, a neural network model training apparatus is provided, which is applied to a first client of a medical image segmentation system, and the apparatus includes:
an acquisition module, used for acquiring a sample to be segmented comprising a plurality of computed tomography (CT) images, wherein the CT images contain images of a designated organ;
the segmentation module is used for segmenting each CT image by adopting a local neural network model in the first client to obtain a first label image corresponding to each CT image, wherein the first label image is used for identifying a specified organ contained in the CT image;
a sending module, configured to send a medical sample to the server, so that the server trains a first neural network model in the server according to the medical sample, where the first neural network model is a latest version of the neural network model in the server, and the medical sample includes: each CT image and the first label image corresponding to each CT image.
In a fifth aspect, a medical image segmentation system is provided, the medical image segmentation system comprising: the system comprises at least one server and at least one first client connected with the server;
the server comprises the neural network model training device of the third aspect;
each first client comprises the neural network model training device of the fourth aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
according to the neural network model training method, device and system provided by the embodiment of the invention, after each client in a plurality of clients adopts the latest neural network model to perform CT image segmentation, the server receives the medical sample containing a plurality of CT images and label images sent by the plurality of clients, and trains the latest neural network model, so that the online training of the neural network model is realized, the training time of the neural network model is shortened, and the accuracy of the neural network model training is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment related to a neural network model training method provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a neural network model training method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another neural network model training method provided by an embodiment of the present invention;
FIG. 4 is a flowchart of another neural network model training method provided by an embodiment of the present invention;
FIG. 5 is a flow chart of a first neural network model in a training server based on a plurality of medical samples sent by a client according to an embodiment of the present invention;
FIG. 6-1 is a schematic diagram of a mask image shown in accordance with an embodiment of the present invention;
FIG. 6-2 is a schematic diagram of another mask image shown in accordance with an embodiment of the present invention;
FIG. 7-1 is a schematic diagram of a model scoring interface provided by an embodiment of the present invention;
FIG. 7-2 is a schematic view of another model scoring interface provided by embodiments of the present invention;
fig. 8 is a block diagram of a neural network model training apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of a training module according to an embodiment of the present invention;
FIG. 10 is a block diagram of another neural network model training apparatus according to an embodiment of the present invention;
FIG. 11 is a block diagram illustrating a neural network model training apparatus according to an embodiment of the present invention;
fig. 12 is a block diagram of a neural network model training apparatus according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram illustrating an implementation environment of a medical image segmentation system according to an embodiment of the present invention. The implementation environment may include: a server 110 and a plurality of clients 120 having display screens.
The server 110 may be a single server, a server cluster composed of several servers, or a cloud computing service center. The client 120 is a medical device having a display screen.
The server 110 and the client 120 may establish a connection through a wired network or a wireless network, the client 120 may provide the server 110 with a training sample, the server 110 may train the neural network model using the training sample, and the client 120 may download the trained neural network model from the server 110 for medical image segmentation.
Fig. 2 is a flowchart of a neural network model training method applied to a server of a medical image segmentation system according to an embodiment of the present invention, as shown in fig. 2, the method may include:
In summary, in the neural network model training method provided by the embodiment of the present invention, after each of the plurality of clients performs CT image segmentation using the latest version of the neural network model, the server receives the medical samples containing CT images and label images sent by the clients and trains the latest version of the neural network model, thereby implementing online training of the neural network model, shortening its training time, and effectively improving its accuracy.
Fig. 3 is a flowchart of a neural network model training method provided in an embodiment of the present invention, where the method is applied to a first client of a medical image segmentation system, where the first client is any one of multiple clients, and as shown in fig. 3, the method may include:
In summary, in the neural network model training method provided by the embodiment of the present invention, after performing CT image segmentation using the latest version of the neural network model, each of the plurality of clients sends a medical sample including the CT images and label images to the server, so that the server can train the latest version of the neural network model according to these medical samples, thereby implementing online training of the neural network model, shortening its training time, and effectively improving its accuracy.
Fig. 4 is a flowchart of another neural network model training method provided in an embodiment of the present invention, where a server may train a latest neural network model in the server by collecting samples uploaded by each client, and the neural network model training method is applied in the implementation environment described in fig. 1, in the embodiment of the present invention, a first client in a plurality of clients is taken as an example for description. As shown in fig. 4, the method may include:
step 301, a first client acquires a sample to be segmented including a plurality of computed tomography CT images, where the CT images include an image of a specified organ.
Alternatively, each sample to be segmented may be the case of one patient. A patient's case usually consists of several hundred CT images, each containing the designated organ of the patient. For example, a patient's case may include 300 CT images, each of which is a CT image of the patient's chest containing a designated chest organ; these 300 CT images constitute one sample to be segmented.
Step 302, the first client divides each CT image by using a local neural network model in the first client to obtain a first label image corresponding to each CT image.
Optionally, the local neural network model may be a latest version of the neural network model downloaded from the server by the first client, and the first label image is used to identify a specific organ included in the corresponding CT image.
In practical applications, multiple versions of the neural network model may be stored in the server, and each version of the neural network model may correspond to a version number that uniquely identifies the neural network model, and the version number is usually allocated by the server, for example, the version number may be 1.1 or 1.2, and the like. The first client can periodically request the server to download the neural network model, the server can provide the neural network model to the first client after receiving the request, and the neural network model provided by the server for each client is the latest version of the neural network model; or, the server may periodically push the latest version of the neural network model to each client, so that the first client acquires the latest version of the neural network model.
Generally, when the first client receives a segmentation instruction indicating that a CT image in a sample to be segmented is to be segmented, it may determine whether its local neural network model is the latest version of the neural network model, and if so, use it to segment the CT image. In practical application, the first client may obtain the version number of the latest version of the neural network model in the server and compare it with the version number of its local neural network model. When the server's latest version number is greater than the local version number, the local neural network model is not the latest version; when the two version numbers are equal, it is. For example, if the version number of the latest neural network model in the server is 1.5 and the version number of the local neural network model in the first client is 1.2, then since 1.5 > 1.2 the local neural network model is not the latest version.
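The version comparison described above can be sketched as follows. Comparing versions as tuples of integers is an assumption about how version numbers such as 1.2 and 1.10 should be ordered; it matches the numeric comparison (1.5 > 1.2) used in the example.

```python
def parse_version(version):
    """'1.10' -> (1, 10); tuple comparison then orders versions correctly."""
    return tuple(int(part) for part in version.split("."))

def needs_update(server_version, local_version):
    """True when the server holds a newer neural network model than the client."""
    return parse_version(server_version) > parse_version(local_version)
```

Plain string comparison would misorder "1.10" and "1.2", which is why the version numbers are parsed into integer tuples first.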
When the first client segments each CT image with the local neural network model, it can obtain the first label image corresponding to each CT image by taking the designated organ as the segmentation target; the first label image identifies the designated organ contained in the corresponding CT image. Illustratively, the designated organ may be the liver. The first label image obtained by segmentation may be a binary image, in which the pixels corresponding to the designated organ have value 1 and the pixels corresponding to non-designated-organ parts have value 0; alternatively, it may be a grayscale image, in which the pixels corresponding to the designated organ have value 255 and the pixels corresponding to non-designated-organ parts have value 0. For example, when each CT image is segmented with the local neural network model, the liver may be selected as the designated organ, yielding a first label image for each CT image in which liver pixels have value 1 and non-liver pixels have value 0.
Optionally, the neural network model according to the embodiment of the present invention may be a convolutional neural network.
It should be noted that before each CT image is segmented by using the local neural network model, each CT image may be subjected to image preprocessing. The preprocessing process includes the following steps:
step a, reading a CT image, and recording the window width W and the window level information M of the CT image.
Generally, CT images are images in the Digital Imaging and Communications in Medicine (DICOM) format.
The window width refers to the range of CT values displayed in the CT image. The window width affects image contrast: with a narrow window width, the range of displayed CT values is small, each gray level represents a smaller CT-value increment, and image contrast is stronger. The window level refers to the average of the upper and lower bounds of the window-width range. The window level affects image brightness: the image appears whiter at a lower window level and blacker at a higher window level.
And b, determining the gray level distribution range of the CT image according to the window width and the window level information of the CT image and a first formula.
The first formula is: α = M − W/2, β = M + W/2; where W is the window width of the CT image, M is the window level of the CT image, α is the minimum pixel value corresponding to the designated organ in the CT image, and β is the maximum pixel value corresponding to the designated organ in the CT image.
And c, performing gray level transformation on the CT image according to the gray level distribution range of the CT image.
Alternatively, the CT image may be grayscale transformed according to a second formula, the standard linear windowing transform:

D(i, j) = 0 when C(i, j) < α; D(i, j) = 255 × (C(i, j) − α) / (β − α) when α ≤ C(i, j) ≤ β; D(i, j) = 255 when C(i, j) > β;

wherein C(i, j) is the gray value of the pixel at position (i, j) of the CT image before transformation, and D(i, j) is the gray value of the pixel at position (i, j) of the CT image after transformation.
And d, performing morphological expansion operation on the CT image after the gray level transformation, and acquiring a connected region of the CT image after the expansion operation.
And e, determining a target connected region of the CT image, and cutting the CT image according to the target connected region to obtain the cut CT image.
Alternatively, the target connected component can be determined according to the size of the area occupied by the specified organ in the CT image. For example, when the designated organ is a liver, since the liver occupies the largest area in each CT image, the target connected component may be determined as the maximum connected component.
Optionally, when the CT image is cropped, the cropped image is centered on the target connected region, with the edges of the target connected region tangent to the edges of the cropped image. Because each CT image contains multiple organs, directly segmenting an image containing many organs inevitably affects the segmentation speed and the accuracy of the segmentation result to some degree. Therefore, before each CT image is segmented with the local neural network model, it can be cropped according to the size of the target connected region where the designated organ is located, so as to improve the segmentation speed and the accuracy of the segmentation result.
And f, zooming the cut CT image to a target size, and storing the zoomed image as a target image format.
When the neural network model is trained, every image used for training must have the same size. However, the designated organ may differ in size from one CT image to another, so CT images cropped according to the target connected region where the designated organ is located may differ in size; the cropped CT images therefore need to be scaled to the target size.
Illustratively, the target size may be 301 × 400. In practical applications, the target size may be adjusted according to needs, and the embodiment of the present invention is not limited herein.
Alternatively, the target image format may be a BMP format, a JPEG format, a PNG format, or the like.
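Steps d–f can be sketched in plain NumPy as follows. A real implementation would more likely use OpenCV or scipy.ndimage; the 3×3 cross structuring element, 4-connectivity, and nearest-neighbor scaling are illustrative choices, not specified by the patent.

```python
import numpy as np

def dilate(mask):
    """One step of morphological dilation of a boolean mask (3x3 cross element)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def largest_component(mask):
    """Boolean mask of the largest 4-connected region, via flood fill."""
    h, w = mask.shape
    seen = np.zeros_like(mask)
    best, best_size = np.zeros_like(mask), 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, comp = [(si, sj)], []
                seen[si, sj] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(mask)
                    for i, j in comp:
                        best[i, j] = True
    return best

def crop_to_region(image, region):
    """Crop the image to the bounding box of the target connected region."""
    r = np.where(np.any(region, axis=1))[0]
    c = np.where(np.any(region, axis=0))[0]
    return image[r[0]:r[-1] + 1, c[0]:c[-1] + 1]

def resize_nearest(image, target_h, target_w):
    """Nearest-neighbor scaling of the cropped image to the target size."""
    h, w = image.shape
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return image[rows[:, None], cols]
```

When the designated organ is the liver, `largest_component` plays the role of selecting the maximum connected region described in step e, and `resize_nearest` would be called with the target size (e.g. 301 × 400) from step f.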
Optionally, after each CT image is segmented with the local neural network model, the segmentation result may be fine-tuned manually, for example by removing part of the segmented region or adding to it, so as to further improve the accuracy of the segmentation result.
In practical application, after the first client receives a segmentation instruction for segmenting a CT image in a sample to be segmented, if the local neural network model is not the latest version, the first client may present prompt information to the user prompting an update of the neural network model. If the user chooses not to update, the first client may segment the CT image using the local neural network model. In that case, the server may use the resulting sample to train the corresponding (older) neural network model, or may use it to train the latest version of the neural network model.
Step 303, the first client sends the medical sample to the server, where the medical sample includes: each CT image and the first label image corresponding to each CT image.
It should be noted that the first client may send the medical sample to the server when the client is in an idle state (i.e., a state in which no other data is being processed or transmitted), so as to avoid affecting the client's processing of other data.
Optionally, the client may also periodically send new medical samples to the server.
Step 304, the server trains a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the neural network model of the latest version in the server.
In existing neural network training methods, all collected samples are used to train the neural network. However, when some collected samples contain large errors, continuing to use those samples to train the network not only fails to improve training precision but may produce the opposite result. Therefore, after the server receives the medical samples sent by the plurality of clients, it can first screen the received samples and then use the screened samples as training samples to train the first neural network model in the server, improving the convergence speed and training precision of the neural network. For example, the screening process may remove inaccurate medical samples from the medical samples sent by the plurality of clients.
Alternatively, as shown in fig. 5, the process of training the first neural network model in the server according to the medical samples sent by the plurality of clients may include:
Optionally, the process of deleting an inaccurate medical sample from among medical samples sent by multiple clients by the server may have multiple realizable manners, and the following two realizable manners are taken as examples in the embodiment of the present invention for explanation.
In a first implementation manner, the deleting of the inaccurate medical sample can be realized by manual screening, which specifically includes:
step a1, the server displays a plurality of mask images of the first sample in turn, each mask image is formed by superposing one label image in the first sample on a corresponding CT image, and the first sample is any one of the medical samples sent by a plurality of clients.
Alternatively, the server may be provided with an input-output interface through which an external input-output device is connected. For example, the input-output device may include a display screen, such as the display screen of a maintenance device configured for the server, and the server may sequentially display the plurality of mask images of the first sample on that display screen. Each mask image is formed by superimposing one label image of the first sample on the corresponding CT image: the CT image is displayed as the original image, the label image is displayed with a certain transparency, and the pixels corresponding to the designated organ in the label image have a certain pixel value. This display mode makes it convenient to compare the designated organ in the label image with that in the CT image, and thus to judge the accuracy of the segmentation result. For example, a mask image displayed on the display screen may be as shown in fig. 6-1. Assuming that the image portion filled with dots is the portion corresponding to the designated organ in the label image, and the image portion filled with oblique lines is the portion corresponding to the designated organ in the CT image, it can be seen from the mask image that the two portions have a mutually overlapping part and a non-overlapping part: the overlapping part reflects the accurate portion of the segmentation result, and the non-overlapping part reflects the inaccurate portion. When the overlap is poor to a certain degree, the segmentation result can be considered inaccurate. The dashed lines in the figure may be used to indicate the center position of the image, by which the position of the designated organ can be determined.
For the convenience of user comparison, the server may also display, in the same interface as the mask image, a three-dimensional image corresponding to the first sample, where the three-dimensional image is generated from the plurality of CT images in the first sample. Illustratively, fig. 6-2 shows both a mask image (top left) and the three-dimensional image (bottom right) of the first sample; for easy viewing, fig. 6-2 also shows a front view (top right) and a left view (bottom left) of the patient's abdominal cavity, and the dashed lines may again indicate the center position of the image, by which the position of the designated organ can be determined. The analysis of fig. 6-2 is the same as that of fig. 6-1 and is not repeated here.
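As a minimal sketch of how such a mask image could be composed — assuming the CT slice and label image are NumPy arrays, and with `alpha` and `organ_color` as illustrative parameters not specified in this embodiment — the label image may be blended over the CT image as follows:

```python
import numpy as np

def make_mask_image(ct_image, label_image, alpha=0.4, organ_color=255):
    """Superimpose a label image on its CT image with a fixed transparency.

    ct_image:    2-D uint8 array, the original CT slice (displayed as-is).
    label_image: 2-D array, nonzero where the designated organ was segmented.
    alpha:       transparency of the label overlay (an assumed value).
    """
    mask = ct_image.astype(np.float32).copy()
    organ = label_image > 0
    # Blend the organ pixels of the label image over the CT image;
    # all other pixels keep the original CT values.
    mask[organ] = (1.0 - alpha) * mask[organ] + alpha * organ_color
    return mask.astype(np.uint8)
```

Overlapping and non-overlapping organ regions then remain visually distinguishable, as described for fig. 6-1.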
Step b1, the server receives a manually triggered deletion operation on the first sample, triggered on the interface where any mask image is located or on the interface where the first sample is located.
Optionally, when a maintenance person viewing the plurality of mask images of the first sample at the server considers that the mask images show the first sample to be inaccurate, the maintenance person may delete the first sample manually by triggering a deletion operation on the interface where any mask image is located or on the interface where the first sample is located; for example, the deletion operation may be generated by the maintenance person operating a mouse in the corresponding interface. For example, the interface where each mask image or each sample is located may be provided with a delete button corresponding to the first sample, and clicking the delete button generates the corresponding deletion instruction.
And c1, deleting the first sample as an inaccurate sample according to the deletion operation by the server.
Optionally, after the server receives a manually triggered deletion operation on the first sample, the corresponding first sample may be deleted as an inaccurate sample according to the deletion operation.
For example, the deletion operation on the first sample may be triggered by the maintenance person on an interface corresponding to a mask image, such as the interface shown in fig. 6-1 or fig. 6-2. That is, after the server receives a manually triggered deletion operation on the interface where any mask image is located, the server may, according to the system setting, assume by default that the maintenance person considers the first sample corresponding to that mask image to be inaccurate, and delete the first sample from the interface of the current mask image. Alternatively, the maintenance person may trigger the deletion operation on the interface corresponding to the first sample, and the server deletes the first sample accordingly.
In a second implementation manner, the server may delete inaccurate medical samples through automatic screening, without relying on manual screening, which may specifically include:
step a2, the server divides each CT image in the first sample by using a standard neural network model preset in the server to obtain a second label image corresponding to each CT image, the second label image is used for identifying a designated organ included in the CT image, and the first sample is any one of the medical samples sent by the plurality of clients.
Alternatively, the standard neural network model preset in the server is usually the most reliable neural network model preset by the maintenance personnel, and may be, for example, the latest version of the neural network model stored in the server.
Step b2, the server judges whether the segmented-image difference value of each CT image is larger than a preset difference threshold, where the segmented-image difference value is the image difference value between the first label image and the second label image corresponding to the same CT image.
Alternatively, the image difference value of the first label image and the second label image may be represented by an area difference value of a specified organ in the corresponding image. The area difference may be expressed as a difference in the number of pixel points included in the specified organ in the image.
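Assuming binary label images stored as NumPy arrays, the area-based image difference value described above might be computed as follows (function names are illustrative, not from this embodiment):

```python
import numpy as np

def segmentation_difference(first_label, second_label):
    """Segmented-image difference value: the difference in the number of
    pixels belonging to the designated organ in the two label images."""
    area_first = int(np.count_nonzero(first_label))
    area_second = int(np.count_nonzero(second_label))
    return abs(area_first - area_second)

def exceeds_threshold(first_label, second_label, diff_threshold):
    """Step b2: judge whether the difference value exceeds the preset threshold."""
    return segmentation_difference(first_label, second_label) > diff_threshold
```

The preset difference threshold would be chosen per actual conditions, as the embodiment notes.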
In practical applications, the preset difference threshold may be set according to actual conditions, and the embodiment of the present invention is not particularly limited thereto.
And c2, when the proportion of CT images in the first sample whose segmented-image difference value is larger than the preset difference threshold is larger than a preset ratio, the server stores the first sample in a manual confirmation database, which is used for manually confirming whether samples are to be deleted, and step e2 is executed.
Illustratively, the preset ratio may be 5%.
Alternatively, the proportion of CT images in the first sample whose segmented-image difference value is larger than the preset difference threshold may be represented by the ratio of the number of such CT images to the total number of CT images contained in the first sample. For example, assuming that the preset ratio is 5%, the total number of CT images in the first sample is 300, and the number of CT images whose difference value is larger than the preset difference threshold is 21, then the proportion of such CT images in the first sample is 7%. Since 7% > 5%, the server stores the first sample in the manual confirmation database, where it is manually confirmed whether the first sample is to be deleted.
And d2, when the proportion of CT images in the first sample whose segmented-image difference value is larger than the preset difference threshold is not larger than the preset ratio, the server stores the first sample in a training sample database, the samples in which are used for training the neural network model, and step e2 is executed.
And e2, the server determines the samples which are confirmed not to be deleted in the manual confirmation database and/or the samples in the training sample database as training samples.
Optionally, the samples confirmed not to be deleted in the manual confirmation database may be determined as training samples, the samples in the training sample database may be determined as training samples, or both may be determined as training samples. For example, assuming that the sample confirmed not to be deleted in the manual confirmation database is sample 1 and the samples in the training sample database are sample 2 and sample 3, the server may determine sample 1 as a training sample, determine sample 2 and sample 3 as training samples, or determine sample 1, sample 2, and sample 3 all as training samples.
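Steps c2 and d2 amount to a routing rule on the per-image difference values; a minimal sketch, with illustrative names and the 5% preset ratio of the worked example, could look like this:

```python
def route_sample(diff_values, diff_threshold, preset_ratio=0.05):
    """Decide where a first sample goes (steps c2/d2).

    diff_values:    segmented-image difference value for each CT image
                    in the sample.
    diff_threshold: the preset difference threshold.
    Returns the name of the destination database (illustrative strings).
    """
    exceeding = sum(1 for d in diff_values if d > diff_threshold)
    ratio = exceeding / len(diff_values)
    # Larger than the preset ratio -> manual confirmation; otherwise training.
    return "manual_confirmation" if ratio > preset_ratio else "training"
```

With the example from the embodiment (300 CT images, 21 exceeding the threshold), the ratio is 7% > 5%, so the sample is routed to the manual confirmation database.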
The first implementable manner, in which the server deletes inaccurate medical samples from the medical samples sent by the clients, requires manual participation; screening samples manually allows more abnormal samples to be collected, so that the range of accurately segmented samples is wider. The second implementable manner reduces the workload of the staff and ensures the accuracy of the segmentation result while realizing automatic updating of the neural network model.
Alternatively, the server may pack all the training samples and then use the packed training samples to train the first neural network model, which is the latest version of the neural network model in the server. In the packed training samples, each CT image corresponds one-to-one to its first label image.
Optionally, the first neural network model may be trained by a feed-forward back-propagation algorithm, or by other algorithms; the training process may refer to the prior art and is not described in detail in the embodiments of the present invention.
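Since the embodiment leaves the concrete training algorithm to the prior art, the following is only a toy stand-in — a per-pixel logistic model trained by gradient descent — meant to illustrate the forward-pass/back-propagation update loop on (CT image, label image) pairs, not the actual model of this embodiment:

```python
import numpy as np

def train_model(weights, samples, lr=0.01, epochs=5):
    """Toy back-propagation loop (illustrative, not the patented model).

    weights: 1-D float array, one weight per pixel (stand-in for a real network).
    samples: list of (ct_image, label_image) pairs, both flattened float arrays
             with labels in {0, 1}.
    """
    w = weights.copy()
    for _ in range(epochs):
        for ct, label in samples:
            pred = 1.0 / (1.0 + np.exp(-(w * ct)))  # forward pass (sigmoid)
            grad = (pred - label) * ct              # back-propagated BCE gradient
            w -= lr * grad                          # gradient-descent update
    return w
```

A real deployment would use a segmentation network (and framework) of the implementer's choosing; only the train-on-packed-samples loop structure carries over.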
Step 305, the server sends the second neural network model to the first client.
The second neural network model is a neural network model obtained by training the first neural network model in the server according to the medical samples sent by the plurality of clients.
Optionally, since the performance of the trained second neural network model is not necessarily better than that of the first neural network model before training (for example, too many erroneous samples selected for training may result in low accuracy of the trained model), after the server completes the training of the second neural network model, it may send the trained second neural network model to a plurality of test clients to test the image segmentation effect of the second neural network model. In the embodiment of the present invention, the first client may also be a preset test client, so the server may send the second neural network model to the first client.
Optionally, the plurality of test clients for testing may be part of the plurality of clients, or may be all of the plurality of clients. In practical application, which clients are used as the test clients can be set according to actual needs, and the embodiment of the present invention does not specifically limit the clients.
And step 306, displaying a model scoring interface after the first client uses the second neural network model.
Optionally, the first client may be a preset test client. After sending the medical sample to the server, the first client acting as a test client may receive the second neural network model sent by the server and display a model scoring interface after using it. The user (generally a medical worker) may score the second neural network model in the model scoring interface; the score represents the accuracy of the second neural network model's segmentation of medical images, and a higher score indicates that the user considers the segmentation result more accurate. Illustratively, the score may be on a 100-point scale. The specific scoring standard may be set according to the actual application, and the embodiment of the present invention does not specifically limit it.
For example, fig. 7-1 is a schematic diagram of a model scoring interface according to an embodiment of the present invention, where the model scoring interface only displays a scoring area 01, and fig. 7-2 is a schematic diagram of another model scoring interface according to an embodiment of the present invention, where the model scoring interface displays a scoring area 01, a segmentation result display area 02, and an image to be segmented display area 03. In the scoring area 01 shown in fig. 7-1 and 7-2, the user can score the second neural network model according to the own judgment criteria to represent the accuracy of the second neural network model on the medical image segmentation result. In practical applications, the model scoring interface may have other forms, and the embodiment of the present invention does not specifically limit the form.
And 307, the first client receives the scoring of the second neural network model by the user through the model scoring interface.
For example, in the scoring area 01 shown in fig. 7-1 and 7-2, the user may score the second neural network model by clicking a number in the scoring area with the mouse, and the score may be modified by clicking the "×" in the scoring area.
And step 308, the first client sends the score to the server.
Step 309, the server determines a test score according to the score of the first client to the second neural network model.
Optionally, after receiving the scores of the second neural network model from the multiple test clients, the server may determine a test score according to these scores. The test score may be the average of the test clients' scores, the lowest of those scores, or a score calculated according to the degree of influence of each test client on the test score; for example, the degrees of influence may be represented by weights. In practical application, the manner of determining the test score from the multiple scores may be set according to practice, and the embodiment of the present invention is not particularly limited thereto.
For example, assume that three of the multiple clients are test clients, namely client 1, client 2, and client 3, that the scores received by the server from them are 80, 85, and 90, respectively, and that the weights representing their degrees of influence on the test score are 0.2, 0.3, and 0.5, respectively. When the test score is the average of the test clients' scores, the test score is (80 + 85 + 90) / 3 = 85; when the test score is the lowest of the scores, the test score is 80; when the test score is calculated from the three clients' degrees of influence, the test score is 80 × 0.2 + 85 × 0.3 + 90 × 0.5 = 86.5.
Step 3010, the server determines whether the test score is greater than a predetermined passing score.
Alternatively, the preset passing score may be the lowest score indicating that the accuracy of the segmentation result of the current second neural network model is higher than that of the first neural network model before training. The preset passing score may be a manually set value, for example 80; it may also be the test score obtained when the first neural network model was itself tested — for example, if that test score was 82, the preset passing score may be set to 82. Alternatively, the preset passing score may be set according to the situation, and the embodiment of the present invention does not specifically limit it.
A test score larger than the preset passing score means that the accuracy of the segmentation result of the currently used second neural network model is higher than that of the first neural network model before training; a test score not larger than the preset passing score means that the accuracy of the segmentation result of the second neural network model is not higher than that of the first neural network model before training.
And 3011, when the test score is greater than the preset passing score, the server determines the second neural network model as the latest version of neural network model.
For example, assuming that the preset passing score is 80 and the test score of the second neural network model is 82, it can be known that 82>80, i.e. the test score is greater than the preset passing score, the second neural network model can be determined as the latest version of the neural network model.
And step 3012, when the test score is not greater than the preset passing score, the server determines the first neural network model as the latest version of neural network model.
For example, assuming that the preset passing score is 80 and the test score of the second neural network model is 79, it can be known that 79<80, i.e., the test score is not greater than the preset passing score, the first neural network model can be determined as the latest version of the neural network model.
Optionally, after segmenting CT images with the local neural network model, each of the plurality of clients included in the medical image segmentation system may score the local neural network model; that is, even a client that is not a test client may score its local neural network model. This score may be a conventional score, through which the client feeds back to the server the segmentation accuracy of the local neural network model it uses. The scoring process may be: displaying a model scoring interface, receiving the user's score of the local neural network model through the model scoring interface, and sending the score to the server. For the relevant description and the specific implementation of this scoring, reference may be made to the scoring of the second neural network model by the test client, and details are not described here.
It should be noted that conventional scoring of the first neural network model by the plurality of clients included in the medical image segmentation system is optional and not mandatory; that is, after segmenting a CT image with the local neural network model, the user may choose to score the local neural network model or not.
Optionally, multiple versions of the neural network model may be stored in the server, where the latest version of the neural network model is the neural network model currently used by the client, and the other versions of the neural network model are only used for recording or reference by a background maintenance person.
In the medical field, a corresponding neural network model may be established for each different organ; the client performs medical image segmentation based on the neural network model, and the label image obtained after segmentation identifies the designated organ. The above embodiment of the present invention mainly takes the liver as an example of the designated organ; in practical applications, the designated organ may also be the heart or the brain, and the training method of the corresponding neural network model may refer to that of the neural network model for the liver, which is not described in detail in the embodiment of the present invention.
It should be noted that the order of the steps of the neural network model training method provided in the embodiment of the present invention may be appropriately adjusted, and steps may be added or removed according to the circumstances. Any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application, and is therefore not further described.
In summary, in the neural network model training method provided in the embodiment of the present invention, after each of the plurality of clients performs CT image segmentation using the latest neural network model, the server receives the medical samples, comprising a plurality of CT images and label images, sent by the plurality of clients, determines the latest version of the neural network model according to the scores of the test clients, and trains the latest version of the neural network model. This implements online training of the neural network model, shortens the training time of the neural network model, and effectively improves the accuracy of the neural network model.
Fig. 8 is a block diagram illustrating a neural network model training apparatus 800 according to an embodiment of the present invention, where the neural network model training apparatus 800 is applied to a server of a medical image segmentation system, and as shown in fig. 8, the neural network model training apparatus 800 may include:
a first receiving module 801, configured to receive medical samples sent by multiple clients, where the medical samples sent by a first client include: a plurality of CT images and a first label image corresponding to each CT image, the first label image being used for identifying a designated organ contained in the CT image and being obtained by the first client by segmenting the CT images with a local neural network model, the first client being any one of the multiple clients.
The training module 802 is configured to train a first neural network model in the server according to the medical samples sent by the multiple clients, where the first neural network model is a neural network model of a latest version in the server.
In summary, in the neural network model training device provided in the embodiment of the present invention, after each client in the plurality of clients performs CT image segmentation using the latest neural network model, the first receiving module receives the medical sample containing the plurality of CT images and the label images sent by the plurality of clients, and the training module trains the latest version of neural network model, so as to implement online training of the neural network model, shorten the training time of the neural network model, and effectively improve the accuracy of neural network model training.
As shown in fig. 9, training module 802 may include:
the deleting submodule 8021 is configured to delete an inaccurate medical sample from the medical samples sent by the multiple clients to obtain a training sample.
A training submodule 8022 for training the first neural network model in the server using the training samples.
Optionally, the delete sub-module 8021 is specifically configured to:
and sequentially displaying a plurality of mask images of the first sample, wherein each mask image is formed by superposing one label image in the first sample on a corresponding CT image, and the first sample is any one of the medical samples sent by a plurality of clients.
And receiving a deletion operation of the first sample triggered manually on the interface where any mask image is located or the interface where the first sample is located.
The first sample is deleted as an inaccurate sample according to the deletion operation.
Optionally, the delete sub-module 8021 is specifically configured to:
and segmenting each CT image in the first sample by adopting a standard neural network model preset in the server to obtain a second label image corresponding to each CT image, wherein the second label image is used for identifying a designated organ contained in the CT image, and the first sample is any one of the medical samples sent by the plurality of clients.
And judging whether the difference value of the segmented images of the CT image is larger than a preset difference threshold value, wherein the difference value of the segmented images is the image difference value of a first label image and a second label image corresponding to the same CT image.
And when the ratio of the CT image with the segmentation image difference value larger than the preset difference value threshold value in the first sample is larger than the preset ratio, storing the first sample into a manual confirmation database, wherein the sample in the manual confirmation database is used for manually confirming whether to delete or not.
And when the ratio of the CT image with the segmentation image difference value larger than the preset difference value threshold in the first sample is not larger than the preset ratio, storing the first sample to a training sample database, wherein the samples in the training sample database are used for training a neural network model.
And determining samples which are confirmed not to be deleted in the manual confirmation database and/or samples in the training sample database as training samples.
As shown in fig. 10, the neural network model training apparatus 800 may further include:
a sending module 803, configured to send a second neural network model to the multiple test clients, where the second neural network model is a neural network model obtained by training the first neural network model in the server according to the medical samples sent by the multiple clients.
A second receiving module 804, configured to receive scores of the second neural network model by the plurality of test clients.
A first determining module 805 for determining a test score based on the scoring of the second neural network model by the plurality of test clients.
A determining module 806, configured to determine whether the test score is greater than a preset passing score.
A second determining module 807 for determining the second neural network model as the latest version of the neural network model when the test score is greater than the preset passing score.
And a third determining module 808, configured to determine the first neural network model as the latest version of the neural network model when the test score is not greater than the preset passing score.
In summary, in the neural network model training device provided in the embodiment of the present invention, after each client in the plurality of clients performs CT image segmentation using the latest neural network model, the first receiving module receives the medical sample containing the plurality of CT images and the label images sent by the plurality of clients, and determines the latest version of the neural network model according to the score of the test client, and the training module trains the latest version of the neural network model, so as to implement online training of the neural network model, shorten the training time of the neural network model, and effectively improve the accuracy of neural network model training.
Fig. 11 is a block diagram illustrating a neural network model training apparatus 900 according to an embodiment of the present invention, where the neural network model training apparatus 900 is applied to a first client of a medical image segmentation system, and as shown in fig. 11, the neural network model training apparatus 900 may include:
an acquiring module 901, configured to acquire a sample to be segmented including a plurality of CT images of computed tomography, where the CT images include an image of a specified organ.
A segmentation module 902, configured to segment each CT image by using a local neural network model in the first client to obtain a first label image corresponding to each CT image, where the first label image is used to identify a designated organ included in the CT image.
A sending module 903, configured to send the medical sample to the server, so that the server trains a first neural network model in the server according to the medical sample, where the first neural network model is a neural network model of a latest version in the server, and the medical sample includes: each CT image and the first label image corresponding to each CT image.
In summary, according to the neural network model training device provided in the embodiment of the present invention, after each client in the plurality of clients performs CT image segmentation using the latest neural network model, the sending module sends the medical sample including the plurality of CT images and the label images to the server, so that the server can train the latest version of neural network model according to the medical sample, thereby implementing online training of the neural network model, shortening the training time of the neural network model, and effectively improving the accuracy of the neural network model training.
It should be noted that the first client may also be a preset test client. When the first client is a preset test client, as shown in fig. 12, the neural network model training apparatus 900 may further include:
a first receiving module 904, configured to receive the second neural network model sent by the server.
A display module 905 for displaying the model scoring interface after using the second neural network model.
A second receiving module 906, configured to receive a user's scoring of the second neural network model through the model scoring interface.
A score sending module 907 for sending the score to the server.
In summary, according to the neural network model training device provided in the embodiment of the present invention, after each of the plurality of clients performs CT image segmentation using the latest neural network model, the sending module sends the medical sample including the plurality of CT images and the label images to the server, and the score sending module sends the score of the test client to the server, so that the server determines the latest version of the neural network model, and trains the latest version of the neural network model according to the medical sample, thereby implementing online training of the neural network model, shortening the training time of the neural network model, and effectively improving the accuracy of the neural network model training.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and sub-modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present invention further provides a medical image segmentation system, including: at least one server including the neural network model training device 800 and at least one first client connected to the server including the neural network model training device 900.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A neural network model training method, applied to a server of a medical image segmentation system, the method comprising:
receiving medical samples sent by a plurality of clients in an idle state, wherein the medical sample sent by a first client comprises: a plurality of computed tomography (CT) images containing a specified organ of a patient and a first label image corresponding to each CT image, the first label image being used to identify the specified organ contained in the CT image, the first label image being obtained by the first client by segmenting the CT images using a local neural network model, the first client being any one of the plurality of clients, and the local neural network model in the first client being the latest version of the neural network model downloaded by the first client from the server;
training a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the neural network model of the latest version in the server;
testing a second neural network model through a testing client to determine the effectiveness of the second neural network model, wherein the second neural network model is obtained by training a first neural network model in the server according to medical samples sent by the plurality of clients;
wherein the training of the first neural network model in the server according to the medical samples sent by the plurality of clients comprises:
screening the medical samples sent by the plurality of clients in a manual screening mode and/or a server screening mode of the medical image segmentation system to delete inaccurate medical samples in the medical samples sent by the plurality of clients to obtain training samples;
packing the training samples, wherein, in the packed training samples, each CT image is in one-to-one correspondence with its first label image;
and training the first neural network model in the server by adopting the packed training samples.
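The screen-then-pack step of claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names `build_training_set` and `is_accurate`, and the representation of a sample as a `(ct_images, label_images)` tuple, are assumptions introduced here, with `is_accurate` standing in for the manual and/or server-side screening predicate.

```python
def build_training_set(samples, is_accurate):
    """Screen client-submitted medical samples and pack the survivors.

    `samples` is a list of (ct_images, label_images) tuples, one tuple per
    client submission; `is_accurate` is the screening predicate (manual
    and/or server-side screening in the patent's terms). Both names are
    illustrative. Returns a flat list of (ct, label) pairs so that each CT
    image is in one-to-one correspondence with its first label image.
    """
    packed = []
    for ct_images, label_images in samples:
        if not is_accurate(ct_images, label_images):
            continue  # delete inaccurate medical samples
        if len(ct_images) != len(label_images):
            continue  # every CT image needs exactly one label image
        packed.extend(zip(ct_images, label_images))
    return packed
```

The packed pairs would then be fed to the server-side training loop of the first neural network model.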
2. The method of claim 1, wherein the removing inaccurate medical samples from the plurality of client-sent medical samples to obtain training samples comprises:
sequentially displaying a plurality of mask images of a first sample and a three-dimensional image corresponding to the first sample, wherein each mask image is formed by superimposing one label image in the first sample on its corresponding CT image, the first sample is any one of the medical samples sent by the plurality of clients, the three-dimensional image corresponding to the first sample is generated from the plurality of CT images in the first sample, and the label image is transparent except that pixels corresponding to the specified organ have pixel values;
receiving a deletion operation of the first sample triggered manually on an interface where any mask image is located or an interface where the first sample is located;
and deleting the first sample as an inaccurate sample according to the deleting operation.
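The mask image of claim 2 superimposes a semi-transparent label image on its CT image. A minimal sketch of that compositing step follows; the function name `overlay_mask`, the blend factor `alpha`, and the rendering intensity `color` are illustrative choices not specified by the patent.

```python
import numpy as np

def overlay_mask(ct_image, label_image, alpha=0.4, color=255.0):
    """Superimpose a semi-transparent label image on its CT image.

    Pixels belonging to the specified organ (non-zero in the label image)
    are alpha-blended toward `color`; background pixels, where the label
    image is transparent, pass through unchanged.
    """
    ct = np.asarray(ct_image, dtype=np.float64)
    organ = np.asarray(label_image) > 0          # organ pixels have values
    out = ct.copy()
    out[organ] = (1.0 - alpha) * ct[organ] + alpha * color
    return out
```

Displaying such mask images one by one lets a human reviewer trigger the deletion operation on an inaccurate sample.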
3. The method of claim 1, wherein the removing inaccurate medical samples from the plurality of client-sent medical samples to obtain training samples comprises:
segmenting each CT image in a first sample by adopting a standard neural network model preset in the server to obtain a second label image corresponding to each CT image, wherein the second label image is used for identifying a designated organ contained in the CT image, and the first sample is any one of medical samples sent by the plurality of clients;
judging, for each CT image, whether its segmentation image difference value is larger than a preset difference threshold, wherein the segmentation image difference value is the image difference value between the first label image and the second label image corresponding to the same CT image;
when the proportion of the CT image with the segmentation image difference value larger than a preset difference value threshold value in the first sample is larger than a preset ratio, storing the first sample into a manual confirmation database, wherein the sample in the manual confirmation database is used for manually confirming whether to delete or not;
when the proportion of the CT image with the segmentation image difference value larger than a preset difference value threshold value in the first sample is not larger than a preset ratio, storing the first sample to a training sample database, wherein the samples in the training sample database are used for training a neural network model;
and determining samples which are confirmed not to be deleted in the manual confirmation database and/or samples in the training sample database as the training samples.
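The server-side screening of claim 3 can be sketched as below. The metric (fraction of disagreeing pixels between the first and second label images) and the threshold values are assumptions for illustration; the patent only requires some image difference value, a preset difference threshold, and a preset ratio.

```python
import numpy as np

def route_sample(first_labels, second_labels,
                 diff_threshold=0.1, ratio_threshold=0.2):
    """Decide where a client-submitted sample is stored.

    `first_labels` are the client's first label images; `second_labels`
    are produced by the server's preset standard neural network model.
    If the proportion of CT images whose difference value exceeds
    `diff_threshold` is greater than `ratio_threshold`, the sample goes
    to the manual-confirmation database; otherwise it goes directly to
    the training-sample database.
    """
    flagged = 0
    for first, second in zip(first_labels, second_labels):
        a = np.asarray(first) > 0
        b = np.asarray(second) > 0
        diff = np.mean(a != b)  # illustrative image difference value
        if diff > diff_threshold:
            flagged += 1
    ratio = flagged / len(first_labels)
    return "manual_confirmation" if ratio > ratio_threshold else "training"
```

Samples confirmed as accurate in the manual-confirmation database, together with those in the training-sample database, then form the training samples.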
4. The method of any one of claims 1 to 3, wherein the testing the second neural network model by the testing client comprises:
sending the second neural network model to a plurality of test clients;
receiving a score for the second neural network model by the plurality of test clients;
determining a test score according to the scores of the plurality of test clients on the second neural network model;
judging whether the test score is larger than a preset passing score or not;
when the test score is larger than a preset passing score, determining the second neural network model as a neural network model of the latest version;
and when the test score is not greater than a preset passing score, determining the first neural network model as the latest version of neural network model.
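The acceptance test of claim 4 reduces to aggregating the test clients' scores and comparing against a passing score. In this sketch the aggregation (arithmetic mean) and the value of `passing_score` are illustrative assumptions; the patent only requires that a test score be determined from the plurality of scores and compared with a preset passing score.

```python
def select_latest_model(scores, passing_score=80.0):
    """Determine which model becomes the latest version.

    `scores` are the test clients' scores of the second neural network
    model. Returns "second" if the test score exceeds the passing score
    (the second model becomes the latest version), else "first" (the
    first model remains the latest version).
    """
    test_score = sum(scores) / len(scores)  # illustrative aggregation
    return "second" if test_score > passing_score else "first"
```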
5. A neural network model training method is applied to a first client of a medical image segmentation system, wherein the first client is any one of a plurality of clients, and the method comprises the following steps:
acquiring a sample to be segmented comprising a plurality of computed tomography (CT) images, wherein the CT images are images containing a specified organ;
segmenting each CT image by adopting a local neural network model in the first client to obtain a first label image corresponding to each CT image, wherein the first label image is used for identifying a specified organ contained in the CT image, and the local neural network model in the first client is a latest version of neural network model downloaded from a server by the first client;
sending a medical sample to the server in an idle state, so that the server screens the medical samples sent by the plurality of clients in a manual screening manner and/or a server screening manner of the medical image segmentation system to delete inaccurate medical samples and obtain training samples, packs the training samples, trains a first neural network model in the server using the packed training samples, and, after training the first neural network model to obtain a second neural network model, tests the second neural network model through a testing client to determine the validity of the second neural network model; wherein, in the packed training samples, each CT image is in one-to-one correspondence with its first label image, the first neural network model is the latest version of the neural network model in the server, and the medical sample comprises: each CT image and the first label image corresponding to each CT image.
6. A neural network model training apparatus, applied to a server of a medical image segmentation system, the apparatus comprising:
a first receiving module, configured to receive medical samples sent by a plurality of clients in an idle state, wherein the medical sample sent by a first client comprises: a plurality of CT images containing a specified organ of a patient and a first label image corresponding to each CT image, the first label image being used to identify the specified organ contained in the CT image, the first label image being obtained by the first client by segmenting the plurality of CT images using a local neural network model, the first client being any one of the plurality of clients, and the local neural network model in the first client being the latest version of the neural network model downloaded by the first client from the server;
the training module is used for training a first neural network model in the server according to the medical samples sent by the plurality of clients, wherein the first neural network model is the neural network model of the latest version in the server;
the apparatus is further configured to: testing a second neural network model through a testing client to determine the effectiveness of the second neural network model, wherein the second neural network model is obtained by training a first neural network model in the server according to medical samples sent by the plurality of clients;
wherein the training module comprises:
the deleting submodule is used for screening the medical samples sent by the plurality of clients in a manual screening mode and/or a server screening mode of the medical image segmentation system so as to delete inaccurate medical samples in the medical samples sent by the plurality of clients and obtain training samples;
and a training submodule, configured to pack the training samples and train the first neural network model in the server using the packed training samples, wherein, in the packed training samples, each CT image is in one-to-one correspondence with its first label image.
7. A neural network model training apparatus, applied to a first client of a medical image segmentation system, the first client being any one of a plurality of clients, the apparatus comprising:
the device comprises an acquisition module, a segmentation module and a segmentation module, wherein the acquisition module is used for acquiring a sample to be segmented comprising a plurality of CT images of computer tomography, and the CT images are images containing specified organs;
the segmentation module is used for segmenting each CT image by adopting a local neural network model in the first client to obtain a first label image corresponding to each CT image, wherein the first label image is used for identifying a specified organ contained in the CT image, and the local neural network model in the first client is a latest version of neural network model downloaded from a server by the first client;
a sending module, configured to send a medical sample to the server in an idle state, so that the server screens the medical samples sent by the plurality of clients in a manual screening manner and/or a server screening manner of the medical image segmentation system to delete inaccurate medical samples and obtain training samples, packs the training samples, trains a first neural network model in the server using the packed training samples, and, after training the first neural network model to obtain a second neural network model, tests the second neural network model through a testing client to determine the validity of the second neural network model; wherein, in the packed training samples, each CT image is in one-to-one correspondence with its first label image, the first neural network model is the latest version of the neural network model in the server, and the medical sample comprises: each CT image and the first label image corresponding to each CT image.
8. A medical image segmentation system, characterized in that the medical image segmentation system comprises: the system comprises at least one server and at least one first client connected with the server;
the server comprises the neural network model training device of claim 6;
each of the first clients includes the neural network model training device of claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710002230.5A CN106709917B (en) | 2017-01-03 | 2017-01-03 | Neural network model training method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106709917A CN106709917A (en) | 2017-05-24 |
CN106709917B true CN106709917B (en) | 2020-09-11 |
Family
ID=58905798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710002230.5A Active CN106709917B (en) | 2017-01-03 | 2017-01-03 | Neural network model training method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106709917B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108205527B (en) * | 2016-12-16 | 2022-01-18 | 深圳联友科技有限公司 | Drawing method and device for engine data balance |
WO2019056309A1 (en) * | 2017-09-22 | 2019-03-28 | Shenzhen United Imaging Healthcare Co., Ltd. | Method and system for generating a phase contrast image |
US10803984B2 (en) * | 2017-10-06 | 2020-10-13 | Canon Medical Systems Corporation | Medical image processing apparatus and medical image processing system |
EP3477591B1 (en) * | 2017-10-24 | 2020-05-27 | AGFA Healthcare | Avoiding catastrophic interference while training an artificial neural network on an additional task |
CN107817204B (en) * | 2017-11-01 | 2018-12-28 | 中国科学院地质与地球物理研究所 | A kind of shale micro-void structures analysis method and device |
CN108875508B (en) * | 2017-11-23 | 2021-06-29 | 北京旷视科技有限公司 | Living body detection algorithm updating method, device, client, server and system |
CN110353707A (en) * | 2018-03-26 | 2019-10-22 | 通用电气公司 | The training method and system of collimator boundary detection method |
CN109166107A (en) * | 2018-04-28 | 2019-01-08 | 北京市商汤科技开发有限公司 | A kind of medical image cutting method and device, electronic equipment and storage medium |
CN110766693B (en) * | 2018-09-06 | 2022-06-21 | 北京连心医疗科技有限公司 | Method for jointly predicting radiotherapy structure position based on multi-model neural network |
CN109214343B (en) * | 2018-09-14 | 2021-03-09 | 北京字节跳动网络技术有限公司 | Method and device for generating face key point detection model |
WO2020062262A1 (en) | 2018-09-30 | 2020-04-02 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for generating a neural network model for image processing |
CN110969072B (en) * | 2018-09-30 | 2023-05-02 | 杭州海康威视系统技术有限公司 | Model optimization method, device and image analysis system |
CN109615058A (en) * | 2018-10-24 | 2019-04-12 | 上海新储集成电路有限公司 | A kind of training method of neural network model |
CN109492698B (en) * | 2018-11-20 | 2022-11-18 | 腾讯科技(深圳)有限公司 | Model training method, object detection method and related device |
CN109544550B (en) * | 2018-12-05 | 2021-10-22 | 易必祥 | CT image-based intelligent detection and identification method and system |
CN109977957A (en) * | 2019-03-04 | 2019-07-05 | 苏宁易购集团股份有限公司 | A kind of invoice recognition methods and system based on deep learning |
CN110123249B (en) * | 2019-04-09 | 2022-02-01 | 苏州西能捷科技发展有限公司 | Nasosinusitis detection device and use method thereof |
CN110232411B (en) * | 2019-05-30 | 2022-08-23 | 北京百度网讯科技有限公司 | Model distillation implementation method, device, system, computer equipment and storage medium |
CN110321864A (en) * | 2019-07-09 | 2019-10-11 | 西北工业大学 | Remote sensing images explanatory note generation method based on multiple dimensioned cutting mechanism |
CN110503151B (en) * | 2019-08-26 | 2020-11-03 | 北京推想科技有限公司 | Image processing method and system |
CN110570419A (en) * | 2019-09-12 | 2019-12-13 | 杭州依图医疗技术有限公司 | Method and device for acquiring characteristic information and storage medium |
CN110766694B (en) * | 2019-09-24 | 2021-03-26 | 清华大学 | Interactive segmentation method of three-dimensional medical image |
CN110853024B (en) * | 2019-11-14 | 2020-12-22 | 推想医疗科技股份有限公司 | Medical image processing method, medical image processing device, storage medium and electronic equipment |
CN110969622B (en) * | 2020-02-28 | 2020-07-24 | 南京安科医疗科技有限公司 | Image processing method and system for assisting pneumonia diagnosis |
CN111665177A (en) * | 2020-06-11 | 2020-09-15 | 太原理工大学 | Laboratory protection system based on object recognition, toxic gas and heat source detection |
CN111882048A (en) * | 2020-09-28 | 2020-11-03 | 深圳追一科技有限公司 | Neural network structure searching method and related equipment |
CN112350995A (en) * | 2020-09-30 | 2021-02-09 | 山东众阳健康科技集团有限公司 | Image processing method, device, equipment and storage medium |
CN113009077B (en) * | 2021-02-18 | 2023-05-02 | 南方电网数字电网研究院有限公司 | Gas detection method, gas detection device, electronic equipment and storage medium |
CN113223101B (en) * | 2021-05-28 | 2022-12-09 | 支付宝(杭州)信息技术有限公司 | Image processing method, device and equipment based on privacy protection |
CN114938966A (en) * | 2022-03-25 | 2022-08-26 | 康达洲际医疗器械有限公司 | Temperature field acquisition system and method based on nano probe photo-thermal sensitization |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102955946A (en) * | 2011-08-18 | 2013-03-06 | 刘军 | Two-stage fast classifier based on linear classification tree and neural network |
CN103489009A (en) * | 2013-09-17 | 2014-01-01 | 北方信息控制集团有限公司 | Pattern recognition method based on self-adaptation correction neural network |
CN105160361A (en) * | 2015-09-30 | 2015-12-16 | 东软集团股份有限公司 | Image identification method and apparatus |
CN105447498A (en) * | 2014-09-22 | 2016-03-30 | 三星电子株式会社 | A client device configured with a neural network, a system and a server system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106709917B (en) | Neural network model training method, device and system | |
US11657487B2 (en) | Focus-weighted, machine learning disease classifier error prediction for microscope slide images | |
US20190102878A1 (en) | Method and apparatus for analyzing medical image | |
CN109636808B (en) | Lung lobe segmentation method based on full convolution neural network | |
JP2024079743A (en) | Method for analyzing image, device, program, and method for manufacturing learned deep learning algorithm | |
CN111696083B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111488921A (en) | Panoramic digital pathological image intelligent analysis system and method | |
CN113052795A (en) | X-ray chest radiography image quality determination method and device | |
JP2017510427A (en) | Radiation image lung segmentation technology and bone attenuation technology | |
CN111476777A (en) | Chest radiography image processing method, system, readable storage medium and equipment | |
CN102573641B (en) | Image processing device and image processing method | |
CN110223279A (en) | A kind of image processing method and device, electronic equipment | |
CN112785591B (en) | Method and device for detecting and segmenting rib fracture in CT image | |
CN109983502A (en) | The device and method of quality evaluation for medical images data sets | |
CN114240874A (en) | Bone age assessment method and device based on deep convolutional neural network and feature fusion and computer readable storage medium | |
CN107978003B (en) | CT image metal artifact processing method and device | |
CN116312986A (en) | Three-dimensional medical image labeling method and device, electronic equipment and readable storage medium | |
CN115601811A (en) | Facial acne detection method and device | |
CN109389577B (en) | X-ray image processing method and system, and computer storage medium | |
US20220122261A1 (en) | Probabilistic Segmentation of Volumetric Images | |
Jian et al. | Cloud image processing and analysis based flatfoot classification method | |
CN115330696A (en) | Detection method, device and equipment of bracket and storage medium | |
CN109671095B (en) | Method and related device for separating metal objects in X-ray photo | |
CN114037775A (en) | Bone structure growth method and device, electronic equipment and storage medium | |
CN111612755A (en) | Lung focus analysis method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||