
CN113012112B - Evaluation system for thrombus detection - Google Patents

Evaluation system for thrombus detection

Info

Publication number
CN113012112B
CN113012112B (application CN202110222887.9A)
Authority
CN
China
Prior art keywords
limb
image
unit
images
imaging unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110222887.9A
Other languages
Chinese (zh)
Other versions
CN113012112A (en)
Inventor
高兰
郭桂丽
郭然
张和艳
徐倩
田红敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuanwu Hospital
Original Assignee
Xuanwu Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuanwu Hospital
Priority to CN202110222887.9A
Publication of CN113012112A
Application granted
Publication of CN113012112B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1072 Measuring distances on the body, e.g. measuring length, height or thickness
    • A61B 5/1075 Measuring dimensions by non-invasive methods, e.g. for determining thickness of tissue layer
    • A61B 5/1077 Measuring of profiles
    • A61B 5/1079 Measuring physical dimensions using optical or photographic means
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to an evaluation system for thrombus detection comprising at least a light projecting unit and an imaging unit. The light projecting unit projects, into the target space containing the limb to be measured, marking light that makes the limb contour easy for the imaging unit to identify; the marking light emitted into the target space is recorded in a first image acquired by the imaging unit. A plurality of imaging units are distributed at set spatial positions around the limb so that several first images can be acquired synchronously from multiple angles, allowing the imaging units to capture marked images of the surface of a designated section of the limb. The imaging unit transmits the acquired image data to a processing unit, which invokes a pre-trained image processing network to output position estimates in one or more first images and generates a three-dimensional model of the limb and the limb parameters by triangulation.

Description

Evaluation system for thrombus detection
Technical Field
The invention relates to the technical field of thrombus detection, in particular to an evaluation system for thrombus detection.
Background
Hospitals currently treat many patients with a history of venous thromboembolism, quadriplegia, hip or knee joint replacement, spinal cord injury, stroke and similar conditions. Long-term bedridden patients develop limb swelling, lower-leg itching and pain and other symptoms caused by blood stagnation in the limbs, which readily leads to deep venous thrombosis of the limb; if a deep-vein thrombus embolus breaks free it can cause pulmonary embolism, seriously endangering or even ending the patient's life. Clinicians therefore commonly observe at frequent intervals whether a patient's limbs are swollen and whether skin temperature and colour have changed; stroke patients in particular often show lower-limb swelling caused by venous thrombosis. Measuring the limb circumference and comparing the circumference data obtained at designated positions at different time points allows the progression of the condition to be followed effectively, so that preventive nursing measures can be taken and further deterioration prevented.
When measuring the circumference of a patient's leg, a measurement point must first be determined. In clinical practice the patella is usually chosen as the reference point, and the circumference is measured at fixed points 10 cm to 15 cm above and below the patella. However, current clinical measurement relies on visual inspection or direct measurement with a flexible tape; accuracy is poor, the operation is inconvenient, and because the upper and lower measurement points cannot be positioned exactly symmetrically, the measured data are unreliable and cannot provide an effective reference for medical staff.
Chinese patent CN108937945A discloses a portable multifunctional limb measurement, recording and evaluation device. It comprises a limb circumference measuring device, a measuring tool and recording-meter clamping box connected in sequence; a limb length and angle measuring device arranged on the right side of, and movably connected to, the clamping box; and a device for measuring allergy-test skin reactions, bedsores and wound length, width and depth arranged on the lower side of, and connected to, the clamping box. The device can measure limb circumference, limb length, limb movement angle, allergy-test skin diameter, and the size and area of bedsore ulcers and wounds; it also supports pupil observation and measurement, pain assessment, bedsore grading, and muscle strength evaluation for deep venous thrombosis, providing an accurate basis for medical staff to diagnose disease and monitor treatment.
Although that invention combines several measuring tools to a certain extent and provides an auxiliary positioning structure for marking, it cannot select an accurate measurement position on demand, nor perform the measurement quickly and effectively. What is needed is a method that, without moving the patient's limb, acquires and stores data on the thickness change of the limb at set positions above and below the patella, so that medical staff can evaluate from the thickness change over a period of time whether the patient has a thrombus, or the patient's risk of thrombus onset, and respond to changes in the patient's condition in good time.
Furthermore, understanding differs among those skilled in the art, and the inventors studied numerous documents and patents while making the present invention, so the text does not recite every detail of that material. This by no means implies that the invention lacks these prior-art features; the invention may possess all of them, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides an evaluation system for thrombus detection comprising at least a light projecting unit and an imaging unit. The light projecting unit can project marking light directionally onto the limb to be measured, and the imaging unit can acquire images of the part of the limb surface marked by that light. The light projecting unit projects, into the target space containing the limb, marking light that makes the limb contour easy for the imaging unit to recognise, and this light is recorded in a first image acquired by the imaging unit. A plurality of imaging units are distributed at set spatial positions around the limb so that several first images can be acquired synchronously from multiple angles, allowing the imaging units to capture marked images of the surface of a designated section of the limb. The imaging unit transmits the acquired image data to a processing unit, which invokes a pre-trained image processing network to output position estimates in one or more first images and generates a three-dimensional model of the limb and the limb parameters by triangulation.
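The triangulation step described above can be sketched as follows. This is an illustrative midpoint method under assumed geometry, not the patent's actual implementation; the function name and the sample camera placement are hypothetical. Each imaging unit contributes a ray from its centre through the detected image position, and the midpoint of the shortest segment between two such rays approximates the 3-D surface point.

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between the rays p1 + t*d1 and p2 + s*d2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * v for p, v in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + s * v for p, v in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two hypothetical imaging units one unit apart, both sighting the same
# marked point on the limb surface at depth 10:
point = closest_point_between_rays([0, 0, 0], [0, 0, 1], [1, 0, 0], [-1, 0, 10])
```

Repeating this for every marked point across the synchronously acquired first images yields the point cloud from which the three-dimensional limb model is built.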
According to a preferred embodiment, the imaging unit transmits to the processing unit one or more first images of the same light scattering area of the limb acquired within the same time period. The processing unit generates general parameter data for the set part of the limb by comparing the acquired first images against a storage unit holding a plurality of reference images together with their reference points and image parameters.
According to a preferred embodiment, the imaging unit captures images with the patella of the limb as the reference point and corrects the recorded first image by calibrating its imaging data, so that the acquired first image records a clear contour containing both the reference point and the limb.
According to a preferred embodiment, the first image recorded by the imaging unit is the image taken at the first recorded measurement. The processing unit generates a three-dimensional model and parameters of the initially set part of the limb from the reference-point data and the position estimates in the one or more first images; once at least one first image has been captured, the imaging unit and the light projecting unit perform triangulation using parameter values describing their physical geometry, including mutual orientation and displacement. The processing unit can transmit the resulting three-dimensional limb model and parameters to the analysis platform.
According to a preferred embodiment, the processing unit can segment the limb picture elements related to the target position from the other picture elements, so that the limb picture elements of several images acquired in the same time period can be processed together and the corresponding three-dimensional limb model generated.
According to a preferred embodiment, the light projecting unit and the imaging unit acquire second images by imaging the patella and the limb sections above and below it again during a plurality of time periods over the course of observation. The processing unit can perform image processing and picture-element segmentation on the second images acquired within the same time period, generating a three-dimensional model and limb parameters corresponding to the second image data.
According to a preferred embodiment, the processing unit transmits the three-dimensional limb models and parameters obtained from the plurality of second images to an analysis platform. The analysis platform can collect the stored model and parameters derived from the first image acquired in the initial period together with those derived from the second images acquired in order over subsequent periods, and the stored data can be displayed through a display module.
According to a preferred embodiment, the analysis platform can compare the three-dimensional limb models and parameters of the sequentially acquired second images against those of the first image acquired in the initial period and/or those of the second image acquired in the preceding period, and thereby determine the change in the limb sections above and below the patient's patella.
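The comparison analysis can be sketched as a simple check of each follow-up circumference against the initial baseline. The function name, the 2 cm threshold and the sample values are illustrative assumptions, not figures from the patent; any clinical threshold would be set by medical staff.

```python
def circumference_change(baseline_cm, followups_cm, threshold_cm=2.0):
    """Compare follow-up circumference measurements against the initial
    baseline and flag increases exceeding the (illustrative) threshold."""
    report = []
    for value in followups_cm:
        delta = value - baseline_cm
        report.append({
            "circumference_cm": value,
            "delta_cm": round(delta, 2),
            "swelling_suspected": delta >= threshold_cm,
        })
    return report

# Hypothetical series: baseline 36.5 cm, then three later observation periods.
history = circumference_change(36.5, [36.8, 37.4, 39.1])
```

In the sketch only the last measurement, 2.6 cm above baseline, would be flagged for the attention of medical staff.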
According to a preferred embodiment, before recording the first image or as an initial step of recording it, and likewise before recording the second image or as an initial step of recording it, respective scanner settings are applied and respective operating conditions are set so that one or both of the light projecting unit and the imaging unit operate under those conditions; the respective operating conditions are the same at least during the recording of the first image and the recording of the second image.
The application also provides an evaluation system for thrombus detection comprising a light projecting unit that projects, into the target space containing the limb to be measured, marking light that makes the limb contour easy for the imaging unit to recognise; on receiving a projection instruction, the light projecting unit projects structured light onto a target area of the limb surface. The imaging unit is arranged at a certain distance from, and a certain included angle to, the light projecting unit and, in response to the instruction that the light projecting unit has completed the projection operation, records a first image containing at least part of the surface of the limb to be measured.
According to a preferred embodiment, the processing unit can preprocess the limb image data acquired by the imaging unit in the patient's initial state and transmit the data to the analysis platform, so that the analysis platform can selectively adopt the received initial image data and the limb model generated from it as the initial baseline for measuring the patient's limb size.
Drawings
Fig. 1 is a logical schematic diagram of a preferred embodiment of an evaluation method of the present invention applied to thrombus detection.
List of reference numerals
1: Light projecting unit 2: imaging unit 3: processing unit
4: Analysis platform 5: display module
Detailed Description
Example 1
An evaluation method for thrombus detection measures the limb size of a bedridden patient's lower limb at positions 10 cm to 15 cm above and below the patella, in order to judge whether the adverse phenomenon of lower-limb swelling has occurred. The method judges whether the lower limb has enlarged because of venous thrombosis from the change in limb sizes measured in sequence over several spaced time periods, so that targeted medical treatment of deep venous thrombosis that may occur, or has already occurred, can be carried out promptly and effectively, deterioration of the bedridden patient's condition can be avoided, the progression of the condition can be followed in good time, and a reasonable, staged treatment plan can be drawn up accordingly.
According to a specific embodiment, as shown in Fig. 1, to monitor effectively whether venous thrombosis of the lower limb occurs in a bedridden patient, a light projecting unit 1 and an imaging unit 2 are arranged to image the limb within a set target area that can be illuminated by structured light. The light projecting unit, positioned at a certain distance and angle from the limb, projects ranging light directionally onto the limb to be measured, and the imaging unit 2 acquires images of the target area marked by the light projected by the light projecting unit 1, thereby obtaining images carrying information about the limb. Preferably, the imaging unit 2 is arranged at a certain distance from, and at a certain angle relative to, the light projecting unit 1, so that after the light projecting unit 1 projects structured light onto the target area of the limb, the imaging unit 2 is started accordingly and images that area. In addition, when the light projection-imaging device is moved, the imaging unit 2 can acquire images of the same limb part from different angles and areas as the light projecting unit 1 moves. The images acquired within the same time period are processed by a pre-trained image processing network to estimate the position of the limb in each image; triangulation then turns the resulting sets of position data into a three-dimensional model of the limb and the actually acquired limb parameters the model represents, from which the limb dimensions at fixed distances above and below the patella at a given time can be obtained.
Specifically, the imaging unit 2 transmits to the processing unit 3 one or more images of the same light scattering area of the limb acquired within the same time period. The processing unit 3 compares the acquired images against a storage unit holding a plurality of reference images with reference calibration points and their corresponding parameters, and generates general parameter data for the set part of the limb; from the generated parameter data and its correspondence with the calibrated reference images, the processing unit 3 can further generate the three-dimensional limb model corresponding to the reference images. Preferably, the imaging unit 2 takes the position of the patella as the reference point and corrects the recorded image by calibrating its imaging data, so that the acquired image records a clear outline containing both the reference point and the limb. When data are acquired from the set limb of the patient over multiple time periods, the three-dimensional models and parameters produced by the processing unit 3 are transmitted to the analysis platform 4, which lists and comparatively analyses the acquired data sets. The change over a period of time in the limb size at the positions above and below the patient's patella can then be displayed to medical staff, making it easy to judge whether venous thrombosis has occurred, or is likely to occur, in the bedridden patient, and to devise treatment plans suited to different patients.
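The comparison-and-search against stored reference images could, for instance, be a nearest-neighbour lookup over the image parameters. This is a speculative sketch of one way to realise that step; the function name, the parameter vectors and the reference entries are all hypothetical.

```python
def nearest_reference(measured_params, references):
    """Return the stored reference entry whose parameter vector is closest
    (squared Euclidean distance) to the measured image parameters."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda ref: dist2(measured_params, ref["params"]))

# Hypothetical stored references: (contour width, patella offset) in arbitrary units.
references = [
    {"id": "ref_small", "params": [30.0, 12.0]},
    {"id": "ref_medium", "params": [36.0, 12.5]},
    {"id": "ref_large", "params": [42.0, 13.0]},
]
match = nearest_reference([35.2, 12.4], references)
```

The matched reference then supplies the correspondence from which the general parameter data and the associated limb model are derived.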
In addition, the three-dimensional models obtained from multiple acquisitions can be overlaid and compared, helping medical staff understand more intuitively how the patient's limb size has actually changed. This avoids judging the condition solely from size data at the set position, and reduces the effect on measurement accuracy of limb deformation caused by the patient's posture, external compression and the like.
The first image recorded by the imaging unit 2 is the image taken at the first recorded measurement of the patient's limb, and the three-dimensional limb model and parameters generated from it serve as reference data for the patient's initial state. The processing unit 3 generates the three-dimensional model of the set part of the original limb and its parameters from the reference-point data and the position estimates in the one or more first images; once at least one first image has been captured, the light projecting unit and the imaging unit perform triangulation using parameter values describing their physical geometry, including mutual orientation and displacement. The processing unit 3 can segment the limb picture elements related to the target position from the other picture elements, so that the limb picture elements of several images acquired in the same time period can be processed and the corresponding three-dimensional limb model generated. The processing unit 3 can transmit the processed three-dimensional model of the limb and its parameters to the analysis platform 4.
The light projection-imaging assembly comprising the light projecting unit 1 and the imaging unit 2 can acquire second images by imaging the patella and the limb sections above and below it again over a plurality of time periods during observation. The processing unit 3 can perform image processing and picture-element segmentation on the second images acquired within the same time period, generating the three-dimensional model and limb parameters corresponding to the second image data; these are transmitted to the analysis platform 4. The analysis platform 4 collects the stored model and parameters derived from the first image acquired in the initial period together with those derived from the second images acquired in sequence over later periods, and the stored data can be displayed through the display module 5. The analysis platform 4 can also compare the models and parameters of the sequentially acquired second images against those of the first image from the initial period and/or those of the second image from the preceding period, and so judge the change in the limb above and below the patient's patella.
Before recording the first image or as an initial step of recording it, and likewise before recording the second image or as an initial step of recording it, respective scanner settings are applied and respective operating conditions are set so that one or both of the light projecting unit and the imaging unit operate under those conditions; the respective operating conditions are the same at least during the recording of the first image and the recording of the second image.
Example 2
An evaluation system capable of determining whether a bedridden patient has developed venous thrombosis of a limb includes a light projection-imaging scanning and analysis system that can be integrated into a single device within one housing. The light projecting unit 1 projects its structured light onto the surface of the set position of the limb from a certain angle and distance. The imaging unit 2 is arranged at a distance in the range of 20 mm to 500 mm or more from the light projecting unit 1 and moves together with it, so that images of the limb surface can be recorded from different angles and positions.
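With the projector and camera separated by a known baseline, the depth of a lit surface point follows from the law of sines in the projector-point-camera triangle. The sketch below is a generic laser-triangulation relation for illustration, not a formula stated in the patent; the function name and the sample angles are assumptions.

```python
import math

def depth_from_baseline(baseline_mm, alpha_deg, beta_deg):
    """Perpendicular distance from the projector-camera baseline to the lit point.

    alpha_deg / beta_deg are the angles, at the projector and at the camera,
    between the baseline and the rays to the lit point (law of sines)."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return baseline_mm * math.sin(a) * math.sin(b) / math.sin(a + b)

# Symmetric 45/45 degree geometry over a 100 mm baseline: the point lies 50 mm out.
z = depth_from_baseline(100.0, 45.0, 45.0)
```

A wider baseline or steeper angles change the sensitivity of depth to angular error, which is one reason the 20 mm to 500 mm spacing above matters.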
Once the images have been recorded, an image processing network configured during training is invoked to output estimates of the position, in one or more images, of the target position on the surface of the physical object. These position estimates are then input to a subsequent triangulation. Because the network provides position estimates with improved accuracy, it improves the input to the triangulation, so that a point cloud or three-dimensional model of the limb surface can be calculated accurately. With an appropriately trained image processing network, the system can scan and acquire data effectively even when the contour of the limb to be measured is indistinct, and can scan object surfaces whose surface characteristics vary considerably.
The system may be operative to receive user input to perform a scanning operation that generates a partial or complete computer-readable point cloud or three-dimensional model of the surface of the physical object, after receiving user input setting the operating conditions of one or both of the light projecting unit and the imaging unit. The user therefore need not spend time on trial-and-error adjustment of the scanning process before satisfactory results are achieved. The training image processing network may be able to generalise within and/or beyond its training data, which increases the chance that the user obtains better scan results than could otherwise be achieved in at least the first few attempts at scanning a particular limb; this matters because a scan may take considerable time to complete, e.g. 5 to 30 minutes or more. Further, the system may receive user input via the user interface to perform digital reconstruction operations including one or more of: retrieving the training image processing network, using the training image processing network, and generating a partial or complete computer-readable point cloud or three-dimensional model of the object surface. Since both the scanning operation and the digital reconstruction operation may be time-consuming, and since they may run on various hardware components, it is advantageous to perform either operation in response to user input. In either case, the user is freed from the very time-consuming task of tuning the operating conditions by a trial-and-error process involving both the scanning operation and the digital reconstruction operation.
Preferably, the image processing network may be configured during training to suppress the undesired effects of the light scattering region, which is a source of erroneous shifts in the estimate of the target position. The trained image processing network may use a combination of linear and nonlinear operators.
Preferably, the structured light is projected onto the surface of the object to be measured at a target position that is gradually displaced across the surface. This can be achieved by a motorized mount supporting angular movement of the light projecting unit. The structured light may be aimed at a target position on the object to be measured. When observing an object on which the structured light impinges, part of the illuminated surface appears focused at the target position (strong light intensity), with an approximately Gaussian intensity distribution around the target position along the direction orthogonal to the structured light, which may be a line. The light scattering region illuminated by the structured light projected substantially at the target position on the surface of the physical object may appear as one or more regular or irregular illuminated areas, symmetrical or asymmetrical with respect to the target position. The target position may correspond to the geometric center or "center of gravity" of the structured light (e.g., the center of the approximately Gaussian distribution), disregarding the light scattering region at least to some extent.
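The "center of gravity" of the approximately Gaussian cross-profile can be estimated per image row with an intensity-weighted centroid. This is an illustrative sketch, not the patent's claimed method; the function name and the simple thresholding are assumptions:

```python
import numpy as np

def line_center_per_row(image, threshold=0.0):
    """Estimate the sub-pixel column of the structured-light line in each
    image row as the intensity-weighted centroid ("center of gravity") of
    the approximately Gaussian cross-profile.  Rows with no intensity
    above `threshold` return NaN."""
    img = np.where(image > threshold, image.astype(float), 0.0)
    cols = np.arange(img.shape[1])
    mass = img.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        centers = (img * cols).sum(axis=1) / mass
    return np.where(mass > 0, centers, np.nan)
```

A trained network would replace or refine this classical estimator, in particular by suppressing contributions from the light scattering region that would otherwise pull the centroid away from the true target position.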
Preferably, the light projecting unit 1 may include a light source (such as an LED or a laser) and may include one or more of an optical lens and a prism. The light projecting unit 1 may comprise a "fan laser", i.e., a laser emitting a laser beam and a laser line generator lens converting the beam into a uniform straight line. The laser line generator lens may be configured as a cylindrical lens or a rod lens that focuses the laser beam along one axis to form a line of light. The structured light may be configured as a 'point', an 'array of points', a 'matrix of points' or a 'cloud of points', as a single line, or as a plurality of parallel or intersecting lines. The structured light may be, for example, 'white', 'red', 'green' or 'infrared', or a combination thereof. Seen from the perspective of the light projecting unit 1, structured light configured as a line appears as a straight line on the object; seen from a viewing angle different from that of the light projecting unit 1, the line appears curved if the object is curved. This curve can be observed in an image captured by a camera arranged at a distance from the light projecting unit 1. The target position may alternatively correspond to an edge of the structured light. The edge may be defined according to a statistical criterion, for example, as the location where the light intensity is approximately half the intensity at the center of the structured light, which may have an approximately Gaussian profile. Other criteria for detecting edges may also be used. The target position may correspond to a 'left edge', 'right edge', 'upper edge', 'lower edge', or a combination thereof. An advantage of using target positions at the edges of the structured light is that the resolution of the 3D scan can be improved.
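The half-maximum edge criterion can be sketched as follows (an assumed illustration, not the patent's implementation): starting from the intensity peak, walk outward until the profile crosses half the peak value, and interpolate linearly between the two straddling pixels for sub-pixel precision.

```python
import numpy as np

def left_edge_half_max(profile):
    """Locate the 'left edge' of a line cross-profile at half its peak
    intensity, with linear interpolation between the two straddling
    pixels for sub-pixel precision."""
    profile = np.asarray(profile, dtype=float)
    peak = int(np.argmax(profile))
    half = profile[peak] / 2.0
    # walk left from the peak until the intensity drops below half maximum
    for i in range(peak, 0, -1):
        if profile[i - 1] < half <= profile[i]:
            # linear interpolation between pixels i-1 and i
            return (i - 1) + (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    return float(peak)  # no crossing found inside the profile
```

A 'right edge' would be found symmetrically by walking right from the peak; combining both edges doubles the number of 3D samples per line, which is the resolution advantage mentioned above.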
Preferably, the light projection-imaging integrated device of the present system may have one or two wired interfaces (for example according to the USB standard) or wireless interfaces (for example according to the Wi-Fi or Bluetooth standard) for transmitting the image sequence to a computer (for example a desktop, notebook, tablet or smartphone) provided with the analysis platform 4. The sequence of images may conform to an image or video format (e.g., the JPEG standard). Retrieving the trained image processing network, processing the images, and generating a partial or complete computer-readable point cloud or three-dimensional model of the object surface may each be performed by the computer, which may receive the sequence of images via a data communication link. The imaging unit 2 may be configured as a color camera, for example an RGB camera, or as a gray-tone camera. The term triangulation should be interpreted to include any type of triangulation, i.e., determining the location of a point by forming triangles to it from known points; this includes, but is not limited to, the triangulation used in one or more of epipolar geometry, photogrammetry and stereo vision.
Preferably, the image recorded by the imaging unit 2 may be a monochrome image or a color image. The imaging unit 2 may be configured with a camera sensor that outputs images in a matrix format of columns and rows of monochrome or color pixel values. The 2D representation may take the format of a list of 2D coordinates, e.g., referring to the column and row indices of the first image. In some aspects, the 2D representation is obtained with sub-pixel precision. The 2D representation can alternatively or additionally be output in the form of a 2D image, for example a binary image with only two possible pixel values. The 2D representation is thus encoded and available for preparing the input to the triangulation. The processing is performed using the trained image processing network of the processing unit 3 to obtain the target position in at least one of the first images. The parameter values representing the physical geometry of the light projecting unit 1 and the imaging unit 2 may comprise one or more of a mutual orientation and a 2D or 3D displacement. The parameter values may comprise first values that remain fixed at least during the scanning of a specific 3D object and second values that vary during the scanning of that object. The second values may be read from a sensor or provided by a controller controlling the scanning of the 3D object; the sensor may, for example, sense rotation of the light projector or a component thereof.
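The conversion from the binary-image form of the 2D representation to the coordinate-list form can be sketched as follows (function name and the per-row averaging are illustrative assumptions):

```python
import numpy as np

def mask_to_coordinate_list(mask):
    """Convert a binary segmentation image (target-position pixels == 1)
    into the 2D-coordinate-list representation used as triangulation
    input: one (row, sub-pixel column) pair per image row that contains
    at least one target pixel."""
    coords = []
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:  # mean column of the marked run gives sub-pixel precision
            coords.append((r, float(cols.mean())))
    return coords
```

Either representation encodes the same estimate; the coordinate list is simply the form most directly consumable by a triangulation routine.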
When performing segmentation classification of picture elements, a picture element distinguished from the other picture elements as a target position constitutes an estimate of a target position on the surface of the physical object. Picture elements distinguished as target positions may be encoded with one or more unique values (e.g., as binary values in a binary image). The trained image processing network may receive an image of the image sequence as an input image and may provide, in an output image, a segmentation of the picture elements belonging to a target position. The output image may have a higher resolution than the input image, which allows a sub-pixel-accurate estimate of the target position to be input to the triangulation. Alternatively, the output image may have a lower resolution, for example to speed up the triangulation-related processing. A higher-resolution image may be generated by upsampling as known in the art, for instance using bicubic interpolation (e.g., via bicubic filters); in some embodiments other types of upsampling are used. The upsampling may be, for example, eightfold, fourfold, or another upsampling scale. Further, the segmentation may correspond to an estimated target location identified by an image processor performing image processing in response to parameters controlled by a human operator, e.g., parameters relating to one or more of: image filter type, image filter kernel size, and intensity threshold. Such parameters may be set by the operator during a trial-and-error process in which the operator uses visual inspection to verify that the parameters yield an accurate segmentation of the target location.
Preferably, the trained image processing network is a convolutional neural network, such as a deep convolutional neural network. A convolutional neural network can provide highly accurate estimates of the target position even under suboptimal operating conditions of the projector, for example where the light scattering region significantly distorts the light pattern on the surface of the physical object and blurs its spatial definition (e.g., due to ambient light or the surface properties of the physical object). Proper training of the convolutional neural network yields better segmentation results and hence a more accurate estimate of the target position. In some embodiments, the trained image processing network includes a support vector machine. The trained image processing network may also be a deep convolutional network with a U-Net architecture, comprising downsampling and upsampling operators. Such a network provides a good balance between computational load and accuracy, not least when only a relatively small set of training data is available. A convolutional network with a U-Net architecture outputs a segmentation map in the form of an image.
Example 3
An assessment system for the occurrence or presence of venous thrombosis in bedridden patients, based on measurement of limb dimensions.
After receiving a measurement instruction, the light projecting unit 1 projects structured light onto the patient's lower limb within the range 10 cm-15 cm above and below the patella, so that the scattering range of the projected light stays within this limited region. The camera of the imaging unit 2 then records images of the part of the limb surface within the structured-light scattering range, yielding the first images; by rotating or moving the light projecting unit, with the camera of the imaging unit 2 following it, images covering the whole limb surface in the range 10 cm-15 cm above and below the patella are recorded and stored. Next, the acquired first images are segmented and processed by the processing unit 3 to obtain a three-dimensional model of the limb to be measured together with the size data of the modeled limb. Finally, the processed data are transmitted to the analysis platform 4 for storage and analysis and are displayed via the display module 5; the display module 5 also serves as the user operation terminal, on which an operator can set parameters and perform related control operations.
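One simple way the size data could be derived from the reconstructed surface points (a sketch under assumptions; the patent does not specify the algorithm) is to take the scanned points of one horizontal slice, e.g., at patella height, and approximate the limb circumference as the perimeter of the angle-ordered polygon through them:

```python
import numpy as np

def cross_section_circumference(points_xy):
    """Approximate the circumference of a limb cross-section from the
    scanned surface points of one horizontal slice: order the points by
    polar angle around their centroid and sum the closed-polygon edges."""
    pts = np.asarray(points_xy, dtype=float)
    centered = pts - pts.mean(axis=0)
    order = np.argsort(np.arctan2(centered[:, 1], centered[:, 0]))
    ring = pts[order]
    edges = np.diff(np.vstack([ring, ring[:1]]), axis=0)  # close the loop
    return float(np.linalg.norm(edges, axis=1).sum())
```

This assumes the slice is roughly star-shaped around its centroid, which holds for limb cross-sections; denser scan points give a tighter approximation.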
After the first images have been acquired, medical staff repeat the image acquisition on the same limb region of the patient at set time intervals, so as to capture how the limb of the bedridden patient changes over time. The second images acquired at these later times are compared with the limb size data obtained from the first images to judge whether the patient has, or is likely to develop, lower-limb venous thrombosis. The image comparison includes overlay comparison of the three-dimensional models obtained from image processing and direct comparison of the size data. The processing unit 3 transmits the processed data of the second images to the analysis platform 4, which compares the data of the initially acquired first images with the data of the repeatedly acquired second images and judges, from the change in the limb size data, whether thrombosis is occurring. If the patient's limb measurements grow progressively and exceed a preset threshold, the analysis platform 4 transmits an early-warning signal to the display module to alert the medical staff to possible venous thrombosis of the patient.
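The analysis platform's early-warning rule can be sketched as a simple threshold comparison against the baseline measurement. The function name and the 3 cm default threshold are illustrative assumptions, not a clinical guideline or the patent's preset value:

```python
def swelling_alert(baseline_cm, follow_ups_cm, threshold_cm=3.0):
    """Compare serially acquired limb circumferences against the first
    (baseline) measurement and flag every follow-up whose increase
    exceeds a preset threshold, mirroring the early-warning rule of the
    analysis platform.  Returns a list of (measurement index, increase)."""
    alerts = []
    for idx, value in enumerate(follow_ups_cm, start=1):
        increase = value - baseline_cm
        if increase > threshold_cm:
            alerts.append((idx, increase))
    return alerts
```

In the described system, a non-empty result would trigger the warning signal sent to the display module 5.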
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that all fall within the scope of the present disclosure. It should be understood by those skilled in the art that the present description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (11)

1. Thrombus detection and assessment system for venous thrombosis of a lower limb of a bedridden patient, comprising at least a light projecting unit (1) and an imaging unit (2) capable of following the light projecting unit (1), wherein the light projecting unit (1) is capable of directionally projecting marking light onto a limb to be measured and the imaging unit (2) is capable of acquiring images of the part of the limb surface marked by the light projected by the light projecting unit (1), characterized in that
the light projecting unit (1) projects, into the target space occupied by the part of the bedridden patient's lower limb 10 cm-15 cm above and below the patella, marking light that enables the imaging unit (2) to recognize the limb contour;
the marking light emitted by the light projecting unit (1) into the target space can be recorded in a first image, acquired by the imaging unit (2), which contains the position of the patella of the limb to be measured as a reference point for generating a three-dimensional model of the initially set limb region, and a plurality of imaging units (2) are distributed at set spatial positions around the part of the lower limb 10 cm-15 cm above and below the patella of the bedridden patient in such a way that they can synchronously acquire a plurality of first images from a plurality of angles, so that the imaging units (2) can acquire marked images of the surface of a designated section of that part of the lower limb;
the imaging unit (2) transmits the image data it acquires to the processing unit (3), and the processing unit (3) retrieves a pre-trained image processing network to output estimated data including positions in one or more of the first images and generates, by triangulation, three-dimensional models and limb parameters of the part of the lower limb 10 cm-15 cm above and below the patella of the bedridden patient for the period between a 'first time point at which the first images are acquired' and a plurality of 'second time points at which second images are acquired', the three-dimensional models and limb parameters being generated based on the patella of the limb to be measured in the first images and on the estimated data of the positions in one or more of the first images.
2. The thrombus detection and assessment system according to claim 1, wherein said imaging unit (2) transmits one or more first images of the same light scattering region of the limb to be measured, acquired within the same time period, to the processing unit (3), and said processing unit (3) generates the general parameter data of the set part of the limb to be measured by comparing the acquired first images with images stored in the analysis platform (4), which contains several reference images and image parameters including reference points.
3. Thrombus detection and assessment system according to claim 2, characterized in that said imaging unit (2) corrects the first images it records in such a way that it can capture an image with the patella of the limb to be measured as reference point, and calibrates the imaging data of the imaging unit (2) accordingly, so that the acquired first image records a contour that includes the reference point of the limb and the outline of the limb.
4. A thrombus detection evaluation system according to claim 3, wherein the first image recorded by the imaging unit (2) is captured at the time of the first measurement; the processing unit (3) performs triangulation using parameter values of the physical geometry of the imaging unit (2) and the light projecting unit (1), including mutual orientation and displacement, once at least one of the first images has been captured; and the processing unit (3) is capable of transmitting the processed three-dimensional limb model and parameters to the analysis platform (4).
5. The thrombus detection and assessment system according to claim 2, wherein the processing unit (3) is capable of segmenting limb picture elements related to the target position from other picture elements, so as to process a plurality of limb picture elements acquired within the same time period and generate a corresponding three-dimensional limb model.
6. The thrombus detection and assessment system according to claim 4, wherein the light projecting unit (1) and the imaging unit (2) acquire the second images by imaging the patella of the limb and the limb above and below it repeatedly over a plurality of time periods during observation, and the processing unit (3) is capable of performing image processing and picture-element segmentation on the second images acquired within the same time period, thereby generating a three-dimensional model corresponding to the second image data and parameters of the limb.
7. The thrombus detection and assessment system according to claim 6, wherein said processing unit (3) transmits to an analysis platform (4) the three-dimensional limb model and parameters of the first images acquired during the stored initial time period, together with the three-dimensional limb models and parameters of the second images acquired sequentially during subsequent time periods, and the stored data can be presented via a display module (5).
8. The thrombus detection and assessment system according to claim 7, wherein the analysis platform (4) is capable of comparing the sequentially acquired three-dimensional limb models and parameters of the plurality of second images with the three-dimensional limb model and parameters of the first images acquired in the initial period and/or of the second images acquired in the previous period, and of judging the change in the limb above and below the patient's patella.
9. The thrombus detection evaluation system of claim 1, wherein, either before recording the first image or as an initial step of recording the first image, and either before recording the second image or as an initial step of recording the second image:
- a respective scanner setting for recording the images is applied, and respective operating conditions are set for one or both of the light projecting unit (1) and the imaging unit (2), which then operate according to those operating conditions;
wherein the respective operating conditions are the same at least during recording of the first image and during recording of the second image.
10. Thrombus detection and assessment system for venous thrombosis of the lower limb of a bedridden patient, characterized by comprising a light projecting unit (1) for projecting, into the target space occupied by the part of the bedridden patient's lower limb 10 cm-15 cm above and below the patella, marking light that enables an imaging unit (2) to recognize the limb contour, wherein the light projecting unit (1) receives a projection-operation command and projects structured light onto a target area of the surface of that part of the lower limb;
the imaging unit (2) is arranged at a distance from the light projecting unit (1) and at an included angle relative to it, and, in response to the light projecting unit (1) signalling completion of the projection operation, records a first image that at least contains the position of the patella of the limb to be measured as a reference point for generating a three-dimensional model of the initially set limb region;
the imaging unit (2) transmits the image data it acquires to the processing unit (3), and the processing unit (3) retrieves a pre-trained image processing network to output estimated data including positions in one or more of the first images and generates, by triangulation, three-dimensional models and limb parameters of the part of the lower limb 10 cm-15 cm above and below the patella of the bedridden patient for the period between a 'first time point at which the first images are acquired' and a plurality of 'second time points at which second images are acquired', the three-dimensional models and limb parameters being generated based on the patella of the limb to be measured in the first images and on the estimated data of the positions in one or more of the first images.
11. Thrombus detection and assessment system according to claim 10, characterized in that the processing unit (3) is capable of preprocessing the limb image data acquired by said imaging unit (2) in the patient's initial state and of transmitting them to the analysis platform (4), such that said analysis platform (4) selectively sets the received initial image data and the correspondingly generated limb model as the initial baseline for measuring the patient's limb dimensions.
CN202110222887.9A 2021-02-26 2021-02-26 Evaluation system for thrombus detection Active CN113012112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110222887.9A CN113012112B (en) 2021-02-26 2021-02-26 Evaluation system for thrombus detection

Publications (2)

Publication Number Publication Date
CN113012112A CN113012112A (en) 2021-06-22
CN113012112B true CN113012112B (en) 2024-07-02

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114452459B (en) * 2022-03-01 2022-10-18 上海璞慧医疗器械有限公司 Monitoring and early warning system for thrombus aspiration catheter

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110542390A (en) * 2018-05-29 2019-12-06 环球扫描丹麦有限公司 3D object scanning method using structured light

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US5497787A (en) * 1994-08-05 1996-03-12 Nemesdy; Gabor Limb monitoring method and associated apparatus
EP0993805A1 (en) * 1998-10-15 2000-04-19 Medical concept Werbeagentur GmbH System providing prophylaxis against thromboembolism
US20150302594A1 (en) * 2013-07-12 2015-10-22 Richard H. Moore System and Method For Object Detection Using Structured Light
US20160235354A1 (en) * 2015-02-12 2016-08-18 Lymphatech, Inc. Methods for detecting, monitoring and treating lymphedema
FR3038215A1 (en) * 2015-07-03 2017-01-06 Univ Montpellier DEVICE FOR BIOMECHANICAL MEASUREMENT OF VESSELS AND VOLUMETRIC ANALYSIS OF MEMBERS.
DE102016118073A1 (en) * 2016-09-26 2018-03-29 Comsecura Ag Measuring device for determining and displaying the exact size of a thrombosis prophylaxis hosiery and method for determining and displaying the size of the thrombosis prophylaxis hosiery
CN212489887U (en) * 2020-04-30 2021-02-09 厦门中翎易优创科技有限公司 Limb swelling monitoring device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant