
CN107220933B - Reference line determining method and system - Google Patents

Reference line determining method and system

Info

Publication number: CN107220933B
Authority: CN (China)
Legal status: Active
Application number: CN201710329970.XA
Other languages: Chinese (zh)
Other versions: CN107220933A
Inventors: 马庆贞, 王燕燕, 欧阳斌
Assignee: Shanghai United Imaging Healthcare Co Ltd (current and original)
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710329970.XA
Publication of CN107220933A
Application granted
Publication of CN107220933B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images


Abstract

The invention provides a reference line determining method and system. The method comprises the following steps: acquiring at least two images to be stitched, the at least two images to be stitched corresponding to first spatial position information; determining a stitched image based on the at least two images to be stitched, the stitched image corresponding to second spatial position information; determining a stitching relationship between the at least two images to be stitched; and determining at least one reference line based on the stitching relationship between the at least two images to be stitched, the first spatial position information, and the second spatial position information.

Description

Reference line determining method and system
[ Technical Field ]
The invention relates to the field of medical imaging, and in particular to a method and a system for automatically and accurately positioning a reference line on an image stitching result.
[ Background of the Invention ]
In the field of medical imaging, reference lines are of great significance for disease discovery, diagnosis, and treatment. For example, in a computed tomography (CT) image of the whole spine, a user (e.g., a doctor) can use a reference line to accurately determine the size and position of a lesion and its positional relationship with surrounding important tissues. At present, however, the display of reference lines on medical images suffers from inaccurate positioning and inaccurate adjustment. For example, because positions change during the stitching process, a reference line cannot be accurately displayed or adjusted on some medical images, such as magnetic resonance imaging (MRI) images of the spine. This prevents a user (e.g., a doctor) from acquiring the position of the reference line accurately and quickly, reducing the efficiency of diagnosing and treating diseases. There is therefore a need for a reference line positioning method or system that enables accurate display and adjustment of reference line positions in medical images; such a method or system can effectively improve a user's (e.g., a doctor's) efficiency in diagnosing and treating diseases.
[ Summary of the Invention ]
In one aspect of the present application, a reference line determination method for MRI, CT, and/or PET images is provided. The method comprises one or more of the following operations: acquiring at least two images to be stitched, the at least two images to be stitched corresponding to first spatial position information; determining a stitched image based on the at least two images to be stitched, the stitched image corresponding to second spatial position information; determining a stitching relationship between the at least two images to be stitched; and determining at least one reference line based on the stitching relationship between the at least two images to be stitched, the first spatial position information, and the second spatial position information.
Optionally, the at least two images to be stitched include at least one of a CT image, an MRI image, or a PET image.
Optionally, the first spatial position information includes at least one of position information and direction information of the at least two images to be stitched; the second spatial position information includes at least one of position information and direction information of the stitched image.
Optionally, determining the stitching relationship between the at least two images to be stitched includes performing at least one of the following operations on any one or more of the at least two images to be stitched: translating, rotating, scaling, and shearing.
Optionally, the stitching relationship between the at least two images to be stitched includes a registration matrix.
Optionally, determining at least one reference line based on the stitching relationship between the at least two images to be stitched comprises: determining the intersection points between the at least two images to be stitched and the stitched image based on the first spatial position information and the second spatial position information; adjusting the intersection points between the at least two images to be stitched and the stitched image based on the stitching relationship between the at least two images to be stitched; and determining the at least one reference line based on the adjusted intersection points between the at least two images to be stitched and the stitched image.
Optionally, determining, based on the first spatial position information and the second spatial position information, the intersection points between the images to be stitched and the stitched image includes: determining the intersection points based on the planes of the at least two images to be stitched and the plane of the stitched image.
Optionally, determining the at least one reference line comprises: determining a first reference line based on the first spatial position information and the second spatial position information; performing at least one of a translation and a rotation of the first reference line based on an objective function to obtain an adjustment matrix; and correcting the first reference line based on the adjustment matrix to obtain a second reference line.
According to some embodiments of the present application, a reference line determination system for CT, MRI, or PET images is provided. The system may include a computer-readable storage medium configured to store executable modules and a processor capable of executing the executable modules stored by the computer-readable storage medium. The executable modules include an image stitching module and a reference line determination module. The image stitching module is configured to: acquire at least two images to be stitched, the at least two images to be stitched corresponding to first spatial position information; determine a stitched image based on the at least two images to be stitched, the stitched image corresponding to second spatial position information; and determine a stitching relationship between the at least two images to be stitched. The reference line determination module is configured to determine at least one reference line based on the stitching relationship between the at least two images to be stitched, the first spatial position information, and the second spatial position information.
Optionally, the first spatial position information includes at least one of position information and direction information of the at least two images to be stitched; the second spatial position information includes at least one of position information and direction information of the stitched image.
Optionally, the stitching relationship between the at least two images to be stitched includes performing at least one of the following operations on the images to be stitched: translating, rotating, scaling, and shearing.
Optionally, the stitching relationship between the at least two images to be stitched includes a registration matrix.
Optionally, determining at least one reference line according to the stitching relationship between the at least two images to be stitched comprises: determining the intersection points between the at least two images to be stitched and the stitched image based on the first spatial position information and the second spatial position information; adjusting the intersection points between the at least two images to be stitched and the stitched image based on the stitching relationship between the at least two images to be stitched; and determining the at least one reference line based on the adjusted intersection points between the at least two images to be stitched and the stitched image.
Optionally, the determining, based on the first spatial position information and the second spatial position information, of the intersection points between the images to be stitched and the stitched image includes: determining the intersection points based on the planes of the at least two images to be stitched and the plane of the stitched image.
Optionally, determining the at least one reference line includes: determining a first reference line based on the first spatial position information and the second spatial position information; performing at least one of a translation and a rotation of the first reference line based on an objective function to obtain an adjustment matrix; and correcting the first reference line based on the adjustment matrix to obtain a second reference line.
[ description of the drawings ]
FIG. 1 is a schematic diagram of a reference line locating system;
FIG. 2 illustrates an exemplary schematic diagram of a processing device, according to some embodiments of the present application;
FIG. 3 illustrates an exemplary flow chart of reference line location according to some embodiments of the present application;
FIG. 4 illustrates an exemplary flow chart for determining reference lines based on spatial location information and stitching relationships, according to some embodiments of the present application;
FIG. 5 illustrates an exemplary flow chart of reference line correction, according to some embodiments of the present application;
FIG. 6 illustrates an exemplary schematic of reference line adjustment, according to some embodiments of the present application;
FIG. 7 illustrates the architecture of a computer device with which the processing device and certain systems disclosed herein can be implemented, according to some embodiments of the present application;
FIG. 8 illustrates a configuration of a mobile device that can be used to implement certain systems disclosed herein, in accordance with some embodiments of the present application; and
FIGS. 9A-9C illustrate exemplary schematics of the results of fine positioning of the reference line, according to some embodiments of the present application.
[ Detailed Description ]
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
As used in this application and the appended claims, the singular forms "a," "an," and "the" include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
FIG. 1 illustrates a schematic diagram of a reference line locating system, according to some embodiments of the present application. The reference line may refer to the intersection between a second medical image and a first medical image, displayed on the first medical image. The reference line is significant for describing the position of a lesion, determining a treatment scheme, and selecting interventional treatment of a tumor.
The reference line location system 100 may include one or more imaging devices 110, one or more networks 120, one or more processing devices 130, and one or more databases 140.
The imaging device 110 may scan a detected object to obtain scan data, which may be sent to the processing device 130 via the network 120 for further processing, or stored in the database 140. The detected object may include a human body, an animal, and the like. The imaging device 110 may include, but is not limited to, a computed tomography (CT) device, a magnetic resonance imaging (MRI) device, or a positron emission tomography (PET) device.
The processing device 130 may process and analyze input data (e.g., scan data or scan images obtained by the imaging device 110 and/or stored in the database 140) to generate processing results. For example, the processing device 130 may generate a scan image from the scan data. As another example, the processing device 130 may segment the scan image to obtain image segmentation results. The scan image may be a two-dimensional image or a three-dimensional image. The processing device 130 may include a processor and input/output devices (not shown). In some embodiments, the processor may be a server or a group of servers. A group of servers may be centralized, such as a data center, or distributed, such as a distributed system. The processor can be one or a combination of a cloud server, a file server, a database server, an FTP server, an application server, a proxy server, a mail server, and the like. The server may be local or remote. In some embodiments, the server may access information stored in the database 140 (e.g., medical images stored in the database 140) and information in the imaging device 110 (e.g., medical images taken by the imaging device 110). In some embodiments, the input/output devices may input data to the processor, or may receive data output by the processor and represent the output data in the form of numbers, characters, images, sounds, and the like. The input devices may include, but are not limited to, one or a combination of character input devices (e.g., a keyboard), optical reading devices (e.g., an optical mark reader or optical character reader), graphic input devices (e.g., a mouse, joystick, or light pen), image input devices (e.g., a camera, scanner, or fax machine), analog input devices (e.g., a speech analog-to-digital conversion recognition system), and the like. The output devices may include, but are not limited to, one or a combination of display devices, printing devices, plotters, image output devices, voice output devices, magnetic recording devices, and the like. In some embodiments, the processing device 130 may further include a storage device (not shown) that may store various information, such as programs and data. In some embodiments, intermediate data and/or processing results (e.g., scan images, image segmentation results, etc.) generated by the processing device 130 may be stored in the database 140 and/or the storage device of the processing device 130, or may be output via the input/output devices.
The database 140 (a data storage device) may generally refer to a device having a storage function. The database 140 may store scan data collected from the imaging device 110 and various data generated in the operation of the processing device 130. The database 140 may be local or remote. The database 140 may include, but is not limited to, one or a combination of a hierarchical database, a network database, a relational database, and the like. The database 140 may digitize information and store it in a storage device that uses electrical, magnetic, or optical means. The database 140 may be used to store various information, such as programs and data. The database 140 may be a device that stores information using electric energy, such as various memories, e.g., random access memory (RAM) or read-only memory (ROM). Random access memory may include one or a combination of a decimal counting tube, a selectron tube, a delay-line memory, a Williams tube, dynamic random access memory (DRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), and the like. Read-only memory may include, but is not limited to, one or a combination of bubble memory, magnetic button line memory, thin-film memory, magnetic plated wire memory, magnetic-core memory, magnetic drum memory, optical disc drives, hard disks, magnetic tape, early nonvolatile memory (NVRAM), phase-change memory, magnetoresistive random access memory, ferroelectric random access memory, nonvolatile SRAM, flash memory, electrically erasable programmable ROM, erasable programmable ROM, mask ROM, floating-gate random access memory, nano-RAM, racetrack memory, resistive random access memory, programmable metallization cells, and the like. The database 140 may be a device that stores information using magnetic energy, such as a hard disk, a floppy disk, a magnetic tape, a magnetic-core memory, or a bubble memory. The database 140 may be a device that stores information optically, such as a CD or DVD, or magneto-optically, such as a magneto-optical disk. Access to the database 140 may be one or a combination of random access, serial access, read-only access, and the like. The database 140 may be non-persistent memory or persistent memory. The above-mentioned storage devices are only examples, and the databases that can be used in the reference line locating system 100 are not limited thereto.
The network 120 may be a single network or a combination of networks. Network 120 may include, but is not limited to, one or a combination of local area networks, wide area networks, public networks, private networks, wireless local area networks, virtual networks, metropolitan area networks, public switched telephone networks, and the like. Network 120 may include a variety of network access points, such as wired or wireless access points, base stations, or network switching points, through which data sources connect to network 120 and transmit information through the network.
It should be noted that the above description of the system is provided merely for convenience of description and is not intended to limit the present application to the scope of the illustrated embodiments. It will be understood by those skilled in the art that, after understanding the principle of the system, various modifications and changes in form and detail may be made to the fields in which the above method and system are applied, for example by combining the modules in any fashion or by connecting modules with other modules to form a subsystem, without departing from this principle. For example, in some embodiments, the database 140 may be a cloud computing platform with data storage functionality, including but not limited to a public cloud, a private cloud, a community cloud, and a hybrid cloud. Such variations are within the scope of the present application.
FIG. 2 illustrates an exemplary schematic diagram of the processing device 130, according to some embodiments of the present application. The processing device 130 may include an image stitching module 210, a reference line determination module 220, and a reference line correction module 230. The modules described in FIG. 2 may be implemented by a computer, such as the one in FIG. 7, through its CPU unit 720. The modules may be directly (and/or indirectly) connected to each other. It should be apparent that the processing device 130 illustrated in FIG. 2 represents only some embodiments of the present application, and that modifications, additions, and deletions may be made by those skilled in the art without inventive effort based on the description of FIG. 2. For example, two of the modules may be combined into one module, or one of the modules may be divided into two or more modules.
In some embodiments, the image stitching module 210 may generate one or more stitched images based on the images to be stitched. The image stitching module 210 may acquire the images to be stitched from the imaging device 110. The acquired images to be stitched may include, but are not limited to, one or more medical images (e.g., CT images, MRI images, PET images, etc.). The images to be stitched may be two-dimensional or three-dimensional images. The images to be stitched may include, but are not limited to, raw images and/or processed images. A raw image may refer to an image (e.g., a medical image) directly obtained from the scan data. A processed image may refer to an image obtained by processing a raw image. The processing of the raw image may include, but is not limited to, one or a combination of image enhancement, image reorganization, three-dimensional reconstruction, image filtering, image encoding (e.g., compression encoding), image format conversion, image rendering, image scaling, and the like.
The number of acquired images to be stitched can be two or more. In some embodiments, several images to be stitched may be generated by scanning the same detection object (e.g., a human body or a portion of a human body) at different angles. In some embodiments, the imaging device 110 may generate multiple image segments based on multiple different angles. Each image segment may comprise at least one image to be stitched, and those images to be stitched have related or similar spatial position information. For example, the imaging device 110 may capture the whole spine at 3 different angles to acquire 3 image segments: image segment A (e.g., a cervical image segment), image segment B (e.g., a thoracic image segment), and image segment C (e.g., a lumbar image segment). The number of images to be stitched contained in image segment A, image segment B, and image segment C can be the same or different.
The stitched image may be a two-dimensional image or a three-dimensional image. For example, the stitched image may be a two-dimensional image showing the entire spine of the human body. As another example, the stitched image may be a three-dimensional image showing a human liver. In some embodiments, the stitched images may be generated based on images to be stitched of the same modality or images to be stitched of different modalities. For example, the stitched images may be generated based on medical images of different modalities (e.g., MRI images, CT images, PET images, etc.). The medical images of the same modality (or medical images of different modalities) may correspond to the same or different test objects and/or different portions of the same test object. For example, the stitched image of the whole spine of the human body can be formed by stitching an MRI image (such as a cervical vertebra image segment) and a CT image (such as a lumbar vertebra image segment).
In some embodiments, the reference line determination module 220 may determine the reference line. The reference line may be used to show the positional relationship between another image and the current reference image, displayed on the reference image. The reference line may include one or more straight lines, line segments, curves, points, etc., and may contain any number of pixels. In some embodiments, the reference line is represented on the reference image by a straight line, a dashed line, a line segment, or the like. The reference image may be a stitched image or an image to be stitched. For example, in a CT scan of the skull, the imaging device 110 may generate transverse, sagittal, or coronal images of the skull. The reference line determination module 220 may determine and/or display one or more reference lines on the transverse image, which may be used to show the positional relationship between a coronal image and the current transverse image (as shown in FIG. 9A). A transverse image may represent an image in which the detection object (e.g., a human body standing on the ground) is cut, perpendicular to the sagittal and coronal images, into upper and lower parts. A sagittal image may represent an image in which the detection object is cut longitudinally, along its front-rear direction, into left and right parts. A coronal image may represent an image in which the detection object is cut, along its left-right direction, into front and rear parts.
In some embodiments, the reference line determination module 220 may determine and present, on the stitched image, a plurality of reference lines relating the images to be stitched to the current stitched image. For example, on a stitched image of the whole spine, the reference line determination module 220 may display 5 reference lines. The reference lines can show the positional relationship between the stitched image of the whole spine and 5 transverse images (e.g., transverse images of 5 lumbar vertebrae). A user (e.g., a doctor) can use the 5 reference lines to determine the exact position on the whole spine of the transverse image of a particular lumbar vertebra (e.g., that the transverse image of the fractured lumbar vertebra belongs to the 2nd lumbar vertebra), facilitating diagnosis and subsequent treatment.
In some embodiments, the reference line correction module 230 may correct the position of the reference line. In some embodiments, the reference line determination module 220 may present the reference lines corresponding to the images to be stitched on the stitched image. During generation of the stitched image, the positional relationship between the images to be stitched may change, making the positions of the corresponding reference lines inaccurate. For example, a reference line shown on a stitched image of the whole spine may exceed the upper and lower range of the spine (e.g., lie higher than the 1st cervical vertebra), or the angle between the reference line and the spine may not match the actual scanning angle (e.g., the reference line of a transverse image of the spine may be parallel or nearly parallel to the direction of the spine), and so on. The reference line correction module 230 may correct the reference line position manually or automatically.
In some embodiments, a user (e.g., a doctor) may input a correction instruction via the input/output component 760 or the input/output (I/O) unit 850. The correction instruction may include translation, rotation, addition, deletion, etc. of the reference line. The reference line correction module 230 may obtain the correction instruction and correct the reference line accordingly. In some embodiments, the user (e.g., a doctor) may preset rules for reference line correction (e.g., a parallel spacing between reference lines greater than 5 mm or another threshold, a particular thickness of the reference line, a particular length of the reference line, etc.) via the network 120, the computer 700, or the mobile device 800. Based on the reference line correction rules, the reference line correction module 230 may automatically correct the reference line accordingly.
FIG. 3 illustrates an exemplary flow chart of reference line location according to some embodiments of the present application. The process 300 may be implemented by one or more hardware, software, firmware, etc., and/or combinations thereof. In some embodiments, the flow 300 may be implemented by one or more processing devices (e.g., the processing device 130 shown in fig. 1) and/or computing devices (e.g., the computers shown in fig. 7-8) running the image stitching module 210.
In step 310, the image stitching module 210 may acquire at least two images to be stitched. The images to be stitched may be scanned images. The scan image may include, but is not limited to, a CT image, an MRI image, or a PET image. The scanned image may be a two-dimensional image or a three-dimensional image. The scanned image may include, but is not limited to, a raw image and/or a processed image. The original image may refer to an image directly obtained from the scan data. The processed image may refer to an image obtained by processing an original image. The processing of the original image may include, but is not limited to, one or a combination of image enhancement, image reorganization, three-dimensional reconstruction, image filtering, image encoding (e.g., compression encoding), image format conversion, image rendering, image scaling, and the like.
Image enhancement may mean increasing the contrast of the whole or part of an image. For example, in an MRI image of the spine, the contrast between the spine and the peripheral nerves or soft tissues can be increased, so that an imaging technician or doctor can quickly and easily recognize the boundary of the spine. As another example, for a craniocerebral MRI image, the contrast of brain tissue at certain foci (e.g., epileptic foci) or in important functional areas can be increased, so that a doctor can conveniently determine the resection range of an operation, removing the lesion as completely as possible while reducing damage to normal brain tissue, especially in important functional areas. In some embodiments, MRI image enhancement may include one or a combination of contrast enhancement, noise removal, background removal, edge sharpening, filtering, and wavelet transformation.
Image reorganization may refer to generating an image of an arbitrary slice from an existing MRI scan image. Three-dimensional reconstruction may refer to, for example, reconstructing a three-dimensional image of the liver from two-dimensional scan images of the liver. As an example of format conversion, the image stitching module 210 may convert an MRI image of the spine from the DICOM (Digital Imaging and Communications in Medicine) format to the Visualization Toolkit image format (VTI format).
Image encoding, which may also be referred to as image compression, may represent an image, or the information contained in an image, with a smaller number of bits while satisfying a certain image quality (e.g., signal-to-noise ratio). Image rendering may refer to converting high-dimensional information into low-dimensional information, e.g., three-dimensional information into two-dimensional information.
In some embodiments, the image stitching module 210 may acquire scan data from the imaging device 110 and reconstruct an original image from the scan data. The MRI reconstruction methods available to the image stitching module 210 may include K-space-filling-based MRI reconstruction methods and image-domain-based MRI reconstruction methods. The K-space-filling-based MRI reconstruction methods may include half-Fourier imaging, SMASH imaging, AUTO-SMASH imaging, and the like. An image-domain-based MRI reconstruction method can use certain a priori knowledge of the MRI image to reconstruct it; such methods can reduce data scanning time and accelerate the MRI imaging process. An image-domain-based MRI reconstruction method may reconstruct an image according to the sensitivity information that differs between coils, or according to the sparsity of the MRI image (for example, reconstructing an MRI image using a compressed sensing method).
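For illustration only (not part of the claimed method), the following is a minimal Python sketch of the simplest image-domain reconstruction, an inverse two-dimensional Fourier transform of fully sampled Cartesian k-space; the array size and the random k-space content are hypothetical:

```python
import numpy as np

def reconstruct_from_kspace(kspace: np.ndarray) -> np.ndarray:
    """Reconstruct a magnitude image from fully sampled Cartesian k-space.

    This is the simplest image-domain reconstruction, an inverse 2-D FFT;
    methods such as half-Fourier or SMASH imaging add steps on top of it.
    """
    # k-space conventionally stores the DC component at the center, so shift
    # it to the corner layout that np.fft expects, transform, and shift back.
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(image)

# Toy usage with a synthetic 256 x 256 complex k-space array.
kspace = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
print(reconstruct_from_kspace(kspace).shape)  # (256, 256)
```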
In step 320, the image stitching module 210 may obtain spatial position information of the images to be stitched. The spatial position information may include three-dimensional coordinate information, two-dimensional coordinate information, spatial position information in a specific image format, and the like, of the images to be stitched. The spatial position information of the images to be stitched can be obtained from image data in any image format. The image format may include, but is not limited to, the DICOM format, the VTI format, and the like. The image data may contain one or more logical levels. In its physical structure, an image data file may comprise a header and a data set. The header may include a preamble (e.g., 128 bytes) and a prefix (e.g., 4 bytes). The preamble may have fixed or non-fixed content; for example, in some embodiments, its bytes are set to 00H when there is no content, and the prefix is the string 'DICM', which identifies a DICOM file. The data set holds all of the information necessary to manipulate the DICOM file and consists of a number of data elements. Each data element comprises four fields, Tag, Value Representation, Value Length, and Value Field, which store the format definition and the content of the element information.
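For illustration, a minimal Python sketch that checks the physical structure described above, a 128-byte preamble followed by the 4-byte prefix 'DICM'; the file name in the usage comment is hypothetical:

```python
def is_dicom_file(path: str) -> bool:
    """Check for the 128-byte preamble followed by the magic bytes b'DICM'."""
    with open(path, "rb") as f:
        preamble = f.read(128)  # often all 00H bytes, but not required
        prefix = f.read(4)
    return len(preamble) == 128 and prefix == b"DICM"

# Hypothetical usage:
# print(is_dicom_file("spine_slice_001.dcm"))
```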
The spatial position information of the images to be stitched obtained by the image stitching module 210 may include position information of the images to be stitched (e.g., Image Position (Patient) in DICOM), direction information of the images to be stitched (e.g., Image Orientation (Patient) in DICOM), and other information related to spatial position. The position information of the images to be stitched may include coordinate information related to the images to be stitched (e.g., the coordinates of the pixel in the first row and first column (e.g., the upper-left corner) of an image to be stitched). The coordinate information may comprise one or more coordinates in an arbitrary coordinate system. In some embodiments, the position information of the images to be stitched may comprise one or more coordinates in a specific coordinate system. For example, the position information of an image to be stitched may be represented as O1(x1, y1, z1).
The direction information of the images to be stitched may include any information related to direction in the images to be stitched. For example, the direction information of an image to be stitched may include direction information (one or more directions, vectors representing the directions, trigonometric function values corresponding to the one or more directions, and the like) corresponding to one or more parts (e.g., the first row of pixels and/or the first column of pixels) of the image to be stitched. For example, the direction information of an image to be stitched may include the direction vector of its first row and the direction vector of its first column. As another example, the direction information of an image to be stitched may be represented as its normal vector. In some embodiments, the normal vector is the cross product of the direction vector of the first row and the direction vector of the first column.
In some embodiments, the plane of an image to be stitched can be expressed as equation (1):
(x - x1)nx + (y - y1)ny + (z - z1)nz = 0 (1)
where (x1, y1, z1) is the position information of the image to be stitched, and (nx, ny, nz) is its direction information (e.g., its normal vector).
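For illustration, a minimal Python (NumPy) sketch deriving the coefficients of equation (1): the normal vector is computed as the cross product of the first-row and first-column direction vectors, as described above, and the plane passes through the upper-left pixel position; the numeric values are hypothetical:

```python
import numpy as np

def plane_from_geometry(position, row_dir, col_dir):
    """Build the plane of a slice as in equation (1).

    position: (x1, y1, z1), the upper-left pixel coordinate
              (DICOM Image Position (Patient)).
    row_dir, col_dir: direction vectors of the first row and first column
              (DICOM Image Orientation (Patient)).
    Returns (normal, d) such that the plane is normal . p + d = 0.
    """
    normal = np.cross(row_dir, col_dir)  # (nx, ny, nz)
    normal = normal / np.linalg.norm(normal)
    d = -np.dot(normal, position)        # expands (x - x1)nx + ... = 0
    return normal, d

# Hypothetical axial slice whose upper-left pixel sits at (-100, -100, 30).
n, d = plane_from_geometry(
    position=np.array([-100.0, -100.0, 30.0]),
    row_dir=np.array([1.0, 0.0, 0.0]),
    col_dir=np.array([0.0, 1.0, 0.0]),
)
print(n, d)  # [0. 0. 1.] -30.0  (i.e., the plane z - 30 = 0)
```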
In step 330, the image stitching module 210 may determine a stitched image based on the at least two images to be stitched. The images to be stitched may include a first image to be stitched, a second image to be stitched, and so on. The image stitching module 210 may determine the stitched image by performing operations such as translation, rotation, scaling, or shearing on the images to be stitched. For example, the image stitching module 210 may set any one of the images to be stitched as a fixed reference image (e.g., the first image to be stitched) and perform translation, scaling, and/or rotation operations on the other images to be stitched with respect to the fixed reference image.
In 330, the image stitching module 210 may obtain spatial location information of the stitched image. The spatial position information may include three-dimensional coordinate information, two-dimensional coordinate information, or spatial position information in a specific image format of the stitched image.
The spatial position information of the stitched image obtained by the image stitching module 210 may include position information of the stitched image (e.g., Image Position (Patient) in DICOM), direction information of the stitched image (e.g., Image Orientation (Patient) in DICOM), and the like. The position information of the stitched image may comprise coordinate information associated with the stitched image (e.g., the coordinates of the pixel in the first row and first column (e.g., the upper-left corner) of the stitched image). The coordinate information may comprise one or more coordinates in an arbitrary coordinate system. In some embodiments, the position information of the stitched image may comprise one or more coordinates in a particular coordinate system. For example, the position information of the stitched image may be represented as O2(x2, y2, z2).
The direction information of the stitched image may comprise any direction-related information in the stitched image. For example, the direction information of the stitched image may include direction information (one or more directions, vectors representing the directions, trigonometric function values corresponding to the one or more directions, etc.) corresponding to one or more portions (e.g., the first row of pixels and/or the first column of pixels) of the stitched image. For example, the direction information of the stitched image may include the direction vector of its first row and the direction vector of its first column. As another example, the direction information of the stitched image may be represented as the normal vector of the stitched image. In some embodiments, the normal vector is the cross product of the direction vector of the first row and the direction vector of the first column.
In some embodiments, the plane of the stitched image may be expressed as equation (2):
(x - x2)Nx + (y - y2)Ny + (z - z2)Nz = 0 (2)
where (x2, y2, z2) is the position information of the stitched image, and (Nx, Ny, Nz) is its direction information (e.g., its normal vector).
In some embodiments, the image stitching module 210 may determine the stitched image based on a stitching algorithm. The stitching algorithms may include region-correlation-based stitching algorithms and feature-based stitching algorithms. A region-correlation-based stitching algorithm may use a least-squares method to calculate the difference in gray values between the images to be stitched; judge the degree of similarity of the overlapping area of the images to be stitched based on the difference in gray values; determine the range and position of the overlapping area between the images to be stitched based on the similarity; and finally generate the stitched image, as sketched below. A feature-based stitching algorithm may include feature extraction and feature registration. Feature matching algorithms may include cross-correlation, distance transformation, dynamic programming, structure matching, chain-code correlation, and the like.
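For illustration, a minimal Python (NumPy) sketch of the region-correlation idea, scoring candidate vertical overlaps between two gray-value images with a least-squares (sum of squared differences) criterion; real stitching algorithms also normalize intensities and search horizontal shifts and rotations:

```python
import numpy as np

def best_vertical_overlap(img_a: np.ndarray, img_b: np.ndarray,
                          min_overlap: int = 10) -> int:
    """Return the overlap height (in rows) between the bottom of img_a and
    the top of img_b that minimizes the mean squared gray-value difference."""
    best_rows, best_score = min_overlap, np.inf
    for rows in range(min_overlap, min(img_a.shape[0], img_b.shape[0])):
        diff = img_a[-rows:, :].astype(float) - img_b[:rows, :].astype(float)
        score = np.mean(diff ** 2)  # mean keeps scores comparable across sizes
        if score < best_score:
            best_score, best_rows = score, rows
    return best_rows
```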
In step 340, the reference line determination module 220 may determine a stitching relationship between the images to be stitched (e.g., the first image to be stitched and the second image to be stitched). The stitching relationship may be used to represent the positional relationship between the images to be stitched established in determining the stitched image. The positional relationship between the images to be stitched may include one or a combination of translation, rotation, scaling, or shearing. In some embodiments, in a CT scan of the whole spine, the image stitching module 210 may select the first image to be stitched as a fixed reference image and stitch the images based on a stitching algorithm. For example, the image stitching module 210 may adjust the position of the second image to be stitched with respect to the fixed reference image (the first image to be stitched), and may also adjust the size, direction, and the like of the second image to be stitched. During the adjustment process, the reference line determination module 220 may determine and/or store the positional relationship between the images to be stitched (e.g., between the first image to be stitched and the second image to be stitched). The positional relationship between the images to be stitched can be represented as a registration matrix. In some embodiments, the registration matrix may be a 4 x 4 matrix:

    | a11 a12 a13 a14 |
    | a21 a22 a23 a24 |
    | a31 a32 a33 a34 |
    | a41 a42 a43 a44 |

where the first three rows of the registration matrix may represent an affine transformation, including rotation, translation, and shear, and the fourth row may represent a projective transformation.
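For illustration, a minimal Python (NumPy) sketch of a 4 x 4 registration matrix of this form, here a rigid rotation about the Z axis plus a translation (the values are hypothetical), applied to a point in homogeneous coordinates:

```python
import numpy as np

theta = np.deg2rad(5.0)
registration = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  2.0],  # rows 1-3: affine part
    [np.sin(theta),  np.cos(theta), 0.0, -1.5],  # (rotation + translation)
    [0.0,            0.0,           1.0,  0.0],
    [0.0,            0.0,           0.0,  1.0],  # row 4: projective part
])

def transform_point(matrix: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply a 4 x 4 registration matrix to a 3-D point in homogeneous form."""
    ph = matrix @ np.append(p, 1.0)
    return ph[:3] / ph[3]  # divide by w; w == 1 for purely affine matrices

print(transform_point(registration, np.array([10.0, 0.0, 0.0])))
```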
In 350, the reference line determination module 220 may determine the reference line based on the relative spatial position information between the images to be stitched and the stitching relationship. In some embodiments, the reference line determination module 220 may determine the reference line directly based on the spatial position information of the first image to be stitched, the spatial position information of the second image to be stitched, and the stitching relationship of the second image to be stitched to the first image to be stitched. For example, the reference line determination module 220 may determine a reference line directly on the first image to be stitched based on the position information of the first image to be stitched, the position information of the second image to be stitched, and the registration matrix (e.g., a 4 x 4 matrix) between the first image to be stitched and the second image to be stitched; the reference line may correspond to the second image to be stitched.
In some embodiments, the reference line determination module 220 may determine an initial reference line on the stitched image based on the spatial position information of the images to be stitched and the spatial position information of the stitched image, and then determine a corrected reference line based on the initial reference line and the stitching relationship. For example, in a CT image of the whole spine, the reference line determination module 220 may determine an initial reference line based on the three-dimensional coordinates of the images to be stitched (e.g., the three-dimensional coordinates of the upper-left pixel) and the spatial position information of the stitched image, and further determine the position of the corrected reference line, or display it, based on the initial reference line and the registration matrix between the images to be stitched.
FIG. 4 illustrates an exemplary flow chart for determining reference lines based on spatial location relationships and stitching relationships, according to some embodiments of the present application. Flow 400 may be implemented by one or more hardware, software, firmware, etc., and/or combinations thereof. In some embodiments, the flow 400 may be implemented by one or more processing devices (e.g., the processing device 130 shown in fig. 1) and/or computing devices (e.g., the computers shown in fig. 7-8) running the image stitching module 210.
In 410, the reference line determination module 220 may determine the intersection points between the image to be stitched and the stitched image. In some embodiments, the reference line determination module 220 may determine the intersection points based on the spatial information of the image to be stitched and the spatial information of the stitched image. The intersection points between the image to be stitched and the stitched image can be determined from the planes, or plane equations, of the stitched image and the image to be stitched. In some embodiments, the reference line determination module 220 may determine the plane or plane equation of the stitched image based on the spatial position information (e.g., position information or direction information) of the stitched image. The reference line determination module 220 may likewise determine the plane or plane equation of the image to be stitched based on the spatial position information (e.g., position information or direction information) of the image to be stitched. The plane equation of the stitched image and/or of the image to be stitched may include one or a combination of the intercept form, the point-normal form, the general form, the normal form, and the like. For example, the reference line determination module 220 may acquire the upper-left pixel coordinate O1(x1, y1, z1) of the image to be stitched based on its position information, acquire its normal vector (dx, dy, dz) based on its direction information, and determine the point-normal plane equation of the image to be stitched from the pixel coordinate O1(x1, y1, z1) and the normal vector (dx, dy, dz).
In some embodiments, the intersection points between the image to be stitched and the stitched image may be determined by intersecting the two planes, i.e., by solving the two plane equations simultaneously. In some embodiments, the reference line correction module 230 may calculate the solution of the plane equation of the image to be stitched and the plane equation of the stitched image, thereby determining the intersection points between the two. If no intersection exists between the image to be stitched and the stitched image, the reference line correction module 230 may determine that the two are in parallel positions, or process them otherwise.
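For illustration, a minimal Python (NumPy) sketch of solving the two plane equations simultaneously: the direction of the intersection line is the cross product of the two plane normals, a point on the line is the minimum-norm solution of the two equations, and a near-zero cross product indicates the parallel case:

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2, eps=1e-9):
    """Intersect the planes n1 . p + d1 = 0 and n2 . p + d2 = 0.

    Returns (point, unit direction) parameterizing the intersection line,
    or None when the planes are parallel (no reference line exists).
    """
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < eps:
        return None  # parallel planes must be handled separately
    # Any point satisfying both plane equations lies on the line; take the
    # minimum-norm least-squares solution of the 2 x 3 linear system.
    A = np.vstack([n1, n2])
    b = -np.array([d1, d2])
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point, direction / np.linalg.norm(direction)

# Hypothetical example: the plane z = 30 intersected with the plane y = 0.
print(plane_intersection(np.array([0.0, 0.0, 1.0]), -30.0,
                         np.array([0.0, 1.0, 0.0]),   0.0))
```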
In 420, the reference line correction module 230 may adjust the intersection positions based on the stitching relationship between the images to be stitched. In some embodiments, the reference line correction module 230 may adjust the intersection points based on the registration relationship between the images to be stitched to obtain adjusted intersection positions. For example, the stitching relationship between the first image to be stitched (the fixed reference image) and the second image to be stitched may be represented by a registration matrix (e.g., a 4 x 4 matrix) or stored in the database 140. The reference line correction module 230 may calculate the solution of the plane equation corresponding to the second image to be stitched and the plane equation corresponding to the stitched image, thereby determining the intersection points between the stitched image and the second image to be stitched; the intersection points of the second image to be stitched are then adjusted on the stitched image based on the registration matrix (e.g., the 4 x 4 matrix). As another example, if the second image to be stitched was shifted by 5 units in the positive direction of the X axis relative to the first image to be stitched (the fixed reference image) during the stitching process, the reference line correction module 230 may, after determining the intersection points of the stitched image and the second image to be stitched, shift them by 5 units in the negative direction to obtain the adjusted intersection positions.
In 430, the reference line correction module 230 may determine the reference line position based on the adjusted intersection positions. In some embodiments, there may be multiple intersection points (e.g., 2) between the image to be stitched and the stitched image. For example, the reference line correction module 230 may determine 2 intersection points on two opposite parallel sides of the plane of the stitched image based on the plane of the image to be stitched and the four sides of the plane of the stitched image. The positions of the 2 intersection points may be adjusted in 420 to determine 2 adjusted intersection positions. Based on the 2 adjusted intersection points on the stitched image, the reference line correction module 230 may connect them to generate a line segment. The line segment can serve as the reference line corresponding to the image to be stitched on the stitched image.
In some embodiments, there may be two or more intersection points between the image to be stitched and the stitched image. For example, the reference line correction module 230 may determine 100 intersection points on the stitched image based on the plane equation of the image to be stitched and the plane equation of the stitched image; after the 100 intersection points are adjusted, the reference line correction module 230 may keep some of the adjusted intersection points on the stitched image and delete the others. Which adjusted intersection points are kept may be based on the configured length or area of the reference line displayed on the stitched image, or the like. For example, if the system 100 sets the length of the reference line to 5 cm, the reference line correction module 230 may delete intersection points beyond 5 cm and determine a line segment as the reference line based on the remaining intersection points.
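For illustration, a minimal Python (NumPy) sketch of trimming the adjusted intersection points to a configured reference line length before forming the displayed segment; the 50 mm (5 cm) limit and the point values are hypothetical:

```python
import numpy as np

def trim_to_length(points: np.ndarray, max_len: float = 50.0):
    """Keep adjusted intersection points within max_len of their centroid and
    return the two extreme kept points as the segment endpoints.

    Assumes the points are (approximately) collinear, as intersections of two
    planes are.
    """
    center = points.mean(axis=0)
    kept = points[np.linalg.norm(points - center, axis=1) <= max_len / 2.0]
    direction = kept[-1] - kept[0]  # rough direction along the line
    t = kept @ direction            # scalar position of each point on the line
    return kept[np.argmin(t)], kept[np.argmax(t)]

# Hypothetical: 100 collinear points spaced 1 mm apart, trimmed to 50 mm.
pts = np.array([[0.0, float(i), 30.0] for i in range(100)])
print(trim_to_length(pts, max_len=50.0))
```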
FIG. 5 illustrates an exemplary flow chart of reference line correction, according to some embodiments of the present application. Flow 500 may be implemented by one or more hardware, software, firmware, etc., and/or combinations thereof. In some embodiments, the flow 500 may be implemented by one or more processing devices (e.g., the processing device 130 shown in fig. 1) and/or computing devices (e.g., the computers shown in fig. 7-8) running the image stitching module 210.
In 510, the reference line correction module 230 may acquire at least two images to be stitched and determine a stitched image based on the at least two images to be stitched. The images to be stitched may be scanned images. The scan image may include, but is not limited to, a CT image, an MRI image, or a PET image. The scanned image may be a two-dimensional image or a three-dimensional image.
In some embodiments, the images to be stitched may comprise a plurality of image segments; each image segment may include at least one image to be stitched, and those images to be stitched have the same or similar spatial position information. For example, the imaging device 110 may capture the whole spine at 3 different angles and acquire 3 image segments: image segment A (e.g., a cervical image segment), image segment B (e.g., a thoracic image segment), and image segment C (e.g., a lumbar image segment); image segment A, image segment B, and image segment C may comprise, for example, 50, 40, and 55 images to be stitched, respectively.
In generating the stitched image, the image stitching module 210 may determine the stitching relationship in units of image segments. For example, a whole-spine CT image may contain 3 image segments: image segment A, image segment B, and image segment C. Each image segment contains at least one image to be stitched, and the images to be stitched in the same image segment have similar spatial position information, for example, the same plane direction or scanning mode (e.g., image segment A may be a transverse scan, image segment B a sagittal scan, or image segment C a coronal scan). In the stitching process, the image stitching module 210 may use image segment A as a fixed reference image segment and perform adjustments such as translation, rotation, scaling, or shearing on image segment B or image segment C to obtain the stitched image.
In 520, the reference line determination module 220 may determine and/or display the first reference line based on the spatial position information of the images to be stitched and the spatial position information of the stitched image. The first reference line may be an initial reference line, or a reference line adjusted based on the initial reference line and the stitching relationship. In some embodiments, the reference line determination module 220 may determine the planes of the images to be stitched and the plane of the stitched image directly from this spatial position information, determine the intersection points between the planes, and directly determine a line segment or straight line as the first reference line based on the intersection points. For example, the reference line determination module 220 may determine a first reference line for the transverse image of the 4th thoracic vertebra on the whole-spine stitched image based on the plane of that transverse image and the plane of the whole-spine stitched image; on the stitched image, the first reference line passes through the 4th thoracic vertebra perpendicular, or nearly perpendicular, to the direction of the spine.
In some embodiments, the reference line determination module 220 may determine the planes of the images to be stitched and the plane of the stitched image based on their spatial position information, determine the intersection points between the planes, and thereby determine the initial reference line. The reference line determination module 220 may adjust the position of the initial reference line based on the stitching relationship and determine the position of the first reference line based on the adjusted initial reference line. For example, the reference line determination module 220 may adjust the initial reference line determined from the planes of the images to be stitched and the plane of the stitched image based on the registration matrix (e.g., the 4 x 4 matrix), generating an adjusted initial reference line, and then determine the first reference line based on the adjusted initial reference line.
At 530, the reference line calibration module 230 may perform a calibration operation on the first reference line based on an objective function and obtain an adjustment matrix. The correction operation may include one or a combination of translation, rotation, scaling, or shearing of the first reference line. The adjustment matrix may be used to represent a correction operation on the first reference line. In some embodiments, the adjustment matrix may be stored in database 140 or network 120. In some embodiments, in the three-dimensional space, if the plane of the image to be stitched and the plane of the stitched image are represented by homogeneous coordinates, the adjustment matrix corresponding to the first reference line may include a 4 × 4 matrix:
$$m=\begin{pmatrix}a_{11} & a_{12} & a_{13} & a_{14}\\a_{21} & a_{22} & a_{23} & a_{24}\\a_{31} & a_{32} & a_{33} & a_{34}\\a_{41} & a_{42} & a_{43} & a_{44}\end{pmatrix}$$

wherein the upper 3 × 4 submatrix (the rows of $a_{1j}$ through $a_{3j}$) can be used to represent rotation, translation, scaling, or shearing, and the last row $(a_{41}\ a_{42}\ a_{43}\ a_{44})$ can be used to represent projective transformations.
In 540, the reference line correction module 230 may correct the first reference line based on the adjustment matrix and obtain a second reference line. In some embodiments, the reference line correction module 230 may obtain a second reference line based on the adjustment matrix, which may be expressed as equation (3):
Y’=mY (3)
wherein Y' represents a second reference line; y denotes a first reference line which has not undergone a correction operation; and m represents the adjustment matrix in three-dimensional space, which is a 4 x 4 matrix.
In some embodiments, the second reference line may be displayed on the computer 700 or, via the network 120, on the mobile device 800. For example, for an MRI scan image of the whole spine, the processing device 130 may display the stitched image and the second reference line on the mobile device 800 (e.g., a smartphone) of a user (e.g., a surgeon) through the network 120, enabling remote diagnosis and treatment. As another example, for a CT scan image of the skull, the processing device 130 may send the stitched image and the second reference line to the cloud via the network 120; a user (e.g., a patient) may then retrieve them from the cloud and display them on a smartphone, receiving the image information remotely.
FIG. 6 illustrates an exemplary flow 600 of reference line adjustment, according to some embodiments of the present application. The flow 600 may be implemented by hardware, software, firmware, etc., and/or a combination thereof. In some embodiments, the flow 600 may be implemented by one or more processing devices (e.g., the processing device 130 shown in FIG. 1) and/or computing devices (e.g., the computers shown in FIGS. 7-8) running the image stitching module 210.
In 610, the processing device 130 may acquire at least two MRI images (e.g., a first MRI image, a second MRI image, etc.). The at least two MRI images may relate to any scanned object; for example, they may include an MRI scan image of the whole spine of a human body, an MRI scan image of the skull, and the like. In some embodiments, an MRI scan image of the whole spine may include at least three image segments: a cervical image segment, a thoracic image segment, and a lumbar image segment. Each image segment comprises one or more MRI images to be stitched, and the MRI images within a segment share the same or a similar stitching relationship in the subsequent stitching process. In some embodiments, the at least two MRI images may be generated by scans at different times; for example, they may include a pre-operative spinal MRI image and a post-operative spinal MRI image.
In 620, the processing device 130 may acquire spatial position information for each MRI image. The spatial position information may include three-dimensional coordinate information, two-dimensional coordinate information, or the spatial position information carried in a specific image format of an MRI image to be stitched. In some embodiments, the spatial position information of the stitched image obtained by the image stitching module 210 may include position information (e.g., image position information in the DICOM format), orientation information (e.g., image orientation information in the DICOM format), and the like. In some embodiments, the processing device 130 may obtain spatial position information for different image segments. For example, in an MRI scan image of the whole spine, the processing device 130 may obtain spatial position information of the cervical, thoracic, and lumbar image segments, so as to determine the position change of each segment after stitching with respect to a fixed reference segment.
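When the spatial position information comes from DICOM headers, it is typically carried by the Image Position (Patient) and Image Orientation (Patient) tags. A minimal sketch of reading them with the pydicom package (the file name is a placeholder):

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("slice_0001.dcm")  # placeholder file name

# (0020,0032) Image Position (Patient): x, y, z of the first voxel, in mm
position = np.array(ds.ImagePositionPatient, dtype=float)

# (0020,0037) Image Orientation (Patient): row/column direction cosines
orientation = np.array(ds.ImageOrientationPatient, dtype=float)
row_dir, col_dir = orientation[:3], orientation[3:]

# The slice's plane normal, used when intersecting planes later on.
normal = np.cross(row_dir, col_dir)
```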
In 630, the processing device 130 may determine a stitched image based on a stitching algorithm. The stitching algorithm may include region-correlation-based stitching algorithms and feature-based stitching algorithms. A region-correlation-based stitching algorithm may compute the difference of the gray values of the images to be stitched using a least-squares method; judge the degree of similarity of the overlapping region of the images to be stitched based on that difference; determine the extent and position of the overlapping region between the images to be stitched based on the similarity; and finally generate the stitched image. A feature-based stitching algorithm may include feature extraction and feature matching; the feature matching may use cross-correlation, distance transformation, dynamic programming, structure matching, chain-code correlation, etc. For example, using a region-correlation-based stitching algorithm, the processing device 130 may generate a stitched image from at least two MRI images or a plurality of image segments; the stitched image may be a coronal image of the whole spine.
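The region-correlation idea can be prototyped as a least-squares (sum-of-squared-differences) scan over candidate overlaps. The following simplified sketch assumes two equal-width 2-D images that only need a vertical shift; it illustrates the criterion, not the disclosed algorithm, and all names are chosen here for illustration:

```python
import numpy as np

def best_overlap(top, bottom, min_overlap=10, max_overlap=100):
    """Return the overlap height (in rows) minimizing the least-squares
    gray-value difference between the bottom rows of `top` and the top
    rows of `bottom`."""
    best_h, best_err = min_overlap, np.inf
    for h in range(min_overlap, max_overlap + 1):
        diff = top[-h:, :].astype(float) - bottom[:h, :].astype(float)
        err = np.mean(diff ** 2)          # least-squares criterion
        if err < best_err:
            best_h, best_err = h, err
    return best_h

def stitch(top, bottom):
    """Join two vertically overlapping images, averaging the overlap."""
    h = best_overlap(top, bottom)
    blend = (top[-h:, :].astype(float) + bottom[:h, :].astype(float)) / 2
    return np.vstack([top[:-h, :], blend.astype(top.dtype), bottom[h:, :]])
```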
In 640, the processing device 130 may determine a registration matrix between the MRI images based on the stitched image. In some embodiments, the processing device 130 may determine the stitching relationship of the other MRI images or image segments (e.g., the thoracic or lumbar image segment) relative to one MRI image or image segment taken as a fixed reference (e.g., the cervical image segment). For example, the processing device 130 may use the first MRI image as the fixed reference. The stitching relationship may be represented as a registration matrix. In some embodiments, during stitching in three-dimensional space, the processing device 130 may determine a pre-stitching plane and a post-stitching plane for each MRI image and determine the corresponding registration matrix based on the positional relationship of each MRI image with respect to the fixed reference image. In some embodiments, if the planes of the MRI images are represented in homogeneous coordinates in three-dimensional space, the registration matrix corresponding to the stitching operation may be represented as a 4 × 4 matrix, as in equation (4):
P’=nP (4)
wherein, P' represents the MRI image after the splicing operation; p represents an MRI image that has not undergone a stitching operation; and n represents the registration matrix in three-dimensional space, being a 4 x 4 matrix.
In 650, the processing device 130 may determine a first reference line based on the registration matrix and the spatial position relationship, that is, the first reference line of an MRI image on the full-spine stitched image. In some embodiments, the processing device 130 may determine a plane equation of the MRI image and a plane equation of the full-spine stitched image, and from these two plane equations determine the intersection between the MRI image and the full-spine stitched image. For example, a first plane equation may be determined based on the spatial position information (e.g., position information and/or orientation information) of the second MRI image in the cervical image segment; a second plane equation may be determined based on the spatial position information of the full-spine stitched image; and from the first and second plane equations the processing device 130 may determine the intersection between the second MRI image in the cervical image segment and the full-spine stitched image. In some embodiments, the processing device 130 may take this intersection directly as the first reference line on the full-spine stitched image. In some embodiments, the processing device 130 may further adjust the position of the intersection based on the registration matrix and determine the first reference line from the adjusted intersection.
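Two non-parallel planes, each written as n · x = d, meet in a straight line, and this intersection underlying the first reference line can be computed in closed form. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersect the planes n1.x = d1 and n2.x = d2.
    Returns (point_on_line, unit_direction), or None if parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.allclose(direction, 0.0):
        return None                       # parallel planes: no line
    # Three equations pin one point: the two plane equations, plus a
    # constraint that the point has no component along the direction.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```

With the DICOM orientation sketch above, n would be the cross product of the row and column direction cosines and d the dot product of n with the image position.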
In 660, the processing device 130 may obtain an adjustment matrix for the first reference line. In some embodiments, a user (e.g., a physician) may manually adjust the position of the first reference line. The manual adjustment may include translation, rotation, scaling, or shearing of the first reference line. The user may enter manual-adjustment instructions via the input/output component 760 of the computer 700, and the processing device 130 may receive those instructions and carry out the adjustment of the first reference line position. In some embodiments, the manual adjustment may further include one or a combination of deleting, adding, and setting the format of the first reference line; for example, the physician may enter instructions via the input/output component 760 to delete one or more first reference lines. In some embodiments, the processing device 130 may automatically adjust the position of the first reference line based on a user setting, for example, based on an objective function set by the user. In some embodiments, the processing device 130 may determine an adjustment matrix based on the adjustment of the first reference line position. The adjustment matrix may be a 4 × 4 matrix and may be stored in the database 140 or, via the network 120, in the cloud.
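Each such operation corresponds to an elementary 4 × 4 homogeneous matrix, and a sequence of operations composes by matrix multiplication into the single adjustment matrix of equation (3). A sketch under the assumption of a rotation about the z axis (helper names are illustrative):

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(theta):
    """4x4 homogeneous rotation about the z axis (theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# A 2-degree rotation followed by a 3 mm shift along y composes into
# one matrix m, playing the role of the adjustment matrix above.
m = translation(0.0, 3.0, 0.0) @ rotation_z(np.radians(2.0))
```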
In 670, the processing device 130 may update the position of the first reference line based on the adjustment matrix, generating a second reference line. In some embodiments, the user may choose whether to perform the reference line update. For example, the user may choose, via the input/output component 760, to update the first reference line; the processing device 130 may then send the adjustment matrix to the computer 700 or the mobile device 800 and, based on the adjustment matrix, update the position of the first reference line to generate a second reference line. The second reference line can display the positional relationship between the MRI image and the full-spine stitched image more accurately, facilitating the user's (e.g., a doctor's) diagnosis and treatment of disease.
FIG. 7 is an architecture of a computer device capable of implementing certain systems disclosed herein, according to some embodiments of the present application. The particular system in this embodiment is illustrated by a functional block diagram of a hardware platform containing a user interface. Such a computer may be a general-purpose or a special-purpose computer; either may implement the particular system of this embodiment. The computer 700 may implement any of the presently described components that provide the information needed for the on-demand service. For example, the processing device 130 may be implemented by a computer such as the computer 700 through its hardware devices, software programs, firmware, and combinations thereof. For convenience, only one computer is depicted in FIG. 7, but the functions described in this embodiment may be implemented in a distributed manner by a set of similar platforms, distributing the processing load of the system.
The computer 700 may include a communication port 750 connected to a network for data communication. The computer 700 may also include a central processing unit (CPU) system, comprised of one or more processors, for executing program instructions. The exemplary computer platform includes an internal communication bus 710 and various forms of program and data storage, such as a hard disk 770, read-only memory (ROM) 730, and random-access memory (RAM) 740, holding various data files used in computer processing and/or communication, as well as program instructions executed by the CPU. The computer 700 may also include an input/output component 760 supporting input/output data flow between the computer and other components, such as the user interface 780. The computer 700 may also receive programs and data over the communication network.
FIG. 8 is a block diagram of a mobile device capable of implementing certain systems disclosed herein, according to some embodiments of the present application. In this example, the user device for displaying and interacting with location-related information is a mobile device 800, which may include, but is not limited to, a smartphone, a tablet computer, a music player, a portable game console, a global positioning system (GPS) receiver, a wearable computing device (e.g., glasses or a watch), or another form factor. The mobile device 800 in this example may include one or more central processing units (CPUs) 840, a graphics processing unit (GPU) 830, a display unit 820, a memory 860, an antenna 810 (e.g., a wireless communication unit), a storage unit 890, and one or more input/output (I/O) units 850. Any other suitable components, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 800. As shown in FIG. 8, a mobile operating system 870, such as iOS, Android, or Windows Phone, and one or more applications 880 may be loaded from the storage unit 890 into the memory 860 and executed by the central processing unit 840. The applications 880 may include a browser or another mobile application suitable for receiving and processing reference-line-related information on the mobile device 800. User interaction with the reference-line-related information may be obtained via the I/O unit 850 and provided to the processing device 130 and/or other components of the system 100, for example through the network 120.
To implement the various modules, units, and functionality described in the foregoing disclosure, a computer hardware platform may serve as the hardware platform for one or more of the elements described above (e.g., the processing device 130 and/or other components of the system 100). The hardware elements, operating systems, and programming languages of such computers are conventional, and it is assumed that those skilled in the art are sufficiently familiar with them to use the techniques described herein to provide the information needed for on-demand services. A computer containing user interface elements can be used as a personal computer (PC) or another type of workstation or terminal device and, suitably programmed, can also be used as a server. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer devices, so the figures require no additional explanation.
FIGS. 9A-9C illustrate exemplary reference line positioning results, according to some embodiments of the present application. FIG. 9A is a transverse image of the skull; FIG. 9B is a sagittal image of the skull; and FIG. 9C is a coronal image of the skull. In FIG. 9A, 910 is a reference line determined by the processing device 130 based on the transverse image and the coronal image of the skull; it shows the intersection of the coronal image of the skull (as shown in FIG. 9C) on the transverse image. In FIG. 9B, 920 is a reference line determined by the processing device 130 based on the sagittal image and the coronal image of the skull; it shows the intersection of the coronal image of the skull (as shown in FIG. 9C) on the sagittal image.
The foregoing outlines various aspects of a method of providing the information needed for on-demand services and/or of implementing other steps by a program. Program portions of the technology may be regarded as "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied in or carried by a computer-readable medium. Tangible, non-transitory storage media include the memory or storage of any computer, processor, or similar device or associated module, such as semiconductor memories, tape drives, disk drives, or any other device capable of providing storage functions for software.

All or part of the software may at times communicate over a network, such as the Internet or another communication network. Such communication allows software to be loaded from one computer device or processor to another, for example from a management server or host computer of the on-demand service system to the hardware platform of a computing environment, or to another computing environment implementing the system or similar functions related to providing the information needed for on-demand services. Accordingly, another kind of medium capable of carrying software elements is the physical connection between local devices, such as optical, electrical, or electromagnetic waves propagating through cables, optical fibers, or the air. The physical media used for such carrier waves, such as electric, wireless, or optical cables, may also be regarded as media carrying the software. As used herein, unless limited to a tangible "storage" medium, terms referring to a computer- or machine-"readable medium" refer to any medium that participates in the execution of instructions by a processor.

Thus, a computer-readable medium may take many forms, including but not limited to tangible storage media, carrier-wave media, and physical transmission media. Stable storage media include optical or magnetic disks and other storage used by computers or similar devices capable of implementing the system components described in the figures. Volatile storage media include dynamic memory, such as the main memory of a computer platform. Tangible transmission media include coaxial cables, copper cables, and optical fibers, including the wires forming the bus within a computer system. Carrier-wave transmission media may convey electrical, electromagnetic, acoustic, or light-wave signals, such as those generated by radio-frequency or infrared data communication. Common computer-readable media thus include hard disks, floppy disks, magnetic tape, and any other magnetic medium; CD-ROMs, DVD-ROMs, and any other optical medium; punch cards and any other physical storage medium containing a pattern of holes; RAM, PROM, EPROM, FLASH-EPROM, and any other memory chip or cartridge; carrier waves carrying data or instructions, cables or links carrying such carrier waves, and any other medium from which a computer can read program code and/or data. Many of these media take the form of instructions carried to a processor for execution.
Those skilled in the art will appreciate that various modifications and improvements may be made to the disclosure herein. For example, the different system components described above may be implemented by hardware devices, but may also be implemented by software-only solutions, such as installing the system on an existing server. Further, the provision of the location information disclosed herein may be implemented by firmware, a firmware/software combination, a firmware/hardware combination, or a hardware/firmware/software combination.

Although the present invention has been described with reference to the preferred embodiments, they are not intended to limit the invention; those skilled in the art may make variations and modifications using the methods and technical content disclosed above without departing from the spirit and scope of the invention.

Claims (10)

1. A reference line determination method, comprising:

acquiring at least two images to be stitched, the at least two images to be stitched corresponding to first spatial position information;

determining a stitched image based on the at least two images to be stitched, the stitched image corresponding to second spatial position information;

determining a stitching relationship between the at least two images to be stitched;

determining an intersection between the at least two images to be stitched and the stitched image based on the first spatial position information and the second spatial position information;

adjusting the intersection between the at least two images to be stitched and the stitched image based on the stitching relationship between the at least two images to be stitched; and

determining at least one reference line based on the adjusted intersection between the at least two images to be stitched and the stitched image.
2. The reference line determination method of claim 1, wherein the first spatial position information comprises at least one of position information and orientation information of the at least two images to be stitched, and the second spatial position information comprises at least one of position information and orientation information of the stitched image.

3. The reference line determination method of claim 1, wherein determining the stitching relationship between the at least two images to be stitched comprises performing at least one of the following operations on the images to be stitched: translation; rotation; scaling; and shearing.

4. The reference line determination method of claim 1, wherein the stitching relationship between the at least two images to be stitched comprises a registration matrix.

5. The reference line determination method of claim 1, wherein determining the intersection between the images to be stitched and the stitched image based on the first spatial position information and the second spatial position information comprises: determining the intersection based on the planes of the at least two images to be stitched and the plane of the stitched image.

6. The reference line determination method of claim 1, wherein determining the at least one reference line comprises: determining a first reference line based on the first spatial position information and the second spatial position information; performing at least one of translation and rotation on the first reference line based on an objective function to obtain an adjustment matrix; and correcting the first reference line based on the adjustment matrix to obtain a second reference line.
7. A reference line determination system, comprising a processor and a memory storing computer instructions which, when executed by the processor, implement the steps of the method of any one of claims 1-6.
8. A reference line determination system, comprising: a computer-readable storage medium configured to store executable modules, including:

an image stitching module configured to: acquire at least two images to be stitched, the at least two images to be stitched corresponding to first spatial position information; determine a stitched image based on the at least two images to be stitched, the stitched image corresponding to second spatial position information; and determine a stitching relationship between the at least two images to be stitched;

a reference line determining module configured to: determine an intersection between the at least two images to be stitched and the stitched image based on the first spatial position information and the second spatial position information; adjust the intersection between the at least two images to be stitched and the stitched image based on the stitching relationship between the at least two images to be stitched; and determine at least one reference line based on the adjusted intersection between the at least two images to be stitched and the stitched image; and

a processor capable of executing the executable modules stored by the computer-readable storage medium.
9. The reference line determination system of claim 8, wherein determining the intersection between the at least two images to be stitched and the stitched image based on the first spatial position information and the second spatial position information comprises: determining the intersection based on the planes of the at least two images to be stitched and the plane of the stitched image.

10. The reference line determination system of claim 8, wherein determining the at least one reference line comprises: determining a first reference line based on the first spatial position information and the second spatial position information; performing at least one of translation and rotation on the first reference line based on an objective function to obtain an adjustment matrix; and correcting the first reference line based on the adjustment matrix to obtain a second reference line.
CN201710329970.XA 2017-05-11 2017-05-11 Reference line determining method and system Active CN107220933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710329970.XA CN107220933B (en) 2017-05-11 2017-05-11 Reference line determining method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710329970.XA CN107220933B (en) 2017-05-11 2017-05-11 Reference line determining method and system

Publications (2)

Publication Number Publication Date
CN107220933A CN107220933A (en) 2017-09-29
CN107220933B true CN107220933B (en) 2021-09-21

Family

ID=59944109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710329970.XA Active CN107220933B (en) 2017-05-11 2017-05-11 Reference line determining method and system

Country Status (1)

Country Link
CN (1) CN107220933B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11854683B2 (en) 2020-01-06 2023-12-26 Carlsmed, Inc. Patient-specific medical procedures and devices, and associated systems and methods
US12127769B2 (en) 2020-11-20 2024-10-29 Carlsmed, Inc. Patient-specific jig for personalized surgery
US12133803B2 (en) 2019-11-29 2024-11-05 Carlsmed, Inc. Systems and methods for orthopedic implants

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11166764B2 (en) * 2017-07-27 2021-11-09 Carlsmed, Inc. Systems and methods for assisting and augmenting surgical procedures
WO2020056186A1 (en) 2018-09-12 2020-03-19 Carlsmed, Inc. Systems and methods for orthopedic implants
CN109658334B (en) * 2018-12-18 2023-06-30 北京易道博识科技有限公司 Ancient book image stitching method and device
WO2020133468A1 (en) * 2018-12-29 2020-07-02 Zhejiang Dahua Technology Co., Ltd. Methods and systems for camera calibration
CN111460871B (en) * 2019-01-18 2023-12-22 北京市商汤科技开发有限公司 Image processing method and device and storage medium
CN110658674B (en) * 2019-09-16 2022-09-09 忆备缩微科技(北京)有限公司 Method and device for outputting electronic file to microfilm
US11376076B2 (en) 2020-01-06 2022-07-05 Carlsmed, Inc. Patient-specific medical systems, devices, and methods
CN111991015B (en) * 2020-08-13 2024-04-26 上海联影医疗科技股份有限公司 Three-dimensional image stitching method, device, equipment, system and storage medium
CN111563214B (en) * 2020-04-29 2023-05-16 北京字节跳动网络技术有限公司 Reference line processing method and device
CN111753230B (en) * 2020-06-12 2023-12-19 北京字节跳动网络技术有限公司 Reference line processing method and device
CN112492197B (en) * 2020-11-18 2022-01-07 京东科技信息技术有限公司 Image processing method and related equipment
CN114170075A (en) * 2021-10-26 2022-03-11 北京东软医疗设备有限公司 Image splicing method, device and equipment
US11443838B1 (en) 2022-02-23 2022-09-13 Carlsmed, Inc. Non-fungible token systems and methods for storing and accessing healthcare data
US11806241B1 (en) 2022-09-22 2023-11-07 Carlsmed, Inc. System for manufacturing and pre-operative inspecting of patient-specific implants
US11793577B1 (en) 2023-01-27 2023-10-24 Carlsmed, Inc. Techniques to map three-dimensional human anatomy data to two-dimensional human anatomy data
CN116485893B (en) * 2023-04-23 2024-02-23 创新奇智(上海)科技有限公司 Method, system, equipment and medium for measuring article placement position

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2313223T3 (en) * 2005-10-06 2009-03-01 Medcom Gesellschaft Fur Medizinische Bildverarbeitung Mbh RECORD OF 2D ULTRASONID IMAGE DATA AND 3-D PICTURE DATA OF AN OBJECT.
WO2011039672A1 (en) * 2009-09-29 2011-04-07 Koninklijke Philips Electronics N.V. Generating composite medical images
JP5538868B2 (en) * 2009-12-22 2014-07-02 キヤノン株式会社 Image processing apparatus, image processing method and program
NL1038336C2 (en) * 2010-10-27 2012-05-01 Giezer B V METHOD FOR DISPLAYING A DIGITAL 2D REPRESENTATION OF AN OBJECT IN PERSPECTIVE AND SCALE TO A DISPLAYED DIGITAL 2D REPRESENTATION OF A 3D SPACE, FOR A COMPUTER READABLE MEDIA WITH PROGRAM COCODE, A COMPUTER SYSTEM FOR THE IMPROVEMENT SYSTEM OF THE IMPROVEMENT SYSTEM COMPUTER SYSTEM.
CN102436665A (en) * 2011-08-25 2012-05-02 清华大学 Two-dimensional plane representation method for images of alimentary tract
CN103871036B (en) * 2012-12-12 2017-11-28 上海联影医疗科技有限公司 Rapid registering and joining method for three dimensional digital subtraction angiography image
CN104268846B (en) * 2014-09-22 2017-08-22 上海联影医疗科技有限公司 Image split-joint method and device


Also Published As

Publication number Publication date
CN107220933A (en) 2017-09-29

Similar Documents

Publication Publication Date Title
CN107220933B (en) Reference line determining method and system
US11657509B2 (en) Method for precisely and automatically positioning reference line for integrated images
JP7221421B2 (en) Vertebral localization method, device, device and medium for CT images
CN112967236B (en) Image registration method, device, computer equipment and storage medium
US8150132B2 (en) Image analysis apparatus, image analysis method, and computer-readable recording medium storing image analysis program
CN111063424B (en) Intervertebral disc data processing method and device, electronic equipment and storage medium
US11276490B2 (en) Method and apparatus for classification of lesion based on learning data applying one or more augmentation methods in lesion information augmented patch of medical image
CN110598696B (en) Medical image scanning and positioning method, medical image scanning method and computer equipment
US9053541B2 (en) Image registration
CN113989407B (en) Training method and system for limb part recognition model in CT image
CN110634554A (en) Spine image registration method
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN113129418A (en) Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN113963037A (en) Image registration method and device, computer equipment and storage medium
WO2020173054A1 (en) Vrds 4d medical image processing method and product
US12089976B2 (en) Region correction apparatus, region correction method, and region correction program
Zhang et al. A spine segmentation method under an arbitrary field of view based on 3d swin transformer
CN111613300B (en) Tumor and blood vessel Ai processing method and product based on VRDS 4D medical image
CN111613302A (en) Tumor Ai processing method and product based on VRDS4D medical image
CN113393500B (en) Spine scanning parameter acquisition method, device, equipment and storage medium
US20230064516A1 (en) Method, device, and system for processing medical image
US20230342994A1 (en) Storage medium, image identification method, image identification device
CN116109660A (en) Contour segmentation method and related device based on ultrasonic image
US20180120400A1 (en) Processing mri data
CN117422747A (en) Method, device, equipment and storage medium for determining initial space attitude parameters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258
Applicant after: Shanghai Lianying Medical Technology Co., Ltd
Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258
Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.
GR01 Patent grant