US7561725B2 - Image segmentation in a three-dimensional environment - Google Patents
- Publication number
- US7561725B2 (application US 10/796,864)
- Authority
- US
- United States
- Prior art keywords
- image
- point
- segmentation
- rendered
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires 2026-01-15
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
Definitions
- Medical image data, for example, is typically obtained in the form of slices through various types of imaging modalities. These slices are then stacked to form a three-dimensional (“3D”) volume.
- An interactive 2D segmentation tool, which has been referred to as an intelligent scissors, facilitates user delineation of 2D regions of interest.
- While an intelligent scissors is a very convenient tool for a 2D slice, it is not very appealing if a user needs to go through hundreds or thousands of slices. Sometimes significant features are not revealed in the direction of the original scanned slices, but show more clearly in other directions. User interactive efficiency and shape fidelity would benefit if the user had tools to outline the significant features along that more revealing direction.
- a method, apparatus and program storage device for image segmentation in a three-dimensional environment are provided for receiving scan data, selecting a viewing vector relative to the scan data, rendering the scan data as a 3D image about the viewing vector, displaying the rendered 3D image, selecting a range of 2D image slices within the 3D image, performing 2D segmentation on the selected slices relative to the viewing vector to obtain a segmented 3D object, and displaying the segmented 3D object.
- the present disclosure teaches an apparatus and method for image segmentation in a three-dimensional environment, in accordance with the following exemplary figures, in which:
- FIG. 1 shows a flowchart for image segmentation in a three-dimensional environment in accordance with an illustrative embodiment of the present disclosure
- FIG. 2 shows a graphical diagram of a 3D rendered octant view in accordance with FIG. 1 ;
- FIG. 3 shows graphical diagrams of 2D segmentation in accordance with FIG. 1 ;
- FIG. 4 shows graphical diagrams of 2D segmentation in accordance with FIG. 1 ;
- FIG. 5 shows a graphical diagram of 2D segmentation in accordance with FIG. 1 ;
- FIG. 6 shows graphical diagrams of 2D segmentation in the optional 2½D mode in accordance with FIG. 1 .
- Preferred embodiments of the present disclosure provide a system in which users can apply the livewire tool on 2D slices directly in a 3D environment.
- users can draw regions of interest directly on a 3D view instead of on a separate 2D view, thereby eliminating the burden of switching between a 3D view and 2D views.
- In this 3D view, several 2D slices can be rendered together so that users can get a better sense of the 3D space.
- This 3D environment is also designed to let users switch the working slice easily so that they can extract the salient features of the data in a reduced amount of time.
- the exemplary description that follows provides a livewire tool, a 3D working environment, and integration of the livewire tool into the 3D working environment.
- the basic working scenario of the livewire tool is that a user selects an anchor point on an image, and then freely moves the mouse to other positions on the image.
- the livewire tool automatically provides a path from the anchor point to the current mouse position. This path generally falls along the borders of regions inside the image, which is usually what the user desires, thereby eliminating the need for very careful drawing by the user along the boundaries. Since the path changes according to the movement of the mouse position, it looks as if the wire is active. Accordingly, the term “livewire” was coined to describe this tool. Users can fix a portion of the path by clicking a new anchor point. By clicking just a few anchor points, users can outline a very complicated shape from the image.
- the underlying algorithm for livewire finds the boundary definition via dynamic programming, and formulates it as a graph-searching problem.
- the goal is to find the optimal path between a start node and a set of goal nodes.
- the optimality is defined as the minimum cumulative cost path from a start point to a goal point.
- the cumulative cost of a path is the sum of local edge costs on the path.
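This minimum-cumulative-cost formulation is, in essence, a single-source shortest-path search over the pixel graph. The sketch below illustrates one such search in the style of Dijkstra's algorithm; the 8-connected grid, the localCost callback, and all names are illustrative assumptions rather than details taken from the patent.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Illustrative livewire path search: minimum cumulative cost from an anchor
// pixel (ax, ay) to the current mouse pixel (gx, gy) on an 8-connected grid.
// localCost is an assumed callback giving the local edge cost at a pixel.
std::vector<std::pair<int,int>> livewirePath(
    int w, int h, int ax, int ay, int gx, int gy,
    const std::function<double(int,int)>& localCost)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(static_cast<size_t>(w) * h, INF);
    std::vector<int> prev(static_cast<size_t>(w) * h, -1);
    using Node = std::pair<double,int>;                 // (cumulative cost, pixel index)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;

    dist[ay * w + ax] = 0.0;
    pq.push({0.0, ay * w + ax});
    while (!pq.empty()) {
        auto [d, i] = pq.top(); pq.pop();
        if (d > dist[i]) continue;                      // stale queue entry
        if (i == gy * w + gx) break;                    // reached the goal pixel
        int x = i % w, y = i / w;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                if (dx == 0 && dy == 0) continue;
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                int j = ny * w + nx;
                double nd = d + localCost(nx, ny);      // sum of local edge costs
                if (nd < dist[j]) { dist[j] = nd; prev[j] = i; pq.push({nd, j}); }
            }
    }
    std::vector<std::pair<int,int>> path;               // backtrack goal -> anchor
    for (int i = gy * w + gx; i != -1; i = prev[i])
        path.push_back({i % w, i / w});
    return path;
}
```

A common livewire design choice (not specified here) is to run this search once from the anchor so that prev covers every pixel; each subsequent mouse move then only backtracks from the new goal pixel, which is what makes the wire respond instantly.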
- a manual overwriting mechanism is integrated into the livewire tool.
- a user can drag the mouse (that is, move the mouse while pressing a mouse button) to draw the curve freehand, without livewire activated, at some troublesome area.
- Once the mouse button is released, the livewire tool is turned back on. In this way, a user can draw a contour very quickly using the combination of livewire and free-hand drawing, without the need to press other GUI buttons to switch the automatic mode on and off, for example.
- an exemplary 3D working environment compares favorably with many advanced 3D visualization packages or workstations.
- the 3D working environment has three orthogonal slices that divide the 3D space into eight sections. Hence, this view is referred to as an Octant View (“OV”).
- the flowchart 100 of FIG. 1 includes a start block 110 leading to an input block 112 for receiving scan data.
- the input block 112 leads to a function block 114 for selecting a viewing vector.
- the function block 114 leads to a function block 116 for rendering the scan data as a 3D image about the viewing vector.
- the function block 116 leads to a function block 118 for displaying the rendered 3D image to a user.
- the function block 118 leads to a decision block 120 for changing the viewing vector. If the viewing vector is to be changed, control is passed back to the function block 114 .
- the decision block 120 passes control to a function block 122 for selecting a slice range.
- the function block 122 leads to a function block 124 for performing 2D segmentation on the current slice relative to the viewing vector.
- the function block 124 passes control to a decision block 126 for determining whether there are more slices left in the selected slice range. If there are more slices, control is passed back to the function block 124 . However, if all slices from the selected range have been segmented, control is passed to a function block 128 for displaying the segmented 3D object.
- the function block 128 passes control to an end block 130 .
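For orientation, the control flow of flowchart 100 can be restated as a short driver routine. The skeleton below is only a structural paraphrase of the blocks above; every type and helper function is an assumed placeholder, not an API from the patent.

```cpp
#include <utility>
#include <vector>

// Structural restatement of flowchart 100. Every type and helper below is an
// assumed placeholder for the rendering/segmentation subsystem.
struct ScanData    { /* stacked 2D slices from the scanner */ };
struct Vector3     { double x, y, z; };
struct Contour     { std::vector<std::pair<double,double>> points; };
struct Segmented3D { std::vector<Contour> sliceContours; };

Vector3 selectViewingVector();                                  // block 114
void    renderAndDisplay(const ScanData&, const Vector3&);      // blocks 116 and 118
bool    changeViewingVector();                                  // decision block 120
std::pair<int,int> selectSliceRange();                          // block 122
Contour segment2D(const ScanData&, int slice, const Vector3&);  // block 124

Segmented3D runSegmentation(const ScanData& scan) {
    Vector3 view;
    do {                                                  // loop 114 -> 120
        view = selectViewingVector();
        renderAndDisplay(scan, view);
    } while (changeViewingVector());

    auto [first, last] = selectSliceRange();
    Segmented3D object;
    for (int s = first; s <= last; ++s)                   // loop 124 -> 126
        object.sliceContours.push_back(segment2D(scan, s, view));
    // block 128: display the segmented 3D object, then end (block 130)
    return object;
}
```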
- Turning to FIG. 2 , an octant view screen shot is indicated generally by the reference numeral 200 .
- the OV provides three orthogonal planes, each one perpendicular to one of the coordinate axes.
- Each plane has three display modes: active, inactive, and invisible.
- When a plane is in the active mode, the appearance is a gray-scale image accompanied by a full bright border.
- When a plane is in the inactive mode, the appearance is a transparent image together with a dashed border.
- When a plane is in the invisible mode, only the dashed border is shown.
- Each plane can be independently set to one of the three modes. When one plane is in the active mode and another plane is in the inactive mode, a user can still see the part of the active plane that is behind the inactive plane and have a good sense of the 3D space.
- the graphical interface is designed with the octant view occupying most of the window space and three icon images lying on the right hand side.
- the three icon images give the user the traditional 2D slice views. They also serve as in-scene controls, so that the user can click on an icon image to switch its display mode in the octant view.
- the positions of the icon images can be re-arranged according to application requirements.
- the OV scene is initialized as looking at the origin from a distant point in the first octant.
- the coordinate system is the popular right-handed coordinate system with the Z-axis pointing upwards.
- the whole scene of the OV can be rotated using a mouse-dragging action, for example.
- the rotation angle of the z plane is preferably restricted to -60 to +60 degrees. This makes the z-axis always point upwards so that the user will not lose the sense of the 3D view after several rotations. However, the user can still see any part of a 3D shape, since the rotation angles cover all of the desirable view directions, except small portions at the north and south poles, respectively.
- This design is consistent with many existing Computer Aided Design (“CAD”) packages, which usually provide the user a “ground level” with which to work so that the user will not get disoriented in the 3D space.
- One of the attractive features of the OV is that it is very easy to move or switch to other planes. This is especially useful in the livewire application.
- Low-resolution transient images are shown along with the dragging so that the user can know roughly where to stop.
- To switch to an image along a different axis, the user just needs to click on that plane. No other control needs to be invoked.
- Reduced mouse travel distance: many in-scene controls are provided, which make the interface more intuitive to operate and greatly reduce mouse travel distance. When a user needs to work on an interface for a long period of time, reducing unnecessary mouse movement increases productivity, reduces distraction, and improves ergonomics.
- Visual cues help to match the viewing dimension with the input device dimension. Since the octant view space is a 3D space, it has three degrees of freedom.
- the exemplary input device is an ordinary mouse, which has two degrees of freedom.
- With visual cues such as a sliding bar, the movement can be limited to only one dimension, and the input and output devices can be matched to each other.
- With visual feedback, a user can also know exactly which state she is in and respond accordingly.
- Other visual feedback includes changing cursor shape to reflect the current feature. For example, the cursor can be changed to an hourglass if the user needs to wait. Progressive display results can also provide the user some clues about the status of the process.
- Intuitive control provides the user better manipulation of the interface. When the user wants to move the scene to the right, she simply moves the mouse to the right. When the user wants to move forward, she simply moves the mouse forward from herself. When the user wants to move backward, she will try to pull the mouse back and toward herself.
- the intuitive design matches the user's natural action to her intent. If this match is not met, a user might feel awkward in manipulation. Visual cues also have an important impact on intuitive control. For example, if a sliding bar is presented to the user, running from the lower left corner to the upper right corner, the user will intuitively move the mouse in that diagonal fashion. The internal design matches this action to correctly reflect the activity on the screen.
- In-scene control: the interface design lets the user do things in one scene, minimizing the travel distance of the mouse and avoiding button or menu clicking.
- In-scene graphical tabs are included for performing various tasks. When the mouse cursor is moved over a graphical tab, the cursor's functionality changes to that of the particular tab. This kind of design gives users a better visual indication of the current status of the cursor, and avoids back-and-forth mouse travel when selecting different modes from a menu or task bar, for example.
- two triangle grabbing tabs are included on two opposite corners of a plane. When the cursor clicks on top of one of the tabs, a sliding bar appears, which guides the user to drag along restricted directions to translate the plane.
- Segmentation tools are usually applied to a 2D image shown in a rectangular or square region on the screen.
- Exemplary embodiments of the livewire tool lift this restriction and permit the plane to be anywhere in the 3D space.
- the livewire tool can be applied directly on a 2D image embedded in a 3D space.
- the basic segmentation working principle stays the same, but an extra transformation is used between the mouse position and the image coordinates. If a 2D image is treated as a plane in the 3D space, the intersection point of a ray originating from the mouse position with this plane is found.
- the calculation of the distance k can be further simplified if the specific application deals only with planes that are orthogonal to one of the coordinate axes.
- the translation from object coordinates to image coordinates is just a scale factor. Since each side of the plane in octant-view coordinates ranges from 0 to 1, the tool only needs to multiply by the number of pixels along each side to get the actual location of a pixel in the image.
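A minimal sketch of this mouse-to-image transform, assuming (as the text above does) a plane orthogonal to a coordinate axis inside the unit cube, is given below; the function and parameter names are illustrative.

```cpp
#include <array>
#include <optional>
#include <utility>

// Map a mouse ray (origin o and direction d, in octant-view coordinates) to
// pixel coordinates on an axis-aligned image plane in the unit cube. The
// names and axis convention are illustrative.
struct Ray { std::array<double,3> o, d; };

std::optional<std::pair<int,int>> mouseToImage(
    const Ray& r,
    int axis,        // 0, 1 or 2: the coordinate axis the plane is orthogonal to
    double offset,   // plane position along that axis, in [0, 1]
    int pixelsU, int pixelsV)
{
    if (r.d[axis] == 0.0) return std::nullopt;   // ray parallel to the plane
    // Distance k along the ray; trivial because the plane is orthogonal to a
    // coordinate axis (the simplification noted above).
    double k = (offset - r.o[axis]) / r.d[axis];
    if (k < 0.0) return std::nullopt;            // plane is behind the viewer

    int u = (axis + 1) % 3, v = (axis + 2) % 3;  // the two in-plane axes
    double pu = r.o[u] + k * r.d[u];
    double pv = r.o[v] + k * r.d[v];
    if (pu < 0.0 || pu > 1.0 || pv < 0.0 || pv > 1.0)
        return std::nullopt;                     // intersection outside the image

    // Object-to-image translation is just a scale factor: each side of the
    // plane runs from 0 to 1, so multiply by the pixel count along that side.
    return std::pair<int,int>{ static_cast<int>(pu * pixelsU),
                               static_cast<int>(pv * pixelsV) };
}
```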
- the OV interface is developed using OpenGL, as known in the art.
- pixel rectangle transfer operations are used instead of redrawing the whole scene during a livewire drawing session.
- the scene is fixed during the drawing session except for the changes of a piece of the livewire path. Thus, there is no need to redraw everything for this change.
- a copy of the whole image frame is kept before entering the livewire drawing session.
- the bounding box is tracked for the moving piece of the livewire. When an update is needed, the portion of the stored image frame inside the bounding box is copied back to the image frame buffer, and the new piece of the livewire is drawn. This approach is more efficient, in general, than redrawing the whole scene.
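The sketch below illustrates such a dirty-rectangle update using legacy OpenGL pixel-rectangle transfers. glReadPixels, glDrawPixels, and glPixelStorei are standard calls of that era (glWindowPos2i requires OpenGL 1.4); the buffer-management details are assumptions, not the patent's implementation.

```cpp
#include <GL/gl.h>
#include <vector>

// Dirty-rectangle refresh for a livewire drawing session using legacy OpenGL
// pixel-rectangle transfers. Sketch only: buffer ownership and sizing are
// assumptions.
static std::vector<GLubyte> savedFrame;  // frame copy taken at session start
static int frameW = 0, frameH = 0;

// Keep a copy of the whole image frame before entering the drawing session.
void saveFrame(int w, int h) {
    frameW = w; frameH = h;
    savedFrame.resize(static_cast<size_t>(w) * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, savedFrame.data());
}

// Restore only the bounding box of the moving livewire piece; the new path
// segment is then drawn on top (drawing code omitted).
void restoreDirtyRect(int x, int y, int w, int h) {
    glPixelStorei(GL_UNPACK_ROW_LENGTH, frameW);  // stride of the saved frame
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, x);      // select the sub-rectangle
    glPixelStorei(GL_UNPACK_SKIP_ROWS, y);
    glWindowPos2i(x, y);                          // raster position, window coords
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, savedFrame.data());
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);       // reset unpack state
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
}
```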
- a special drawing mode can be achieved in the octant view setting for livewire in two and a half dimensions (“2½D”).
- An octant view can be set up with two or three orthogonal images. This view is taken as a projected image and used as the working image for the livewire tool. After the boundaries of some regions of the projected image are outlined by the livewire tool, the boundary is back-projected to the 3D space. In this way, livewire draws the contours in different planes in a single drawing session.
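A back-projection step along these lines might look as follows: each contour pixel of the projected image is cast along its viewing ray and assigned to the nearest orthogonal plane it hits. This is a sketch under assumed types, mirroring the intersection sketch above.

```cpp
#include <array>
#include <limits>
#include <vector>

// Back-project a contour drawn on the composite 2-1/2D image: each contour
// pixel is cast along its viewing ray and assigned to the nearest axis-
// aligned plane it hits inside the unit cube. Types are illustrative.
struct Ray3  { std::array<double,3> o, d; };
struct Plane { int axis; double offset; };  // orthogonal to a coordinate axis

std::vector<std::array<double,3>> backProject(
    const std::vector<Ray3>& contourRays,   // one viewing ray per contour pixel
    const std::vector<Plane>& planes)       // the two or three visible planes
{
    std::vector<std::array<double,3>> points3d;
    for (const Ray3& r : contourRays) {
        double bestK = std::numeric_limits<double>::infinity();
        std::array<double,3> best{};
        for (const Plane& p : planes) {
            if (r.d[p.axis] == 0.0) continue;           // parallel to this plane
            double k = (p.offset - r.o[p.axis]) / r.d[p.axis];
            if (k < 0.0 || k >= bestK) continue;        // behind viewer or farther
            std::array<double,3> q{ r.o[0] + k * r.d[0],
                                    r.o[1] + k * r.d[1],
                                    r.o[2] + k * r.d[2] };
            bool inside = true;                         // stay within the cube
            for (int a = 0; a < 3; ++a)
                if (q[a] < 0.0 || q[a] > 1.0) inside = false;
            if (inside) { bestK = k; best = q; }
        }
        if (bestK < std::numeric_limits<double>::infinity())
            points3d.push_back(best);                   // the 3D contour point
    }
    return points3d;
}
```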
- Turning to FIG. 3 , an example demonstrating a livewire drawing session is indicated generally by the reference numeral 300 .
- the representation of a femur bone is extracted from the data volume. This was no easy task in the past, since ordinary image processing techniques, such as thresholding or region growing, typically fail in this case: there is no clear boundary in some areas where the femur bone is very close to the pelvis.
- FIG. 3( a ) shows a livewire drawing session as started.
- the live path is indicated on the display screen in an orange color.
- FIG. 3( b ) shows that one portion of the boundary is fixed, and this is indicated on the display screen in a purple color.
- FIG. 3( c ) shows one contour completed with just three clicks.
- FIG. 3( d ) shows switching to a plane in the other axis.
- the new working plane becomes gray scale and the old working plane becomes transparent.
- Turning to FIG. 4 , which continues the sequence of FIG. 3 , the continued example demonstrating a livewire drawing session is indicated generally by the reference numeral 400 .
- FIG. 4( e ) shows that the scene is rotated to get a better angle for drawing on the new plane, and a new livewire drawing session is started.
- FIG. 4( f ) shows one portion of the boundary that is fixed in the new plane.
- FIG. 4( g ) shows two completed contours in 3D space after the planes are turned to transparent modes. In this example, the whole process took only about a minute and a half.
- FIG. 4( h ) shows results in a wire-frame format.
- Turning to FIG. 5 , which continues the sequence of FIGS. 3 and 4 , the continued example demonstrating a livewire drawing session is indicated generally by the reference numeral 500 .
- FIG. 5( i ) shows the session results in a surface shaded format.
- Turning to FIG. 6 , livewire action in the special 2½D mode is indicated generally by the reference numeral 600 .
- FIG. 6( a ) displays a composite scene when the special drawing mode is turned on. Here, three gray scale images intersect each other.
- FIG. 6( b ) shows that the livewire is drawn in this composite image.
- FIG. 6( c ) shows the finished contour.
- FIG. 6( d ) shows the angle turned to reveal that the contour is indeed a 3D contour.
- embodiments of the present disclosure provide powerful 2D segmentation tools for applications in a 3D environment, enabling users to extract significant 3D features and 3D regions of interest.
- Preferred embodiments can serve as very useful 3D editing tools in clinical applications.
- the user interface is an important part of deploying clinical software, since the interface primarily determines whether the software will be utilized effectively. In designing a clinical user interface, a designer is well advised to consider that the software must conform to the physician's needs, not the other way around. The user interface must be intuitive and easy to learn. It is crucial that the user interface not burden the physician with the details of implementation.
- the teachings of the present disclosure are implemented as a combination of hardware and software.
- the software is preferably implemented as an application program tangibly embodied on a program storage unit.
- the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
- the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces.
- the computer platform may also include an operating system and microinstruction code.
- the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
- various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/796,864 US7561725B2 (en) | 2003-03-12 | 2004-03-09 | Image segmentation in a three-dimensional environment |
PCT/US2004/007301 WO2004081871A1 (en) | 2003-03-12 | 2004-03-11 | Image segmentation in a three-dimensional environment |
DE112004000377T DE112004000377B4 (en) | 2003-03-12 | 2004-03-11 | Method and device Image segmentation in a three-dimensional working environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US45411703P | 2003-03-12 | 2003-03-12 | |
US10/796,864 US7561725B2 (en) | 2003-03-12 | 2004-03-09 | Image segmentation in a three-dimensional environment |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050018902A1 (en) | 2005-01-27 |
US7561725B2 (en) | 2009-07-14 |
Family
ID=32994543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/796,864 Active 2026-01-15 US7561725B2 (en) | 2003-03-12 | 2004-03-09 | Image segmentation in a three-dimensional environment |
Country Status (3)
Country | Link |
---|---|
US (1) | US7561725B2 (en) |
DE (1) | DE112004000377B4 (en) |
WO (1) | WO2004081871A1 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050101864A1 (en) * | 2003-10-23 | 2005-05-12 | Chuan Zheng | Ultrasound diagnostic imaging system and method for 3D qualitative display of 2D border tracings |
JP5022716B2 (en) * | 2007-01-24 | 2012-09-12 | 株式会社東芝 | Ultrasonic diagnostic apparatus and control program for ultrasonic diagnostic apparatus |
US7894625B2 (en) * | 2007-03-22 | 2011-02-22 | The Procter & Gamble Company | Method for developing three dimensional surface patterns for a papermaking belt |
US7978191B2 (en) * | 2007-09-24 | 2011-07-12 | Dolphin Imaging Systems, Llc | System and method for locating anatomies of interest in a 3D volume |
US8111893B2 (en) * | 2009-06-09 | 2012-02-07 | Wisconsin Alumni Research Foundation | Method for dynamic prior image constrained image reconstruction |
US7916828B1 (en) * | 2010-01-06 | 2011-03-29 | General Electric Company | Method for image construction |
US9189890B2 (en) * | 2010-04-15 | 2015-11-17 | Roger Lin | Orientating an oblique plane in a 3D representation |
US9524579B2 (en) * | 2010-04-15 | 2016-12-20 | Roger Lin | Orientating an oblique plane in a 3D representation |
JP5838195B2 (en) | 2010-04-16 | 2016-01-06 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Segmentation of image data |
EP2682068B1 (en) | 2011-03-01 | 2017-11-08 | Dolphin Imaging Systems, LLC | System and method for generating profile change using cephalometric monitoring data |
EP2693976B1 (en) * | 2011-04-07 | 2017-06-14 | Dolphin Imaging Systems, LLC | System and method for three-dimensional maxillofacial surgical simulation and planning |
US8417004B2 (en) | 2011-04-07 | 2013-04-09 | Dolphin Imaging Systems, Llc | System and method for simulated linearization of curved surface |
US8650005B2 (en) | 2011-04-07 | 2014-02-11 | Dolphin Imaging Systems, Llc | System and method for three-dimensional maxillofacial surgical simulation and planning |
CN102968822A (en) * | 2012-08-23 | 2013-03-13 | 华南理工大学 | Three-dimensional medical image segmentation method based on graph theory |
CN102968791B (en) * | 2012-10-26 | 2016-12-21 | 深圳市旭东数字医学影像技术有限公司 | Exchange method that 3 d medical images figure shows and system thereof |
CN104063207B (en) * | 2013-03-20 | 2018-11-09 | 腾讯科技(深圳)有限公司 | A kind of click hit method and system of window logic |
CN110781264A (en) * | 2019-10-28 | 2020-02-11 | 新疆维吾尔自治区测绘科学研究院 | Method for applying massive geographic information data to mobile terminal APP |
EP3866120A1 (en) * | 2020-02-17 | 2021-08-18 | Koninklijke Philips N.V. | Image segmentation system |
CN113920128B (en) * | 2021-09-01 | 2023-02-21 | 北京长木谷医疗科技有限公司 | Knee joint femur tibia segmentation method and device |
-
2004
- 2004-03-09 US US10/796,864 patent/US7561725B2/en active Active
- 2004-03-11 WO PCT/US2004/007301 patent/WO2004081871A1/en active Application Filing
- 2004-03-11 DE DE112004000377T patent/DE112004000377B4/en not_active Expired - Lifetime
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5264836A (en) * | 1991-01-15 | 1993-11-23 | Apple Computer, Inc. | Three dimensional cursor |
US5956418A (en) * | 1996-12-10 | 1999-09-21 | Medsim Ltd. | Method of mosaicing ultrasonic volumes for visual simulation |
US6434260B1 (en) * | 1999-07-12 | 2002-08-13 | Biomedicom, Creative Biomedical Computing Ltd. | Facial imaging in utero |
US20030012419A1 (en) * | 1999-10-15 | 2003-01-16 | Vittorio Accomazzi | Perspective with shear warp |
US20040170311A1 (en) * | 1999-10-15 | 2004-09-02 | Vittorio Accomazzi | Perspective with shear warp |
US7031505B2 (en) * | 1999-10-15 | 2006-04-18 | Cedara Software Corp. | Perspective with shear warp |
US6468218B1 (en) * | 2001-08-31 | 2002-10-22 | Siemens Medical Systems, Inc. | 3-D ultrasound imaging system and method |
US20030103682A1 (en) * | 2001-12-05 | 2003-06-05 | Microsoft Corporation | Methods and system for providing image object boundary definition by particle filtering |
Non-Patent Citations (6)
Title |
---|
David T. Gering: "A System for Surgical Planning and Guidance using Image Fusion and Interventional MR" Thesis: Massachusetts Institute of Technology 1999. * |
Liang J et al: "United Snakes" [image analysis software], Computer Vision, 1999: The Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, Sep. 20-27, 1999, Los Alamitos, CA, USA, IEEE Comput. Soc., pp. 933-940, XP010350529, ISBN: 0-7695-0164-8; the whole document.
Marianna Jakab and Mark Anderson: "A Practical Guide to the 3D Slicer", Online, Feb. 2001, XP002290128. Retrieved from the Internet: URL: http://spl.bwh.harvard.edu:8000/pages/papers/slicer/manual/slicerl-manual.htm, retrieved on Jul. 27, 2004; the whole document. *
Marianna Jakab and Mark Anderson: "A Practical Guide to the 3D Slicer", Online, Feb. 2001, XP002290128. Retrieved from the Internet: URL: http://spl.bwh.harvard.edu:8000/pages/papers/slicer/manual/slicer/-manual.htm, retrieved on Jul. 27, 2004; the whole document.
O'Donnell, L., "Semi-Automatic Medical Image Segmentation", Master's Thesis, MIT, 2001. * |
Screenshot and source code of http://web.archive.org/web/20010920135022/http://slicer.ai.mit.edu/index.html showing link to http://spl.bwh.harvard.edu:8000/pages/papers/slicer/manuals/slicer-manual.htm archived on Sep. 9, 2001. * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050231530A1 (en) * | 2004-04-15 | 2005-10-20 | Cheng-Chung Liang | Interactive 3D data editing via 2D graphical drawing tools |
US7739623B2 (en) * | 2004-04-15 | 2010-06-15 | Edda Technology, Inc. | Interactive 3D data editing via 2D graphical drawing tools |
US20080226169A1 (en) * | 2007-03-13 | 2008-09-18 | Siemens Corporate Research, Inc. | Accelerated image volume segmentation using minimal surfaces given a boundary |
US8270690B2 (en) * | 2007-03-13 | 2012-09-18 | Siemens Aktiengesellschaft | Accelerated image volume segmentation using minimal surfaces given a boundary |
US20080281182A1 (en) * | 2007-05-07 | 2008-11-13 | General Electric Company | Method and apparatus for improving and/or validating 3D segmentations |
US20090153548A1 (en) * | 2007-11-12 | 2009-06-18 | Stein Inge Rabben | Method and system for slice alignment in diagnostic imaging systems |
US8754888B2 (en) | 2011-05-16 | 2014-06-17 | General Electric Company | Systems and methods for segmenting three dimensional image volumes |
WO2013040673A1 (en) * | 2011-09-19 | 2013-03-28 | The University Of British Columbia | Method and systems for interactive 3d image segmentation |
US9317927B2 (en) | 2011-09-19 | 2016-04-19 | Oxipita Inc. | Methods and systems for interactive 3D image segmentation |
US20130104083A1 (en) * | 2011-10-21 | 2013-04-25 | Digital Artforms, Inc. | Systems and methods for human-computer interaction using a two handed interface |
US20130169639A1 (en) * | 2012-01-04 | 2013-07-04 | Feng Shi | System and method for interactive contouring for 3d medical images |
US8970581B2 (en) * | 2012-01-04 | 2015-03-03 | Carestream Health, Inc. | System and method for interactive contouring for 3D medical images |
US9807263B2 (en) * | 2012-10-31 | 2017-10-31 | Conduent Business Services, Llc | Mobile document capture assistance using augmented reality |
US20210304460A1 (en) * | 2020-03-30 | 2021-09-30 | Vieworks Co., Ltd. | Method and appararus for generating synthetic 2d image |
Also Published As
Publication number | Publication date |
---|---|
DE112004000377T5 (en) | 2006-04-27 |
US20050018902A1 (en) | 2005-01-27 |
DE112004000377B4 (en) | 2010-04-15 |
WO2004081871A1 (en) | 2004-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7561725B2 (en) | Image segmentation in a three-dimensional environment | |
US7739623B2 (en) | Interactive 3D data editing via 2D graphical drawing tools | |
US6867787B1 (en) | Character generator and character generating method | |
US20050162445A1 (en) | Method and system for interactive cropping of a graphical object within a containing region | |
US5818455A (en) | Method and apparatus for operating on the model data structure of an image to produce human perceptible output using a viewing operation region having explicit multiple regions | |
JP4510817B2 (en) | User control of 3D volume space crop | |
US6771262B2 (en) | System and method for volume rendering-based segmentation | |
DE69132888T2 (en) | Image display method and system | |
DE69732663T2 (en) | METHOD FOR GENERATING AND CHANGING 3D MODELS AND CORRELATION OF SUCH MODELS WITH 2D PICTURES | |
JP4584575B2 (en) | Image processing method for interacting with 3D surface displayed in 3D image | |
US20060177133A1 (en) | Systems and methods for segmentation of volumetric objects by contour definition using a 2D interface integrated within a 3D virtual environment ("integrated contour editor") | |
JP4199663B2 (en) | Tactile adjustment by visual image in human-computer interface | |
JPH0327949B2 (en) | ||
JP2006512133A (en) | System and method for displaying and comparing 3D models | |
JP3705826B2 (en) | Virtual three-dimensional window display control method | |
US9053574B2 (en) | Calibrated natural size views for visualizations of volumetric data sets | |
US7315304B2 (en) | Multiple volume exploration system and method | |
EP1999717B1 (en) | Systems and methods for interactive definition of regions and volumes of interest | |
US5615317A (en) | Method for blending edges of a geometric object in a computer-aided design system | |
US20220254094A1 (en) | Image rendering apparatus and method | |
Mills et al. | IMEX: A tool for image display and contour management in a windowing environment | |
Wang et al. | Progressive sketching with instant previewing | |
Glueck et al. | Multiscale 3D reference visualization | |
JPS62102369A (en) | Generation of 3-d perspective projection of graphic object | |
Chan | World space user interface for surface pasting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SIEMENS CORPORATE RESEARCH INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIANG, CHENG-CHUNG;REEL/FRAME:015144/0025 Effective date: 20040916 |
|
AS | Assignment |
Owner name: SIEMENS CORPORATE RESEARCH INC., NEW JERSEY Free format text: DOCUMENT RE-RECORDED TO CORRECT AN ERROR CONTAINED IN PROPERTY NUMBER 10/796,854 AND TO CORRECT THE EXECUTION DATE. DOCUMENT PREVIOUSLY RECORDED AT REEL 015144, FRAME 0025.;ASSIGNOR:LIANG, CHENG-CHUNG;REEL/FRAME:016022/0310 Effective date: 20040909 |
|
AS | Assignment |
Owner name: SIEMENS MEDICAL SOLUTIONS USA, INC.,PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS CORPORATE RESEARCH, INC.;REEL/FRAME:016860/0484 Effective date: 20051011 Owner name: SIEMENS MEDICAL SOLUTIONS USA, INC., PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS CORPORATE RESEARCH, INC.;REEL/FRAME:016860/0484 Effective date: 20051011 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: SIEMENS CORPORATION, NEW JERSEY Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:SIEMENS CORPORATE RESEARCH, INC.;SIEMENS CORPORATION;REEL/FRAME:044734/0883 Effective date: 20090902 |
|
AS | Assignment |
Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS CORPORATION;REEL/FRAME:044877/0714 Effective date: 20180130 |
|
AS | Assignment |
Owner name: SIEMENS HEALTHCARE GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS AKTIENGESELLSCHAFT;REEL/FRAME:044933/0913 Effective date: 20180205 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: SIEMENS HEALTHINEERS AG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS HEALTHCARE GMBH;REEL/FRAME:066088/0256 Effective date: 20231219 |