
AU4076999A - Method and apparatus for generating 3d models from medical images - Google Patents

Method and apparatus for generating 3d models from medical images

Info

Publication number
AU4076999A
AU4076999A
Authority
AU
Australia
Prior art keywords
patient
images
tooth
model
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU40769/99A
Inventor
William E. Harrell Jr.
David C. Hatcher
Hassan Mostafavi
Charles Palm
Terry J. Sorensen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ACUSCAPE INTERNATIONAL Inc
Original Assignee
ACUSCAPE INTERNATIONAL Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ACUSCAPE INTERNATIONAL Inc filed Critical ACUSCAPE INTERNATIONAL Inc
Publication of AU4076999A


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C: DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00: Impression cups, i.e. impression trays; Impression methods
    • A61C9/004: Means or methods for taking digitized impressions
    • A61C9/0046: Data acquisition means or methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G06V10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147: Details of sensors, e.g. sensor lenses
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36: Image-producing devices or illumination devices not otherwise provided for
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Vascular Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Epidemiology (AREA)
  • Dentistry (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Description

WO 99/59106 PCT/US99/10566

Method and Apparatus for Generating 3D Models from Medical Images

Copyright Notice

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.

The Field of the Invention

This invention relates to the field of medical imaging. In particular, the invention relates to the generation and use of three-dimensional medical images and models.

Background of the Invention

Efforts to represent images in three-dimensional form go back to the invention of the stereoscope in the 1800s. These attempts continued through the 1950s, when 3-D movies (characterized by the red and blue 3-D glasses which served as color filters to separate left and right images) were briefly popular. With the advent of modern computer technology, some companies have engaged in considerable efforts to capture and reproduce three-dimensional information.

Typically, three-dimensional information about a scene has been represented by selecting a number of points and storing information about each point, such as its color, intensity and distance from the camera. For example, a Cyberware scanner generates such information by rotating a camera around an object to be modeled and capturing that information at particular points. For a high-resolution model, many rotations about the object are required to capture the model information; the vertical displacement of the plane of rotation is decremented by a resolution amount after each rotation. Such models, while accurate, result in a huge number of points at which information must be captured and represented. Some portions of objects, however, do not require such high resolution.
As a result, model builders will frequently manually remove unneeded vertices to simplify the processing required when displaying and manipulating a three-dimensional model.

Such three-dimensional models are typically rendered as wire frames. That is, a plurality of points are identified, corresponding to the points at which image information is captured, and the points are displayed together with lines connecting each point with adjacent points. When models are displayed in this manner, they are typically called wireframes because the lines between the points appear to constitute a wire mesh. The individual points in such a wireframe are frequently called vertices because they frequently appear at the vertex of the angles formed by lines going to adjacent points.

Software is known for constructing and manipulating three-dimensional models. An example of such software is 3-D Studio Max™ by Autodesk, Inc. Typically, such software packages have the capability to render, or provide a surface texture over, the surface of the wireframe model.

A number of attempts have been made to standardize the representation of three-dimensional information. Of current popularity is the Virtual Reality Modeling Language (VRML), found with some frequency in an Internet context. Wireframe models are commercially available from a variety of sources.

In the medical area, magnetic resonance imaging (MRI) and other imaging technologies can accurately display two-dimensional slices of a patient. The slices include extremely large amounts of data. Some programs allow doctors to construct 3-D models from these 2-D images, but this process requires a large amount of data processing. The resulting 3-D models are accurate in that they describe exactly what is in the images, but the models do not have any tie to human anatomy.
For example, an irregular shape in a 3-D model of a skull may be a tumor, but the system does not relate this additional information to the shape. Even worse, the 3-D model does not even have information indicating that the shape is within a skull. The doctor is responsible for making these determinations. Thus, the 3-D models have limited use. Another problem with many of these systems is that they do not allow the doctor to build the model using a combination or range of imaging technologies (e.g., x-rays, MRIs and photographs). Thus, the models are typically defined using only one imaging technology.

A number of problems exist with the existing technology. First, a high degree of technical expertise is required to create and manipulate three-dimensional models. Further, computer processing time is significant and, as a result, special-purpose machines, such as those produced by Silicon Graphics, Inc., are commonly used to generate and manipulate three-dimensional models. The user interfaces of available commercial software for dealing with three-dimensional models are highly technical and generally unsuited for use by a person whose specialty is not in the computer sciences. Also, the 3-D models are not related to medical information about a patient (e.g., a shape in a 3-D model is only a shape; there is no information that the shape is a tumor or body part). Also, some technologies do not allow doctors to build models from different types of images (e.g., x-rays, MRIs, and photographs). Therefore, what is desired is an improved 3-D modeling and generation system.

Summary of the Invention

The following summarizes various embodiments of the invention. One aspect of the invention is directed to providing a generic software tool for creating and manipulating three-dimensional models for medical applications. In one embodiment, a number of modules are used to achieve this result.
These modules include a Sculptor module, a Clinician module and an Executor module.

The Sculptor module maps all acquired imaging, including images from disparate sources, into a single 3D matrix or database. The images are generated using a number of different techniques, such as optical and x-ray. The Sculptor allows a user to identify the location of different anatomical points in each of the images. Thus, the Sculptor allows a user to relate different anatomical points to each other in a 3-D space and also relate the points to the images.

The Clinician/Consultant module uses the related points to modify or customize a stock model (e.g., a standard anatomical 3-D model). The customized model that is created corresponds to a 3-D model of the patient's anatomy. The model is "smart" in that when certain changes are made to the dot or vertex locations of the model, the remainder of the model can be adjusted, or morphed, to make corresponding changes. Additionally, objects in the model know what part of the anatomy they represent. For example, an object representing the patient's tooth is associated with data indicating that the object is a tooth. This allows analysis of a patient's anatomy to be performed automatically. The Clinician/Consultant is a database query tool that allows for display or visualization of the anatomy and function, manipulation of objects for treatment planning, and model analyses.

A third module, called the Executor, is a database that provides overall system file and image management and coordinates the Sculptor and Clinician/Consultant modules.

The various features of the invention are illustrated in the context of an application to orthodontics. In this application, the stock model is a model of the skull and associated facial soft tissues, including the upper and lower jaws.
In the examples shown, this model has approximately 300 objects which can be manipulated in the Clinician module to facilitate the kinds of tasks routinely undertaken by an orthodontist.

Some embodiments of the invention include the functionality of some or all of the above modules. For example, in some embodiments, only a subset of the functions performed by the Sculptor is included (e.g., the ability to define related points in multiple images). Other embodiments of the invention include a method and apparatus for performing medical analysis of the patient's 3-D model.

Although many details have been included in the description and the figures, the invention is defined by the scope of the claims. Only limitations found in those claims apply to the invention.

Brief Description of the Drawings

The figures illustrate the invention by way of example, and not limitation. Like references indicate similar elements.

Figure 1 illustrates a computer system including one embodiment of the invention.

Figure 2 illustrates an architecture of the software used in one embodiment of the invention.

Figure 3 illustrates capturing images for use in the system.

Figure 4 and Figure 5 illustrate the calibration frame.

Figure 6 illustrates an example method of calibrating images, generating a patient specific model, and performing analysis from the calibrated images and the patient specific model.

Figure 7 through Figure 24 illustrate user interfaces for a sculptor application.

Figure 25 through Figure 40 illustrate user interfaces for a clinician application.

The Description

Definitions

The following definitions will be helpful in understanding the description.

Computer - any computing device (e.g., PC-compatible computer, Unix workstation, handheld device, etc.). Generally, a computer includes a processor and a memory. A computer can include a network of computers.
Handheld Device (or Palmtop Computer) - a computer with a smaller form factor than a desktop computer or a laptop computer. Examples of a handheld device include the Palm III™ handheld computer and Microsoft's palm-sized computers.

User - any end user who would normally wish to retrieve information from the World Wide Web.

Internet - a collection of information stored in computers physically located throughout the world. Much of the information on the Internet is organized onto electronic pages. Users typically bring one page to their computer screen, discover its contents, and have the option of bringing more pages of information.

Client - a computer used by the user to make a query.

Server - a computer that supplies information in response to a query, or performs intermediary tasks between a client and another server.

World Wide Web (or Web or web) - one aspect of the Internet that supports client and server computers handling multimedia pages. Clients use software, such as the Netscape Communicator® browser, to view pages. Server computers use server software to maintain pages for clients to access.

Program - a sequence of instructions that can be executed by a computer. A program can include other programs. A program can include only one instruction.

Application - a program.

The detailed descriptions which follow may be presented in terms of program procedures executed on a computer or network of computers. These procedural descriptions and representations are the means used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure, program or application is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. These steps are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.

Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of the present invention; the operations are machine operations. Useful machines for performing the operations of the present invention include general-purpose digital computers or similar devices.

The present invention also relates to apparatus for performing these operations. This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. The procedures presented herein are not inherently related to a particular computer or other apparatus. Various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may prove more convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given.

System

Figure 1 illustrates a computer 110 that can be used to carry out the invention.
The following paragraphs first list the elements of Figure 1, then describe how they are connected, and then define those elements.

Figure 1 includes a computer 110, a sculptor 115, a clinician/consultant 125, and an executor 135. The sculptor 115 includes a display of a user interface having a number of patient images 150 that also show a calibration frame 140. The clinician/consultant 125 includes a similar user interface that includes a view of a patient specific model 160 and an analysis window 127. The analysis window 127 includes an example analysis 170. The executor 135 includes image data 137 and patient model data 139.

This paragraph describes how the elements of Figure 1 are connected. The sculptor 115 and clinician/consultant 125 communicate with the executor 135. The sculptor 115 and the clinician/consultant 125 can extract and manipulate information from the image data 137 and the patient model data 139 through the executor 135.

The following paragraphs describe the elements of Figure 1 in greater detail. How these elements are used to generate a 3D model of a patient's anatomy is described in relation to Figure 2.

The computer 110 represents a computer system upon which the sculptor 115, the clinician/consultant 125, and the executor 135 can execute. The computer 110 is representative of a standard personal computer such as is available from Dell Computers, Inc. Of course, any number of different types of computers could be used as the computer 110. What is important is that the computer 110 have some sort of processor and some memory.

As an alternative to the system of Figure 1, the sculptor 115 and the executor 135 may run on one computer at one time, while at another time the clinician/consultant 125 and the executor 135 run on another computer. Alternatively, all three programs can run on different computers. The computers can be linked together by a network such as the Internet.
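The coordination among the three programs can be sketched in drastically simplified form as follows. All class and method names here are illustrative only, not taken from the patent; the sketch shows only the data flow in which the executor owns the stored image and model data and the other two programs reach it through the executor:

```python
# Hypothetical sketch of the three-program architecture.  The Executor
# holds the image data (137) and patient model data (139); the Sculptor
# and Clinician/Consultant access them only through the Executor.

class Executor:
    """Database and file manager coordinating the other two programs."""
    def __init__(self):
        self.image_data = {}    # patient id -> captured images
        self.model_data = {}    # patient id -> sculptor output

    def store_images(self, patient_id, images):
        self.image_data[patient_id] = images

    def store_model_data(self, patient_id, data):
        self.model_data[patient_id] = data

class Sculptor:
    """Calibrates patient images and records anatomical landmarks."""
    def __init__(self, executor):
        self.executor = executor

    def process(self, patient_id, images):
        # Placeholder landmark: one 3D anatomical location, in millimeters.
        landmarks = {"tooth_1": (12.0, 4.5, 8.1)}
        self.executor.store_images(patient_id, images)
        self.executor.store_model_data(patient_id, landmarks)

class Clinician:
    """Morphs a stock model using the sculptor's landmark data."""
    def __init__(self, executor):
        self.executor = executor

    def build_patient_model(self, patient_id):
        landmarks = self.executor.model_data[patient_id]
        return {"objects": list(landmarks)}   # stand-in for real morphing

executor = Executor()
Sculptor(executor).process("patient-300", ["xray.png", "photo_l.png"])
model = Clinician(executor).build_patient_model("patient-300")
print(model)   # {'objects': ['tooth_1']}
```

Because all persistent data passes through the executor, the two interactive programs can run on separate networked computers, as described above, with the executor mediating storage and transfer.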
The sculptor 115 represents a computer program in which a number of different types of patient images 150 can be calibrated using the images of the calibration frame 140. Note that the patient images 150 are from multiple sources; in particular, in this example, an x-ray image and two photographs are shown. The sculptor 115 allows a technician to calibrate the images and identify a number of anatomical locations in the images.

The patient images 150 can be extracted from the image data 137. The image data 137 can be imported from an external source, either by transmission over a network or by scanning of x-ray or optical images, for example. Other embodiments can include direct capture of x-ray images or other types of media images. Alternatively, the image data 137 need not be retrieved from the executor 135; the image data may be directly imported into the sculptor 115 and then, later on, possibly be stored in the image data 137.

The calibration frame 140 is an apparatus that includes a number of calibration targets that can be seen in the patient images 150. The calibration frame 140 is worn by the patient during the capturing of the patient images.

The patient model data 139 represents the data generated by the sculptor 115 that can be used to morph the patient specific model 160, along with any other information that would be important to patient records. This output of the sculptor 115 can be included in the form of two transport files (the .scl file and the .cln file). The executor passes these files to the clinician/consultant.

Turning to the clinician/consultant 125, the data from the sculptor 115 is used by the clinician/consultant 125 to morph a stock anatomy model into a patient specific model 160. The stock anatomy model is a 3D model of a standard person's anatomy (e.g., a skull, possibly having flesh).
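A drastically simplified sketch of this landmark-driven morphing may help fix ideas. The patent does not specify the morphing algorithm; the inverse-distance weighting, the function name, and the coordinates below are all illustrative assumptions:

```python
# Hypothetical sketch: morph stock-model vertices so that stock landmarks
# land on the patient landmarks measured in the sculptor.  Each vertex is
# moved by an inverse-distance-weighted blend of landmark displacements.

def morph(vertices, stock_landmarks, patient_landmarks):
    # Displacement of each landmark from its stock position.
    deltas = [tuple(p - s for s, p in zip(sl, pl))
              for sl, pl in zip(stock_landmarks, patient_landmarks)]
    morphed = []
    for v in vertices:
        # Weight each landmark by inverse squared distance to the vertex.
        weights = [1.0 / (sum((a - b) ** 2 for a, b in zip(v, sl)) + 1e-9)
                   for sl in stock_landmarks]
        total = sum(weights)
        offset = [sum(w * d[i] for w, d in zip(weights, deltas)) / total
                  for i in range(3)]
        morphed.append(tuple(a + b for a, b in zip(v, offset)))
    return morphed

# A stock jaw landmark at the origin; the patient's sits 2 units forward.
stock = [(0.0, 0.0, 0.0)]
patient = [(2.0, 0.0, 0.0)]
result = morph([(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)], stock, patient)
print(result)   # [(2.0, 0.0, 0.0), (7.0, 0.0, 0.0)]
```

With a single landmark every vertex inherits the full displacement; with many landmarks, vertices near a landmark follow it closely while distant vertices blend the surrounding displacements.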
The clinician/consultant 125 morphs the stock model into the patient specific model, allowing users to visualize what a patient's anatomy looks like. Simulations of treatment plans can be shown in the clinician/consultant 125. Also, because the patient specific data 137 defines the relative location of a number of parts of the patient's anatomy, the clinician/consultant 125 can be used to perform various types of analyses on the patient's anatomy. The results of these analyses can then be displayed on the patient images as well as in the example analysis window 127.

In the example, which is described throughout, the particular model used will be a stock model of a human skull. The human skull can be used by an orthodontist in planning for, and carrying out, a treatment plan for a particular patient. In this example, the model will have a number of objects, including objects corresponding to each of the patient's teeth, the jaw, and other elements of the skull. Importantly, each of these objects can be manipulated individually in the clinician/consultant 125.

An example of a stock model that may be used is one from Viewpoint Data Labs which is specifically created for orthodontic applications. A fully custom stock model can also be used. The stock model represents the average structure of a piece of anatomy. The Executor (database) will compile normative stock models to match patient demographics of age, race, sex and body type. The stock model has a coordinate system where each point is referenced to another point within itself. The information retrieved in the sculptor 115 allows that stock model to be morphed according to the dimensions and measurements from the sculptor 115. Examples of the individual objects available in the generic stock model, and the resulting patient specific model 160, include individual teeth, the jaw, and the skull.
Each of these objects has a separate coordinate system which is referenced to the coordinate system in the patient specific model 160. In this way, when a particular object is selected, one may manipulate that object and change its relationship to the global reference system of the patient specific model 160.

Other aspects of the invention are some of the new user interface features presented in the sculptor 115 and/or the clinician/consultant 125. These new user interface features will be described in greater detail below.

Figure 2 illustrates the various responsibilities of each of the three programs of Figure 1. The sculptor 115 is responsible for input/output control of the patient images 150. The sculptor 115 allows for a calibration between the various images. 3D measurements of various locations defined in the calibration process can then be determined. The sculptor 115 includes a graphical user interface for performing the various features of the sculptor 115. The viewer supports the viewing of 3D models (useful where a piece of anatomy needs a more detailed identification).

The model matching allows a user to match portions of the stock model to points on one or more patient images 150. In addition, model matching includes the ability to spatially match, or register, two models of the same patient at different points in time. Thus, areas of a model that are not already predefined in the sculptor 115 can be defined. These new locations can then be used for a more accurate morphing process of the particular part of the anatomy of interest, will facilitate a morphological comparison of two patient specific models, and will facilitate the comparison of patient specific models to normative data. For example, if one part of a patient's anatomy requires specialized treatment, a more detailed patient specific model 160 may be desired. In such a case, the sculptor
115 allows the user to identify the location of previously undefined points of the stock model in the patient images 150.

The executor 135 takes responsibility for the database storage and transfer of the image data 137 and the patient model data 139. The executor 135 includes Internet access for communicating with one or more sculptors 115 and one or more clinician/consultants 125. The executor 135 also has encryption capabilities to protect the security of the information stored by the executor 135.

The clinician/consultant 125 includes the following functions. Diagnosis, treatment planning, predictions, analyses, and metrics are all examples of the types of functions that can be performed on the patient specific model 160 and patient model data 139. Examples of these areas are described in greater detail below. The clinician/consultant 125 also keeps track of the stock objects, or stock models, that may be used in the morphing processes. The clinician/consultant 125 includes a graphical user interface and viewer for viewing patient information, the 3D model, analyses, etc.

The morph editor is used to modify any morphing that is done to generate the patient specific model 160.

The simulator simulates the motion of objects within the patient specific model 160. The simulator could also be used to simulate the predicted motion of objects in the patient specific model 160. Thus, the simulator can be used to simulate the movement of a jaw, or treatments such as the straightening of teeth.

Image Capture Example

Figure 3 illustrates example relationships between a patient, a camera and other imaging technologies, for the purpose of capturing images. The captured images can then be imported into the sculptor 115.

A patient 300, a camera 310 and an x-ray device 320 are shown in Figure 3. The camera 310 and the x-ray device 320 can be used to capture the image data 137.
These captured images can all be from one camera, x-ray machine, or the like. However, some of the more important features of the invention can be realized when mixed modes of image capturing are combined. In this example, two devices, the camera 310 and the x-ray machine 320, are used to capture image data about the patient 300 from multiple vantage points. Other example modes of image capture include MRIs, ultrasound imaging, infrared imaging, and the like. The camera 310 and x-ray 320 are merely symbolic of the fact that images of the patient are captured using various types of imaging technology.

In the orthodontic application, it is desirable to have both x-ray (both skeletal and soft tissue) and optical modes for capturing images of the patient 300. Although the camera 310 and x-ray device 320 are shown in the plane of Figure 3, this is not necessary and, in fact, may not be desirable. In the example of an orthodontic application, the preferred x-ray images would include a frontal image, a lateral image, and a frontal image with the head tipped back. The preferred photographic images may include a frontal image and two lateral images.

Example Calibration Frame

Figure 4 illustrates a front view and a side view of a calibration frame 140 that may be used in some embodiments of the invention. Images of this calibration frame 140 appear in the patient images 150. The images of the calibration frame 140 can then be used in the sculptor 115 to calibrate the various images.

The calibration process includes recording the anatomy with the calibration frame in place using any number of imaging modalities. The 3D locations of the calibration markers and an associated coordinate system are included as a priori knowledge within the sculptor. Through calibration of each imported image, the sculptor computes the location of the imaging sources as a point source with seven degrees of freedom (DOF).
Seven DOF includes the x, y, z, yaw, pitch, roll and focal length of the imaging source. The calibration process maps the associated images into the 3D matrix associated with the calibration frame. Two or more calibrated images, through a process of triangulation, can be used to determine the 3D location of any associated points on the image sets.

The calibration frame 140 can include a top strap 405, a strap 410, an adjustment knob 420, and a plexiglass frame 430. The top strap 405, the strap 410, and the adjustment knob 420 work together to keep the calibration frame in a substantially fixed position on the patient 300. Thus, when the images of the patient are captured, from the various modes, a common reference frame is established.

The following describes the calibration frame 140 in greater detail. The strap 410 is designed to encircle the patient's 300 head. The top strap 405 is designed to prevent the strap 410 assembly from dropping too far down on the patient's head.

In some embodiments, the top strap 405 and the strap 410 are part of headgear normally associated with a welding visor from which the face shield has been removed. One of the problems with the welding headgear, by itself, is that it flexes in ways that are undesirable for image capture. This prevents a good common reference frame from being established. Accordingly, a rigid plexiglass frame 430 is used to mount a number of calibration targets 440. The circumference of the strap 410 is adjusted using a ratchet and a knob 420. They can be used to adjust the amount of overlap between the ends of the strap 410.

The calibration targets 440 provide measurement references during the calibration of the various patient images 150. The calibration targets 440 include a number of spherical shapes, possibly having substantial x-ray attenuation properties.
However, this is not a requirement, and may not be desirable depending on the imaging technology that is being used to capture the patient images 150. Alternative embodiments can include different materials, such as cod liver oil capsules, as calibration targets 440. Alternative embodiments can also include different shapes of calibration targets 440, such as crosses. Although BBs, such as shotgun pellets, could be used, it is preferred to use bearings because their spherical shape is held to a closer tolerance.

A characteristic of the calibration targets 440 is that they are visible in both optical and x-ray images. However, what is important with respect to the calibration targets 440 is that they provide a fixed reference frame by which patient images 150 can be calibrated. Thus, they should be viewable in each of the patient images 150. Importantly, some of the calibration targets 440 can be of different types of materials, such that some of the calibration targets 440 appear in some of the images while others of the calibration targets appear in others of the images. As long as enough of the calibration targets 440 are visible in enough of the images, calibration can be performed.

Additionally, a single calibration target could be made of different materials. For example, a cod liver oil calibration target could be positioned very close to a crosshair. The crosshair would indicate the position of the calibration target in the photographs, while the cod liver oil capsule would indicate the position of the calibration target in MRI images.

Generally, the calibration targets 440 are positioned in the calibration frame 140 such that it is unlikely that in any one image the calibration targets will overlap to any great extent. It is also preferable that at least four of the calibration targets are visible from each image perspective.
Thus the shape of the calibration frame for holding the calibration targets 440 may vary from medical application to medical application. Thus multiple different calibration frames can be supported in the sculptor 115. Importantly, when the images have been captured, a session folder is created in which to store the images from the session, and the patient data are stored in a patient folder of the file management system operated under control of the executor 135. The type of calibration frame 140 used can also be stored with that information.

The attachment 450 (also referred to as an appliance) represents another way in which calibration targets can be included in images of the patient 300. For example, where a particular image is restricted to a small area of the patient's head, the calibration attachment 450 can still be used to calibrate. For example, where an x-ray image is collimated to only focus on a smaller portion of the patient's face, the calibration targets 440 in the attachment 450 would still appear in that x-ray image.

Figure 5 illustrates another embodiment of the calibration frame 140. In this example, the plexi-glass frame 430 has a number of bends instead of the continuous curve shown in Figure 4. This facilitates the attachment of calibration attachments 450 to the calibration frame 140.

Figure 5 illustrates a top view 502, a front view 504 and a cross section view 506 of the calibration frame 140. The front view 504 and the cross section view 506 illustrate how attachment sites 530 can be included on the calibration frame 140. (Note, the top view 502 does not illustrate the attachment sites 530, but the sites may be viewable from the top view 502.) The cross sectional view 506 illustrates how an attachment can be attached to an attachment site 530 using a captive knurled thumb screw 550. The attachment 450 can be stabilized using dowel pins 560.
This example illustrates an acrylic appliance (attachment 450). Importantly, Figure 5 illustrates an acrylic appliance support that can be used as part of the calibration frame 140. This is merely illustrative of how appliances or attachments 450 could be attached to the calibration frame 140. What is important is that there is some way to include calibration targets 440 in different images that may not include the calibration frame 140.

Example Method of Creating and Using Patient Specific Data

Figure 6 illustrates one embodiment of the invention where the programs of Figure 1 are executed on one or more computers 110. In this example, the patient images 150 are calibrated to generate the patient specific model 160. This information is also used to perform a number of analyses on the patient model and related data. Figure 6 can be broken down into three general processes: a capturing of the patient specific data 602, generating the patient model 604, and performing analyses and related work on the patient model and related data 606.

Starting with the capturing of the patient specific data 602, at block 610, the calibration frame 140 is mounted on the head of the patient 300. This can be done by a technician at a medical facility. The top strap 405, the strap 410 and the adjustment knob 420 can be used to snugly fit the calibration frame 140 to the head of the patient 300.

Next, at block 620, a number of different images of the patient are captured. Importantly, the calibration frame 140 and/or the attachments 450 are included in these images.

Next, at block 630, the sculptor 115 is used to import all the image data 137. This image data 137 is now calibrated using the image information of the calibration frame 140. In particular, each patient image 150 is associated with a calibration frame template (a computer representation of the calibration frame 140).
A user will match a calibration frame template up with the image of the calibration frame 140 in each of the patient images 150. This tells the computer 110 how the calibration frame 140 is oriented and positioned in the image. Thus, all the points within the image can now be associated with, or referenced to, the calibration frame 140.

As noted above, the calibration process involves calibrating locations relative to the position of the calibration frame 140. As part of this process, it is convenient to define a coordinate system for a particular patient. This coordinate system can then be mapped into the various views of the patient images 150. For example, a first plane may be defined that is approximately parallel to the patient's pupils. A y-plane can then be defined through the mid-sagittal plane of the patient. The last center plane can be determined from the cross product of the other two planes.

Next, at block 640, a number of anatomic locations in each of the images are identified. Examples of this process are described below. What is important, however, is that a set of all the anatomic locations in the patient images 150 is defined. Appendix A includes a list of those locations.

An example of identifying anatomic locations 640 would include such things as identifying the locations of the ears, a trace of the jaw, and the various points on specific teeth. The set of anatomic locations that need to be defined is dependent upon what stock model is to be used and how well the resulting morphed patient specific model 160 should match the patient's exact anatomy. For example, if only a portion of a skull is to be modeled, only those anatomical locations associated with that portion of the skull need be identified. Different medical applications may have different specific features of interest that are identified in the identification of anatomic locations.
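One way the patient coordinate system described above could be constructed is sketched below. The specific landmarks used (pupils, nasion, menton) and the function name are illustrative assumptions, not the patent's prescription; the cross-product construction mirrors the text's derivation of the third plane from the other two.

```python
import numpy as np

def patient_frame(left_pupil, right_pupil, nasion, menton):
    """Build an orthonormal patient coordinate system from 3D landmarks.

    The x-axis runs along the interpupillary line; a rough vertical
    direction comes from two mid-sagittal points; the remaining axes
    fall out of cross products.  Returns a 3x3 matrix whose rows are
    the unit axes of the patient frame."""
    x = right_pupil - left_pupil
    x = x / np.linalg.norm(x)
    down = menton - nasion                 # approximate vertical direction
    z = np.cross(x, down)                  # normal pointing out of the face
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                     # completes a right-handed frame
    return np.vstack([x, y, z])
```

Landmarks measured in calibration-frame coordinates can then be expressed in this patient frame by multiplying by the returned matrix.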
Orthodontists, for example, tend to concentrate on landmarks in the skull. These landmarks tend to be points concentrated on the mid-sagittal plane, teeth, and jaw. The ability to identify these landmarks and cross correlate them in the various patient images 150 is an important feature for specific medical applications. However, the specific features and anatomic locations that are important to a particular discipline will vary from application to application. Once the calibration frame has been used to calibrate the various patient images 150, however, all images and anatomic locations can then be referenced to that calibration frame. This information can then be stored in the transport file (.scl file).

The following describes an example way of identifying a landmark in more than one image. It is important to identify the landmark location in multiple views to completely determine the 3D co-ordinates (x, y, z) of that landmark. To do this, the user can perform the following steps using the computer 110. First, the user selects a point to be identified. Next, the user places that point in one of the images. The sculptor 115 then generates a line through that point in each of the other images. The line (epi-polar line) originates from the imaging source and is projected through a landmark point of image A and onto all other images mapped into the 3D matrix. The display length of the line projected onto the other images through the point of interest is arbitrary, but the length of the line can be constrained by a priori knowledge of the geographic region of the point of interest associated with each image. The projection of that line is displayed in each of the other images as noted. The display in the other images will appear different because the projection of that line, in 3D space, will be viewed from different perspectives in each of the images.
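A minimal sketch of how such an epi-polar line could be produced, assuming each calibrated image has a known 3x4 projection matrix. The fixed depth range stands in for the a priori constraint on the displayed line length mentioned above; it is an assumption for illustration, not the patent's method.

```python
import numpy as np

def epipolar_points(P_a, P_b, x_a, depths=(100.0, 2000.0)):
    """Project the viewing ray through landmark x_a in image A into
    image B, returning two pixel points that span the epi-polar line
    segment in image B.

    P_a, P_b: 3x4 projection matrices of the two calibrated images.
    x_a: (u, v) pixel location of the landmark in image A.
    depths: assumed near/far bounds (in world units) along the ray."""
    # Back-project: recover camera A's centre and the ray direction.
    M, p4 = P_a[:, :3], P_a[:, 3]
    centre = -np.linalg.solve(M, p4)
    ray = np.linalg.solve(M, np.append(x_a, 1.0))
    ray = ray / np.linalg.norm(ray)
    pts = []
    for d in depths:
        X = centre + d * ray               # a 3D point on the ray
        x = P_b @ np.append(X, 1.0)        # its projection in image B
        pts.append(x[:2] / x[2])
    return pts
```

Every 3D point on the ray projects onto the same line in image B, which is why the user can search along it for the corresponding landmark.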
The user now can use the projected line to identify the corresponding landmark location in the other images. By looking along the projected line in each of the images, one can quickly identify where the landmark should be located in that image. Once the point has been defined in two of the images, the point may be automatically defined in all of the other images.

Rather than identifying individual points, however, it is sometimes desirable to outline, or trace, certain anatomic features. The tracing can be done by specifying one or more connected points in a number of the patient images 150. An example of this would be tracing the outline of an eye socket in an x-ray image. This would be important for certain medical applications relating to the eye. This traced information could then be stored with the landmark location information.

By using the calibration frame 140 and the relationship between the various images, an accuracy of 0.1 mm can be achieved with respect to the location of landmarks and the tracing of anatomic parts.

Thus the capture of the patient specific data process 602 has been completed. Next, the patient specific model 160 is generated in the generate patient model 604 process.

Once all the anatomic locations are identified in the sculptor 115, the data for that patient can be exported to the executor 135. This information can then be loaded into the clinician/consultant 125. The clinician/consultant 125 includes a stock model which is to be morphed against the information drawn from the sculptor 115 for a particular patient. In this example, the anatomic locations identified in the sculptor 115 from the patient images 150 are all associated with the calibration frame 140. Thus, exact measurements of anatomic locations relative to each other have been identified. This relative location information is then used to morph the stock model into the patient specific model 160.
This can be done using standard techniques for morphing data. The resulting patient specific model 160 includes all the object information in the original stock model but has been customized with the measurements of a specific patient. Importantly, this allows significant types of manipulations to be performed that have not previously been performed in medical imaging systems. Examples of these processes are now described.

The following describes an example analysis that is performed using the patient specific model 160 and the other patient data. In this example, a patient model is displayed. An example of this is shown in the picture of the clinician/consultant 125 on the left hand side of the display area in Figure 1. In this example, the model is shown as a number of dots in space.

Within the clinician/consultant 125, the user can select an analysis type from the analysis window 127. The analysis can be derived from either the landmarks that have been previously identified in the sculptor 115, from the morphed three dimensional model data in the patient specific model 160, or from measurements taken by the user from within the clinician/consultant 125. An example of the analysis performed is shown in the example analysis 170. This conforms to block 670 of Figure 6. The example analysis 170 illustrates example output from an analysis procedure. In the case of orthodontics, measurements may be taken to perform any number of standard orthodontic analyses such as a Ricketts analysis or a McGrann analysis. Other types of analysis, treatments, etc. are described in the following sections.

As part of the analysis, a user can use the clinician/consultant 125 to rotate and manipulate the patient specific model 160. The patient's face can be mapped onto the generic model from the photographs in the patient images 150.

Other types of analysis, or evaluations, performed in the clinician/consultant 125 are now described.
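The "standard techniques for morphing data" are not specified here. As one hedged illustration, the landmark displacements measured for the patient can be spread over the stock model's vertices with inverse-distance weighting; thin-plate splines would be another common choice. Function and variable names are illustrative assumptions.

```python
import numpy as np

def morph(stock_vertices, stock_landmarks, patient_landmarks, power=2.0):
    """Deform a stock model so its landmark points move onto the measured
    patient landmarks; every other vertex is carried along by
    inverse-distance-weighted interpolation of the landmark displacements.

    stock_vertices: (N, 3) stock model vertices.
    stock_landmarks / patient_landmarks: corresponding (L, 3) points."""
    disp = patient_landmarks - stock_landmarks          # (L, 3) offsets
    out = np.empty_like(stock_vertices)
    for i, v in enumerate(stock_vertices):
        d = np.linalg.norm(stock_landmarks - v, axis=1)
        if d.min() < 1e-9:
            # Vertex coincides with a landmark: move it exactly.
            out[i] = v + disp[d.argmin()]
            continue
        w = 1.0 / d ** power
        out[i] = v + (w[:, None] * disp).sum(axis=0) / w.sum()
    return out
```

Because the weights are normalized, a uniform shift of all landmarks translates the whole model rigidly, while localized landmark offsets deform only the nearby surface.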
For a particular tooth, which is a separate object, one may wish to select that tooth and move it somewhat out of its socket. This could be used, for example, in showing a patient what their teeth would look like if they were straightened. Alternatively, the axis of the tooth for a particular patient may be rotated from its present location to an ideal location. Using the model, by selecting the tooth, one could not only translate the tooth, but rotate it to show it in a different orientation with respect to the other elements in the patient specific model 160. In some embodiments of the invention, the viewing and manipulation of individual objects of the patient specific model 160 is done using the VRX viewer, commercially available from Synthonics Technologies, Inc.

The user is able to hide objects in the patient specific model 160. This would correspond, in the case of a tooth, to extracting the tooth, leaving more room for reorienting other teeth in the jaw.

In the case of an orthodontics application, there are certain standard viewpoints used by orthodontists. These include an inferior view, a superior view, a lateral view, a frontal view, an intra-oral view, and/or an extra-oral view.

Once adjustments have been made to the generic stock model, to reflect the patient specific features, i.e., the generation of the patient specific model 160, a treatment plan can be applied. The patient specific model 160 can be rendered by placing a skin over the model to show the external appearance after the modifications have occurred. An example of a system that would allow this is the QuickLook™ rendering product from Synthonics Technologies, Inc. This permits photorealistic texturing of the model, so the patient can see their actual face after the orthodontic work has been completed.
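Translating and rotating a tooth object, as described above, amounts to applying a rigid transform to that object's vertices. A sketch using Rodrigues' rotation formula; the function name and axis convention are illustrative, not the VRX viewer's API.

```python
import numpy as np

def rotate_about_axis(points, origin, axis, angle_deg):
    """Rotate an object's vertices about an arbitrary axis through
    `origin` (e.g. tipping a tooth about its long axis from its present
    orientation toward an ideal one), via Rodrigues' formula."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    a = np.radians(angle_deg)
    # Skew-symmetric cross-product matrix of the unit axis.
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)
    return (points - origin) @ R.T + origin
```

A translation (moving the tooth out of its socket) is simply `points + offset`; combining the two gives the full rigid manipulation of the selected object.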
When using this invention for various medical applications, one can use the capabilities of the software and the models to display and communicate to patients what will be occurring with respect to their treatment. Patients can visualize the changes that will occur in their mouth as a result of some orthodontic treatment.

Further, by using the techniques described herein to generate the three dimensional model, one achieves such a model with a much lower dosage of radiation than would be required, for example, if such a model were constructed by a CAT (Computer Assisted Tomography) scan.

Further, by constructing a transport file that contains only limited patient specific information, a user need only identify, for example, two hundred landmarks in an orthodontic application, versus the hundreds of thousands or so vertices that would be required for completely defining the patient specific model 160 by itself. As a result, the full patient specific information can be transmitted through a relatively low bandwidth network in a relatively small amount of time. Such information could be transported over the Internet to insurance companies, patients, and specialists. This is significantly different than what is required to transfer full three dimensional models over a network. Thus, measurement data could be taken in the sculptor 115 and sent to a dental lab across the network very quickly and efficiently. Alternatively, information about the patient specific model 160 can be transmitted across the network with the same minimal bandwidth requirements to a dental lab, where a finished bridge could be produced in accordance with the specifications and measurements contained in the patient specific file.

Automated Calibration Target Identification

The following describes an example system for automatically identifying the calibration targets 440.
In this example, BBs will be used as calibration targets; however, the general process can be used for most any type of calibration target. The image of each BB is circular in the patient images 150. This is why the spherical shape was chosen, so its image would be circular, regardless of the viewing angle. For a given camera geometry, one would expect the blob formed by the image of a BB in the film to have a certain size. Specifically, one would expect a BB to appear with a diameter of a certain number of pixels for a particular camera geometry. One can select the correlation patch corresponding substantially to that expected size and shape and then search over the patient image space looking for correlations between the portion of an image underlying the correlation patch and the corresponding pixels for the expected blob pixel set. The points with the highest correlation are likely the locations of the BB. When this has been done for each BB in a particular set of patient images 150, one can identify the centroid of the region in three dimensional space formed by the intersection of the projections of a BB image from different patient images. The centroid location in 3D space is then determined as the center of the BB relative to the calibration frame 140. This information can then be stored and associated with that particular image.

The above process describes a fully automated calibration target identification technique. However, in other embodiments of the invention, the user can drag and drop calibration target identifiers near a calibration target, and the computer 110 (the sculptor 115) can look for a calibration target near where the user dropped the calibration target identifier.

Sculptor Interface Examples

Figure 7 illustrates the sculptor 115 application interface. This is shown as sculptor interface 715.
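The correlation search from the Automated Calibration Target Identification discussion above can be sketched as follows. This is a naive normalized cross-correlation against a disc-shaped patch; the patch radius would come from the expected BB diameter for the camera geometry, and the exhaustive loop is written for clarity rather than speed.

```python
import numpy as np

def find_bb(image, radius):
    """Locate the circular shadow of a BB in a 2D image by sliding a
    disc-shaped correlation patch over it and keeping the best match.

    Returns ((row, col) of the best-matching disc centre, NCC score)."""
    size = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    patch = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    patch -= patch.mean()                       # zero-mean template
    best, best_rc = -np.inf, None
    H, W = image.shape
    for r in range(H - size + 1):
        for c in range(W - size + 1):
            win = image[r:r + size, c:c + size]
            win = win - win.mean()
            denom = np.linalg.norm(win) * np.linalg.norm(patch)
            score = (win * patch).sum() / denom if denom else 0.0
            if score > best:
                best, best_rc = score, (r + radius, c + radius)
    return best_rc, best
```

Running this on each patient image yields the 2D BB centres; intersecting their back-projected rays (as in the triangulation above) then gives the 3D centroid relative to the calibration frame.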
Sculptor interface 715 includes a sculptor toolbar 720 for performing various functions in the sculptor 115. In this example figure, a patient image 750 is being displayed. The patient image 750 includes a view of the patient 300 and a view of the calibration frame 140.

Figure 8 illustrates the placement of an unaligned calibration template 810 in the patient image area. The unaligned calibration template 810 will be aligned over the next few figures to show how the sculptor 115 can be used to determine a calibration reference frame for a patient image.

Figure 9 shows that a number of calibration target selectors 910 have been placed over the calibration targets 440 in the patient image 750. The calibration target selector 910 is dragged and dropped onto a calibration target in the image of the calibration frame 140.

Figure 10 illustrates the partially aligned calibration template 1010. The partially aligned calibration template 1010 has been aligned using the calibration target selectors 910.

Figure 11 illustrates the aligned calibration template 1110. The aligned calibration template 1110 now provides the sculptor 115 with a reference frame for the patient image 750.

Figure 12 illustrates the sculptor interface 715 having a second patient image 1250 being displayed. The calibration frame 140 can be seen in both of the images. Additionally, the aligned calibration template 1110 can be seen. A similar alignment process was performed to align the calibration template in the patient image 1250.

Figure 13 illustrates the placement of a coordinate reference origin 1310 to define a reference plane for use by the user. The reference planes help in the identification of anatomic locations.

Figure 14 illustrates creating a trace using the sculptor 115. In this example, a trace name 1410 is displayed in the sculptor toolbar 720. In this example, the trace name 1410 is the mid-line soft tissue trace.
A user has traced the mid-line soft tissue trace 1410 in at least two of the patient images 150. The mid-line soft tissue trace 1410 is also shown in the patient image 1450. This may be the preferred image in which to trace the mid-line soft tissue trace 1410. The reason for this is that the profile of the patient's soft tissue is most easily seen in this image. The mid-line soft tissue trace 1410 can also be defined in the patient image 750. The sculptor 115 then propagates this trace to the patient image 1250.

Figure 15 illustrates a user interface enhancement in the sculptor 115. In particular, the sculptor 115 allows the user to perform a localized histogram normalization in a patient image. For example, localized enhancement 1510 is performed in the patient image 1250. This allows the user to enhance portions of the various images.

Figure 16 illustrates a trace and landmark view 1620 where the patient image 1250 is removed from the display. This further allows the user to determine where traces and landmarks are being positioned. In particular, the example trace 1610 is shown. By removing the patient image 1250, the example trace 1610 can be more easily seen.

Figure 17 illustrates a rotated view 1720 of the example trace 1610. The user can rotate the traces, or otherwise manipulate the traces. In one embodiment of the invention, the viewer tool used in the clinician/consultant 125 is also used for rotating and displaying the traces.

Figure 18 illustrates a number of landmarks and traces being displayed in multiple images. In particular, landmarks and traces can be seen in the patient image 1450, the patient image 750, the patient image 1850, and the patient image 1250. This allows some basic measurement and analysis features of the sculptor 115 to be used.

Figure 19 illustrates a measurement between a bone point and a soft tissue point.
In this example, the distance measured is the distance from the post nasal spine landmark to the mid-line soft tissue trace. The measurement line 1910 illustrates where the measurement is taking place. The measurement information 1920 shows how many millimeters long the measurement line 1910 is. This calculation can be made because the patient image 1250 has been calibrated with the calibration frame 140.

Figure 20 illustrates how epi-polar lines 2020 can be used in the placement and location of landmarks and traces. In this particular example, a post nasal spine landmark is being shown using epi-polar lines 2020. The post nasal spine landmark 2010 was first placed in the patient image 1250. This caused epi-polar lines to be shown in the other two images. Next, the post nasal spine landmark 2010 was identified on one of the epi-polar lines in the patient image 1850. This automatically placed the post nasal spine landmark marker 2010 into the last patient image 2050.

Figures 21-24 illustrate the creation of a trace. In Figure 21, a right mandible trace 2110 is shown. This trace was performed by a sequence of click and drag actions. The right mandible trace 2110 includes a number of trace points, such as a trace point 2120. These trace points are connected in a line to form the trace. Now the user will make the same trace in a different patient image.

In Figure 22, a portion of the right mandible trace 2110 is being created in the patient image 1850. The trace line 2200 corresponds to the right mandible trace 2110. The trace line 2200 has, or will have, the same number of points as the right mandible trace 2110. The sculptor 115 ensures this. When the user places a trace point down, such as the trace point 2210, the trace line 2200 extends from that point and shows where the next trace point 2220 would be placed along the trace line 2200.
The user can then manipulate the location of this next trace point 2220 by dragging the trace line 2200 into the appropriate location. When the user has placed the next trace point 2220 in the appropriate location, the user can then click the mouse to position the next trace point 2220 permanently. Then another trace point is displayed, until all of the trace points in the right mandible trace 2110 have been placed. This provides a particularly simple method of propagating a trace from one patient image to the next patient image.

Figure 23 illustrates the placement of the trace point 2220 after the user has clicked on the mouse. Once the right mandible trace 2110 has been completely laid down (2D trace) in the patient image 1850, the epi-polar lines of points on the right mandible trace can be projected onto any other calibrated patient image. These epi-polar lines constrain the identification of the corresponding landmark to those lines. Once the correspondence has been completed and the 3D co-ordinates of the points on the trace have been computed (triangulated), then the right mandible trace 2110 can automatically be propagated to another calibrated patient image. Thus, by defining the right mandible trace 2110 in two images, the right mandible trace 2110 has been completely defined in the space corresponding to the calibration frame 140. Therefore, wherever the calibration frame 140 is identified, the corresponding location of the right mandible trace 2110 can be determined by the sculptor 115.

Clinician/Consultant Interface Examples

Figure 25 through Figure 40 illustrate various user interface features of the clinician/consultant 125. These figures illustrate how a user can access the patient specific model data, create and manipulate the patient specific model 160, and perform analysis, treatment and the like on the model and the model data.
Figure 25 illustrates an .SCL file load window 2510 that can be used to load a .SCL file. In this example, the .SCL file is the patient specific file 2520 that was generated in Figures 7 through 24.

Figure 26 illustrates the morphing interface 2600 that can be part of the clinician/consultant 125. Here the patient image 750 is shown with a partially morphed stock model 2610. After the morphing is complete, the patient specific model 160 is created. The morphing interface 2600 need not be used by the medical practitioner, but it does help illustrate the morphing process.

Figure 27 illustrates a morphed and texture mapped patient specific model view 2710. Here the patient specific model view has been rotated. Note that the photo image texture mapped onto the model is the patient image 750.

Figure 28 illustrates a wireframe view 2810 of the patient specific model 160. The morphing interface 2600 allows the user to rotate the view of the patient specific model 160 and to change the way in which the model is displayed.

Figure 29 illustrates a dot contour view 2910 of the patient specific model 160. The dot contour view 2910 shows the points that are used to define the patient specific model 160. During the morphing process, the points in the stock model are repositioned, according to the patient model data, to create the patient specific model 160.

Figure 30 illustrates the clinician interface 3010. The clinician interface 3010 is the interface that would normally be used by the medical practitioner when performing analysis, developing a treatment, or presenting information to the patient. In this example, the clinician interface 3010 includes a patient specific model flesh view 3010.

Figure 31 illustrates the patient specific model skull view 160 having a number of landmarks and control points showing.

Figure 32 illustrates an example analysis that has been performed.
Here a Steiner analysis has been performed on some of the anatomical locations identified in the sculptor 115. The analysis window 127 shows the results of the analysis. The patient image 1250 and the dot view of the patient specific model 3210 show the analysis lines 3230. Normally, the medical practitioner would have had to draw these lines on the x-ray image, and then measure those lines. With the clinician/consultant 125, these processes are automated.

Figure 33 illustrates a partially planned treatment where an arch form template 3320 has been put into the jaw object dot display 3310. Importantly, the jaw object can be selected and manipulated separately from the rest of the patient specific model 160. Additionally, the medical practitioner can place an arch form template 3320 and perform simulations of how the teeth will be affected by a particular treatment. Note that the user interface now includes a tool for defining the wire, pivot points, and alignment.

Figure 34 illustrates a jaw object solid display 3410 where a particular tooth has been selected (shown as tooth selected display 3420). Figure 35 illustrates a similar view, except the jaw object dot display 3510 is shown instead.

Figure 36 illustrates a view where the user has partially extracted and tilted the tooth. This could be used to show a patient what an extraction would look like. Figure 37 illustrates the top view of this configuration.

Figure 38 illustrates the jaw object solid display 3410 where the tooth has been extracted.

Figure 39 illustrates another feature of the clinician/consultant 125 user interface where slice planes have been placed through the jaw object. The jaw object display 3920 is used for positioning the slice planes (e.g., slice plane 3910). The jaw object display 3930 shows the results of the slice plane 3910.
The clinician/consultant 125 user interface allows the user to position and control multiple slice planes through the object.

Figure 40 illustrates a partially transparent slice plane 4010 and a partially transparent slice plane 4020 positioned through the jaw object display 3920. The jaw object display with transparent slices 4030 shows the result of the slice planes.

Appendix A

The following table shows the landmarks and traces used in the creation of the patient specific model 160. Other embodiments of the invention can use other landmarks and/or traces. The following list has been chosen as they represent commonly referred to anatomic landmarks.

// ===== landmarks =====
// abbr      Description
// ---------------------
LB ME      Menton
LB GN      Gnathion
LB PG      Pogonion
LB B       B Point
LB ID      Infradentale
LB LIE     Lower Incisor Incisal Edge
LB ADP     Anterior Downs Point
LB UIE     Upper Incisor Incisal Edge
LB UIL     Labial of the Upper Incisor
LB SD      Supradentale
LB A       A Point
LB ANS     Anterior Nasal Spine
LB UIA     Upper Incisor Apex
LB UIB     Upper Incisor Lingual Bony Contact Point
LB LIB     Lower Incisor Lingual Bony Contact Point
LB LIA     Lower Incisor Apex
LB SYM     Lingual Symphyseal Point
LB PMC     Premolar Mesial Contact Point
LB PDC     Premolar Distal Contact Point
LB LMR     Lower Molar Root Apex
LB LMJ     Lower Molar Mesial CEJ
LB LMC     Lower Mesial Contact
LB UMT     Upper Mesial Cusp Tip
LB PDP     Posterior Downs Point
LB LMIT    Lower Molar Mesial Cusp Tip
LB UMJ     Upper Molar Mesial CEJ
LB UMR     Upper Molar Root Apex
LB UDT     Upper Molar Distal Cusp Tip
LB FPP     Functional Occlusal Plane Point
LB LAB     L Ant Border Ramus
LB RAB     R Ant Border Ramus
LB LGO     L Gonion
LB RGO     R Gonion
LB GOI     Gonial Intersection
LB LPB     L Post Border Ramus
LB RPB     R Post Border Ramus
LB PSE     Posterior Skull External
LB PSI     Posterior Skull Internal
LB OCP     Occipital Protuberance
LB I       Inion
LB OP      Opisthion
LB BP      Bolton Point
LB BA      Basion
LB AR      Articulare Posterior
LB AA      Articulare Anterior
LB CO      Condylion
LB LCO     L Condylion
LB RCO     R Condylion
LB PO      Porion
LB LAPO    L Anatomic Porion
LB RAPO    R Anatomic Porion
LB S       Sella Turcica
LB SE      Ethmoid Registration Point
LB GBI     Glabella Internal
LB GB      Glabella
LB FSS     Frontal Sinus Superior
LB LSUP    L Supraorbitale
LB RSUP    R Supraorbitale
LB FSI     Frontal Sinus Inferior
LB FMN     Frontomaxillary Nasal Suture
LB N       Nasion
LB NB      Nasal Bone
LB LLO     L Lateral Orbit
LB RLO     R Lateral Orbit
LB LOR     L Orbitale
LB ROR     R Orbitale
LB IZ      Inferior Zygoma
LB PNS     Post Nasal Spine
LB PTMI    Pterygomaxillary Fissure Inferior
LB PTMS    Pterygomaxillary Fissure Superior
LB LCP     L Coronoid
LB RCP     R Coronoid
LB 3VS     3rd Vertebra Superior
LB 3VA     3rd Vertebra Anterior
LB 3VI     3rd Vertebra Inferior
LB 3VP     3rd Vertebra Posterior
LB 3VC     3rd Vertebra Canal
LB 3VSP    3rd Vertebra Spine
LB 4VS     4th Vertebra Superior
LB 4VA     4th Vertebra Anterior
LB 4VI     4th Vertebra Inferior
LB 4VP     4th Vertebra Posterior
LB 4VC     4th Vertebra Canal
LB 4VSP    4th Vertebra Spine
LB COL     Columella
LB LA      L Articulare
LB RA      R Articulare
LB LM      L Mastoid
LB RM      R Mastoid
LB LAGO    L Antegonion
LB RAGO    R Antegonion
LB LR1     L R1
LB RR1     R R1
LB LR2     L R2
LB RR2     R R2
LB LR3     L R3
LB RR3     R R3
LS GL'     Soft Tissue Glabella
LS Na'     Soft Tissue Nasion
LS No'     Nose Tip
LS Sn'     Subnasale
LS A'      Soft Tissue A Point
LS UL'     Upper Lip
LS LL'     Lower Lip
LS B'      Soft Tissue B Point
LS Pog'    Soft Tissue Pogonion
LS Gn'     Soft Tissue Gnathion
LS Me'     Soft Tissue Menton
LS URh'    Upper Rhinion
LS LRh'    Lower Rhinion
LS UEm'    Upper Embrasure
LS LEm'    Lower Embrasure
LS LEyM'   L Eye Medial
LS LEyL'   L Eye Lateral
LS REyM'   R Eye Medial
LS REyL'   R Eye Lateral
LS LLc'    L Corner Lip
LS RLc'    R Corner Lip
LS LNp'    L Posterior Nares
LS RNp'    R Posterior Nares
LS LNIC'   L Nasolabial Crease
LS RNIC'   R Nasolabial Crease
LS LTr'    L Tragus
LS RTr'    R Tragus
LS TH'     Top of Head

// ============= Teeth =============
// NOTE: the numbers refer to the tooth quadrant and which tooth
// (teeth #s 11-18, 21-28, 31-38, 41-48). The abbreviations represent
// different points on the teeth; the letters are as follows. For the
// most part the teeth points represent morph and control points.
// Control points are points in the patient specific model 160 that can
// be used to manipulate specific objects.
// d = distal
// m = mesial
// l = lingual
// b = buccal
// r = root tip
// i = incisal edge
// c = cervical
//
// abbr       landmark   Description
// ---------------------------------
LT 11m21m llm21m Interproximal contact point between teeth #s 11 and 21 LT llr 11r Tooth #11 root tip LT llmi llmi Tooth #11 mesial incisal edge LT 1ldi lldi Tooth #11 distal incisal edge LT l1c llc Tooth #11 cervical LT lldl2m lldl2m Interproximal contact point between teeth #s 11 and 12 LT 12r 12r Tooth #12 root tip LT 12di 12di Tooth #12 distal incisal edge LT 12mi 12mi Tooth #12 mesial incisal edge LT 12c 12c Tooth #12 cervical LT 12dl3m 12d13m Interproximal contact point between teeth #s 12 and 13 LT 13r 13r Tooth #13 root tip LT 13i 13i Tooth #13 incisal tip LT 13c 13c Tooth #13 cervical LT 13dl4m 13d14m Interproximal contact point between teeth #s 13 and 14 LT 14r 14r Tooth #14 root tip LT 141 141 Tooth #14 lingual cusp tip LT 14b 14b Tooth #14 buccal cusp tip LT 14c 14c Tooth #14 cervical LT 14dl5m 14d15m Interproximal contact point between teeth #s 14 and 15 LT 15r 15r Tooth #15 root tip LT 151 151 Tooth # 15 lingual cusp tip LT 15b 15b Tooth # 15 buccal cusp tip LT 15c 15c Tooth # 15 cervical LT 15dl6m 15dl6m Interproximal contact point between teeth #s 15 and 16 LT 16r 16r Tooth #16 root tip LT 16ml 16ml Tooth #16 mesial lingual cusp tip LT 16dl 16dl Tooth #16 distal lingual cusp tip LT 16mb 16mb Tooth #16 mesial buccal cusp tip LT 16db 16db Tooth #16 distal buccal cusp tip LT 16c 16c Tooth #16 cervical LT 16dl7m 16dl7m Interproximal contact point between teeth #s 16 and 17 LT 17r 17r Tooth #17 root tip LT 17ml 17ml Tooth #17 mesial lingual cusp tip LT 17dl 17dl Tooth #17 distal lingual cusp tip LT 17mb 17mb Tooth #17 mesial buccal cusp tip LT 17c 17c Tooth #17 cervical -39- WO 99/59106 PCT/US99/10566 LT 17db 17db Tooth #17 distal buccal cusp tip LT 17dl8m 17d18m Interproximal contact point between teeth #s 17 and 18 LT 18r 18r Tooth #18 root tip LT 18dl 18dl Tooth #18 distal lingual cusp tip LT 18ml 18ml Tooth #18 mesial lingual cusp tip LT 18mb 18mb Tooth #18 mesial buccal cusp tip LT 18db 18db Tooth #18 distal buccal cusp tip LT 18c 18c Tooth #18 
cervical LT 21r 21r Tooth #21 root tip LT 21di 21di Tooth #21 distal incisal edge LT 21mi 21mi Tooth #21 mesial incisal edge LT 21c 21c Tooth #21 cervical LT 21d22m 21d22m Interproximal contact point between teeth #s 21 and 22 LT 22r 22r Tooth #22 root tip LT 22di 22di Tooth #22 distal incisal edge LT 22mi 22mi Tooth #22 mesial incisal edge LT 22c 22c Tooth #22 cervical LT 22d23m 22d23m Interproximal contact point between teeth #s 22 and 23 LT 23r 23r Tooth #23 root tip LT 23i 23i Tooth #23 incisal tip LT 23c 23c Tooth #23 cervical LT 23d24m 23d24m Interproximal contact point between teeth #s 23 and 24 LT 24r 24r Tooth #24 root tip LT 241 241 Tooth #24 lingual cusp tip LT 24b 24b Tooth #24 buccal cusp tip LT 24c 24c Tooth #24 cervical LT 24d25m 24d25m Interproximal contact point between teeth #s 24 and 25 LT 25 25r Tooth #25 root tip LT 251 251 Tooth #25 lingual cusp tip LT 25b 25b Tooth #25 buccal cusp tip LT 25c 25c Tooth #25 cervical LT 25d26m 25d26m Interproximal contact point between teeth #s 25 and 26 LT 26r 26r Tooth #26 root tip LT 26ml 26ml Tooth #26 mesial lingual cusp tip LT 26dl 26dl Tooth #26 distal lingual cusp tip LT -26db 26db Tooth #26 distal buccal cusp tip LT 26mb 26mb Tooth #26 mesial buccal cusp tip LT 26c 26c Tooth #26 cervical LT 26d27m 26d27m Interproximal contact point between teeth #s 26 and 27 LT 27r 27r Tooth #27 root tip LT 27db 27db Tooth #27 distal buccal cusp tip LT 27mb 27mb Tooth #27 mesial buccal cusp tip LT 27ml 27ml Tooth #27 mesial lingual cusp tip LT- 27dl 27d1 Tooth #27 distal lingual cusp tip LT 27c 27c Tooth #27 cervical LT- 27d28m 27d28m Interproximal contact point between teeth #s 27 and 28 LT 28r 28r Tooth #28 root tip LT 28ml 28ml Tooth #28 mesial lingual cusp tip LT- 28dl 28d1 Tooth #28 distal lingual cusp tip LT 28db 28db Tooth #28 distal buccal cusp tip LT 28mb 28mb Tooth #28 mesial buccal cusp tip LT 28c 28c Tooth #28 cervical LT 31m41m 31m41m Interproximal contact point between teeth #s 31 and 41 LT 31r 31r Tooth 
#31 root tip LT 31di 31di Tooth #31 distal incisal edge LT 31mi 31mi Tooth #31 mesial incisal edge LT 31c 31c Tooth #31 cervical LT 31d32m 31d32m Interproximal contact point between teeth #s 31 and 32 LT 32r 32r Tooth #32 root tip -40- WO 99/59106 PCT/US99/10566 LT 32di 32di Tooth #32 distal incisal edge LT 32mi 32mi Tooth #32 mesial incisal edge LT 32c 32c Tooth #32 cervical LT 32d33m 32d33m Interproximal contact point between teeth #s 32 and 33 LT 33r 33r Tooth #33 root tip LT 33i 33i Tooth #33 incisal tip LT 33c 33c Tooth #33 cervical LT 33d34m 33d34m Interproximal contact point between teeth #s 33 and 34 LT 34r 34r Tooth #34 root tip LT 34b 34b Tooth #34 buccal cusp tip LT 341 341 Tooth #34 lingual cusp tip LT 34c 34c Tooth #34 cervical LT 34d35m 34d35m Interproximal contact point between teeth #s 34 and 35 LT 35r 35r Tooth #35 root tip LT 351 351 Tooth #35 lingual cusp tip LT 35b 35b Tooth #35 buccal cusp tip LT 35c 35c Tooth #35 cervical LT 35d36m 35d36m Interproximal contact point between teeth #s 35 and 36 LT 36r 36r Tooth #36 root tip LT 36ml 36ml Tooth #36 mesial lingual cusp tip LT 36mb 36mb Tooth #36 mesial buccal cusp tip LT 36db 36db Tooth #36 distal buccal cusp tip LT 36c 36c Tooth #36 cervical LT 36dl 36dl Tooth #36 distal lingual cusp tip LT 36d37m 36d37m Interproximal contact point between teeth #s 36 and 37 LT 37r 37r Tooth #37 root tip LT 37dl 37dl Tooth #37 distal lingual cusp tip LT 37db 37db Tooth #37 distal buccal cusp tip LT 37mb 37mb Tooth #37 mesial buccal cusp tip LT 37ml 37ml Tooth #37 mesial lingual cusp tip LT 37c 37c Tooth #37 cervical LT 37d38m 37d38m Interproximal contact point between teeth #s 37 and 38 LT 38r 38r Tooth #38 root tip LT 38db 38db Tooth #38 distal buccal cusp tip LT 38mb 38mb Tooth #38 mesial buccal cusp tip LT 38dl 38dl Tooth #38 distal lingual cusp tip LT 38ml 38ml Tooth #38 mesial lingual cusp tip LT 38c 38c Tooth #38 cervical LT 41r 41r Tooth #41 root tip LT 41di 41di Tooth #41 distal incisal edge LT 41mi 41mi 
Tooth #41 mesial incisal edge LT 41c 41c Tooth #41 cervical LT 41d42m 41d42m Interproximal contact point between teeth #s 41 and 42 LT 42r 42r Tooth #42 root tip LT 42di 42di Tooth #42 distal incisal edge LT 42mi 42mi Tooth #42 mesial incisal edge LT 42c 42c Tooth #42 cervical LT 42d43m 42d43m Interproximal contact point between teeth #s 42 and 43 LT 43r 43r Tooth #43 root tip LT 43d 43d Tooth #43 distal interproximal contact LT 43c 43c Tooth #43 cervical LT 44m 44m Tooth #44 mesial surface interproximal contact LT 44d 44d Tooth #44 distal surface interproximal contact LT 44r 44r Tooth #44 root tip LT -44b 44b Tooth #44 buccal cusp tip LT- 441 441 Tooth #44 lingual cusp tip LT 44c 44c Tooth #44 cervical LT-45m 45m Tooth #45 mesial surface interproximal contact -41- WO 99/59106 PCT/US99/10566 LT 45d 45d Tooth #45 distal surface interproximal contact LT 45r 45r Tooth #45 root tip LT 45c 45c Tooth #45 cervical LT 45b 45b Tooth #45 buccal cusp tip LT 451 451 Tooth #45 lingual cusp tip LT 46m 46m Tooth #46 mesial surface interproximal contact LT 46r 46r Tooth #46 root tip LT 46mb 46mb Tooth #46 mesial buccal cusp tip LT 46db 46db Tooth #46 distal buccal cusp tip LT 46dl 46d1 Tooth #46 distal lingual cusp tip LT 46c 46c Tooth #46 cervical LT 46ml 46ml Tooth #46 mesial lingual cusp tip LT 46d47m 46d47m Interproximal contact point between teeth #s 46 and 47 LT 47r 47r Tooth #47 root tip LT 47mb 47mb Tooth #47 mesial buccal cusp tip LT 47db 47db Tooth #47 distal buccal cusp tip LT 47ml 47ml Tooth #47 mesial lingual cusp tip LT 47dl 47dl Tooth #47 distal lingual cusp tip LT 47c 47c Tooth #47 cervical LT 47d48m 47d48m Interproximal contact point between teeth #s 47 and 48 LT 48r 48r Tooth #48 root tip LT 48mb 48mb Tooth #48 mesial buccal cusp tip LT 48db 48db Tooth #48 distal buccal cusp tip LT 48ml 48ml Tooth #48 mesial lingual cusp tip LT 48dl 48d1 Tooth #48 distal lingual cusp tip LT-48c 48c Tooth #48 cervical tracedata S0, SOFT, "TSMIDLINE", "Midline Soft Tissue Trace"), 
{  1, SOFT, "TS_REBROW",  "R Eyebrow Trace" },
{  2, SOFT, "TS_LEBROW",  "L Eyebrow Trace" },
{  3, SOFT, "TS_REAR",    "R Ear Trace" },
{  4, SOFT, "TS_LEAR",    "L Ear Trace" },
{  5, SOFT, "TS_REYE",    "R Eye Trace" },
{  6, SOFT, "TS_LEYE",    "L Eye Trace" },
{  7, SOFT, "TS_RLIP",    "R Lip Trace" },
{  8, SOFT, "TS_LLIP",    "L Lip Trace" },
{  9, SOFT, "TS_RN",      "R Nares Trace" },
{ 10, SOFT, "TS_LN",      "L Nares Trace" },
{ 11, SOFT, "TS_RTH",     "R Top of Head Trace" },
{ 12, SOFT, "TS_LTH",     "L Top of Head Trace" },
{ 13, SOFT, "TS_FH",      "Front of Head Trace" },
{ 14, BONE, "TB_RLO",     "R Orbit Trace" },
{ 15, BONE, "TB_LLO",     "L Orbit Trace" },
{ 16, BONE, "TB_RMAND",   "R Mandible Trace" },
{ 17, BONE, "TB_LMAND",   "L Mandible Trace" },
{ 18, BONE, "TB_RATop",   "R Arch Top Trace" },
{ 19, BONE, "TB_RABot",   "R Arch Bottom Trace" },
{ 20, BONE, "TB_LATop",   "L Arch Top Trace" },
{ 21, BONE, "TB_LABot",   "L Arch Bottom Trace" },
{ 22, BONE, "TB_MIDLINE", "Midline Bone Trace" },

There is a single trace for the occlusal table of each of the 32 teeth. In addition, there is one trace for both dental arches that extends along the central groove and incisal edges of the teeth.

{ -1, MAX, "\0", "\0" },

Appendix B

The following presents cephalometric analyses that can be performed using the clinician/consultant 125.

Steiner Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
SNA: 82
SNB: 80
ANB: 2
SND: 76
U1-NA: 22 / 4
L1-NB: 25 / 4
Po-NB
Po to L1-NB
U1-L1: 131
Occl-SN: 14
GoGn-SN: 32

Tweed Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
FMA: 22
IMPA: 86
FMIA: 62
SNA: 82
SNB: 80
ANB: 2
Wits Appraisal ACBO: 1
Occlusal Plane to Frankfurt Plane: 8
Z Angle: 76
Gonion to PC: 64
Menton-ANS: 67

McGann Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Palatal to Mandibular Plane: 23
Mandibular Plane to Frankfurt: 25
Y Axis: 67
SNA: 82
Mandible to Cranium: 3
SNB: 80
ANB: 2
Wits Appraisal ACBO: 1

McNamara Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Interincisal Angle: 130
IMPA: 86
Lower 1 to Nasion-B: 4
B1 to A-Po Plane: 75
A1 to McNamara Line: 4
Nasolabial Angle: 115
Lower Lip to Esthetic Plane: -2
Condylion-Gnathion: 115
Condylion-A Point: 91
Menton-ANS: 67

Jarabak Skeletal Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Saddle Angle N-S-Ar: 123 (±3)
Articular Angle S-Ar-Go: 143 (±6)
Gonial Angle Ar-Go-Me: 130 (±7)
Sum Total: 396
Bisecting Gonial Angle, Upper (N-Go): 52-55
Bisecting Gonial Angle, Lower (N-Go): 70-75
Anterior Cranial Base Length: 71 (±3)
Posterior Cranial Base Length: 32 (±3)
Ramus Height Ar-Go: 44 (±5)
Body Length Go-Me: 71 (±5)
Mandibular Body to Anterior Cranial Base Ratio: 1:1
SNA (Jarabak): 78
SNB: 76-75
ANB: 2
SN-GoGn Angle: 32
Facial Depth: N-Go
Facial Length on Y Axis
Y Axis to SN Angle: 64-68
Posterior Facial Height: S-Go
Anterior Facial Height: N-Me
Ratio of Posterior Facial Height to Anterior Facial Height (%)
Facial Plane Angle Sn-Me: 82

Jarabak Dental Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Occlusal Plane to GoMe Angle
Interincisal Angle 1-1: 133
L1 to GoGn Angle: 90 (±3)
U1-SN Angle: 102 (±2)
U1-Facial Plane (NPo): 5 (±2)
L1 to Facial Plane (NPo): 2 (±2)
Facial Esthetic Upper Lip: -1 (-4)
Facial Esthetic Lower Lip: 0 (+2)

Clark Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Mandibular Arc: 27.4
Cranial Base Angle: 27.8
Facial Axis Angle: 27.6
Mandibular Plane to Frankfurt: 25.2
Cranio-Mandibular Angle: 25.2
Condylar Axis Angle: 29.9
Upper Incisor Angle: 23.1
Lower Incisor Angle: 28.8
Maxillary Plane Angle: -0.4
Nasal Angle: 7.1
Occlusal Plane to Frankfurt: 21.1
Convexity: 1.9
B1 to A-Po Plane: 1.0
A6 Molar Position to PTV: 14.5
Lower Lip to Esthetic Plane: -1.6

Downs Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Facial Angle: 88.1
Mandibular Plane: 24.4
Y-Axis: 59.4
Angle of Convexity: 0.1
ANB: 2
Interincisal Angle: 130
Md 1 to APo: 22 / 1
Md 1 to Mand. Plane: 90
Mx 1 to APo: 3.5
Nasion-ANS: 50
Nasion-Menton: 111.2
Facial Height Ratio

Ricketts Lateral Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Interincisal Angle: 130
Chin in Space
Facial Axis (1): 90 ±3
Facial Angle (Depth) (2): 87 ±3
Mandibular Plane (3): 26 ±4
Facial Taper (4): 68 ±3
Lower Facial Height (5): 47 ±4
Mandibular Arc (6): 26 ±4
Convexity:
Convexity at A point (7): 2 ±2
Total Facial Height: 60
Teeth:
Lower incisor to APo (8): 1 ±2
Upper molar to PtV (10): age + 3, ±4
Mandibular incisor inclination (9): 22 ±4
Lower lip to E Plane (11): -2 ±2

Ricketts Frontal Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Intermolar Width (B6-B6): 54
Denture Midline: 0
Max-Mand Width Left: -10.2
Max-Mand Width Right: -10.2
Denture to Jaw Midline: 0
Nasal Width: 26.7
Maxillary Width (J-J): 61.8
Mandibular Width (AG-GA): 76

Notes:
(1) BA-CC-Po
(2) FH-N-Po
(3) NMP-FH
(5) ANS-Xi to Xi to PM (PM = point on the anterior border of the symphysis, between Point B and Pogonion, where the curvature changes from concave to convex; Xi = constructed point)
(6) Xi-PM plane to Xi-DC (DC = a point selected in the center of the neck of the condyle where the Basion-Nasion plane crosses it; Basion (BA) = most inferior posterior point of the occipital bone)
(7) A-Po plane to N-Po plane
(8) Tip of lower incisor to APo
(9) APo to lower incisal angle (incisal angle formed along the vertical long axis of the tooth)
(11) E Plane: soft tissue tip of nose to soft tissue Pogonion; lip is the lower lip

Sassouni Analysis
Landmarks / Mean Angle (degrees) / Mean Measurement (mm)
Palatal Plane Length: Not Established
Pogonion to ANS Arc: Not Established
B point to A Arc: Not Established
(A1) to ANS Arc: Not Established
ULA Upper Lip: Not Established
Lower Incisor to Mand. Plane: 86
Upper Part of Gonial Angle: 53
Lower Part of Gonial Angle: 72
Palatal Plane to Upper Incisor: 115

Appendix C

The following describes the tooth labeling conventions used in one embodiment of the invention.

Most General Dentists in the US use this numbering convention, and Orthodontists use it when communicating with the General Dentist (GP) about extractions, etc. The view is as if you are looking directly at the patient's face (your left is the patient's right; your right is the patient's left):

Patient's Maxillary Right Quadrant | Patient's Maxillary Left Quadrant
Patient's Mandibular Right Quadrant | Patient's Mandibular Left Quadrant
(the line represents the patient's midline)

1-32 Adult Teeth USA (Permanent):
 1  2  3  4  5  6  7  8 |  9 10 11 12 13 14 15 16
32 31 30 29 28 27 26 25 | 24 23 22 21 20 19 18 17

Permanent tooth name / Number System 1-32
Upper Right Quadrant (1-8):
Maxillary Right Third Molar (Wisdom): 1
Maxillary Right Second Molar (12 Year): 2
Maxillary Right First Molar (6 Year): 3
Maxillary Right Second Bicuspid (2nd Premolar): 4
Maxillary Right First Bicuspid (1st Premolar): 5
Maxillary Right Cuspid (Canine): 6
Maxillary Right Lateral Incisor: 7
Maxillary Right Central Incisor: 8
Upper Left Quadrant (9-16):
Maxillary Left Central Incisor: 9
Maxillary Left Lateral Incisor: 10
Maxillary Left Cuspid (Canine): 11
Maxillary Left First Bicuspid (1st Premolar): 12
Maxillary Left Second Bicuspid (2nd Premolar): 13
Maxillary Left First Molar (6 Year): 14
Maxillary Left Second Molar (12 Year): 15
Maxillary Left Third Molar (Wisdom): 16

Mandibular Left
Third Molar (Wisdom) 17 ower Left Mandibular Left Second Molar (12 Year) 18 Quadrant Mandibular Left First Molar (6 Year) 19 17 -24 Mandibular Left Second Bicuspid (2nd Premolar) 20 Mandibular Left First Bicuspid (1st Premolar) 21 Mandibular Left Cuspid (Canine) 22 Mandibular Left Lateral Incisor 23 Mandibular Left Central Incisor 24 Mandibular Right Central Incisor 25 owerRg Mandibular Right Lateral Incisor 26 Q uadran t Mandibular Right Cuspid (Canine) 27 25 - 32 Mandibular Right First Bicuspid (1st Premolar) 28 Mandibular Right Second Bicuspid (2nd Premolar) 29 Mandibular Right First Molar (6 Year) 30 Mandibular Right Second Molar (12 Year) 31 Mandibular Right Third Molar (Wisdom) 32 a-t Deciduous Teeth USA a b c d e f g hi j (Primary) tsr q p on mlk Deciduous Tooth Name Alpha System a-t Maxillary Right Deciduous Second Molar (2 Year) a Maxillary Right Deciduous First Molar b Maxillary Right Deciduous Cuspid (Canine) c Maxillary Right Deciduous Lateral Incisor d Maxillary Right Deciduous Central Incisor e Maxillary Left Deciduous Central Incisor f Maxillary Left Deciduous Lateral Incisor g Maxillary Left Deciduous Cuspid (Canine) h Maxillary Left Deciduous First Molar I Maxillary Left Deciduous Second Molar (2 Year) j Mandibular Left Deciduous Second Molar (2 Year) k Mandibular Left Deciduous First Molar I Mandibular Left Deciduous Cuspid (Canine) m Mandibular Left Deciduous Lateral Incisor n Mandibular Left Deciduous Central Incisor o Mandibular Right Deciduous Central Incisor p Mandibular Right Deciduous Lateral Incisor q Mandibular Right Deciduous Cuspid (Canine) r Mandibular Right Deciduous First Molar s Mandibular Right Deciduous Second Molar (2 Year) t - 49 - WO 99/59106 PCT/US99/10566 Other Notations can be used for Missing, Supernumerary (extra) , pontics, etc. used after the Number. 
S or s for Supernumary P or p for pontic X or x for missing I or i for Implant d for deciduous Example : 7s = supernumerary (extra) Maxillary Right Lateral 4X = # 4 missing, missing Max. Right 2 nd Bicuspid 34p 5p 6 = bridge from # 3 - # 6, (#4 & #5 are pontics) Most Orthodontists in the USA use the following # convention: Patient's Maxillary Right Quadrant Patient's Maxillary Left Quadrant Patient's Mandibular Right Quadrant Patient's Mandibular Left Quadrant This mne represents the patient's midline 1 -8 in each quadrant - Permanent Teeth USA starting w/ 1 in the middle, therefore ALL l's are central incisors, ALL 4's are 1 st bicuspids, ALL 8's are 3 rd molars (wisdom teeth), etc. 87654321 12345678 87654321 12345678 Permanent tooth name Number System 1-8 in each Quadrant Maxillary Right Third molar (Wisdom) 8 Maxillary Right Second molar (12 Year) 7 Maxillary Right First Molar (6 Year) 6 Maxillary Right Second Bicuspid (2nd Premolar) 5 Maxillary Right First Bicuspid (1st Premolar) 4 Maxillary Right Cuspid (Canine) 3 Maxillary Right Lateral Incisor 2 Maxillary Right Central Incisor 1 Maxillary Left Central Incisor 1 Maxillary Left Lateral Incisor 2 Maxillary Left Cuspid (Canine) 3 - 50 - WO 99/59106 PCT/US99/10566 Permanent tooth name Number System 1-8 in each Quadrant Maxillary Left First Bicuspid (1st Premolar) 4 Maxillary Left Second Bicuspid (2nd Premolar) 5 Maxillary Left First Molar (6 Year) 6 Maxillary Left Second Molar (12 Year) 7 Maxillary Left Third Molar (Wisdom) 8 Mandibular Left Third Molar (Wisdom) 8 Mandibular Left Second Molar (12 Year) 7 Mandibular Left First Molar (6 Year) 6 Mandibular Left Second Bicuspid (2nd Premolar) 5 Mandibular Left First Bicuspid (1st Premolar) 4 Mandibular Left Cuspid (Canine) 3 Mandibular Left Lateral Incisor 2 Mandibular Left Central Incisor 1 Mandibular Right Central Incisor 1 Mandibular Right Lateral Incisor 2 Mandibular Right Cuspid (Canine) 3 Mandibular Right First Bicuspid (1st Premolar) 4 Mandibular Right Second 
Bicuspid (2nd Premolar) 5 Mandibular Right First Molar (6 Year) 6 Mandibular Right Second Molar (12 Year) 7 andibular Right Third Molar (Wisdom) 8 a - e in each quadrant Deciduous Teeth USA edcb abcde edcb abcde Deciduous Tooth Name Alpha System a-e Maxillary Right Deciduous Second Molar (2 Year) e Maxillary Right Deciduous First Molar d Maxillary Right Deciduous Cuspid (Canine) c Maxillary Right Deciduous Lateral Incisor b Maxillary Right Deciduous Central Incisor a Maxillary Left Deciduous Central Incisor a Maxillary Left Deciduous Lateral Incisor b Maxillary Left Deciduous Cuspid (Canine) c Maxillary Left Deciduous First Molar d Maxillary Left Deciduous Second Molar (2 Year) e Mandibular Left Deciduous Second Molar (2 Year) e Mandibular Left Deciduous First Molar d Mandibular Left Deciduous Cuspid (Canine) c Mandibular Left Deciduous Lateral Incisor b Mandibular Left Deciduous Central Incisor a -51 - WO 99/59106 PCT/US99/10566 Deciduous Tooth Name Alpha System a-e Mandibular Right Deciduous Central Incisor a Mandibular Right Deciduous Lateral Incisor b Mandibular Right Deciduous Cuspid (Canine) c Mandibular Right Deciduous First Molar d Mandibular Right Deciduous Second Molar (2 Year) e We use a short hand to diagram individual teeth or groups of teeth, example: = Upper Right Quadrant _ _= Upper Left Quadrant = Lower Right Quadrant = Lower Left Quadrant 4 4 4 4 this short hand is for ALL four 1st biscuspids 1 this shorthand is for the Maxillary Right Central Incisor 818 this is short hand is for both Maxillary 3 rd molars ee S this is short hand for Maxillary Right & Left Deciduous 2 nd molars, e 5 Mandibular Right Deciduous 2 nd molar and the Mandibular Left Permanent 2 nd bicuspid 6edc21 12cde6 This is short hand for ALL four Permanent 1' t Molars (6), 6edc2 11 2cde6 ALL four Deciduous 2d & l t Molars (e& d) ALL four Deciduous cuspids (canines) ( c) ALL four Permanent Lateral incisors (2) - 52 - WO 99/59106 PCT/US99/10566 ALL four Permanent Central Incisors 
(1) Other short hand used UR = Upper Right UL = Upper Left LR =Lower Right LL = Lower Left Followed by tooth # (Ex: UR1 = Upper Right Central Incisor UR8 UR7 UR6 UR5 UR4 UR3 UR2 UR1 ULI UL2 UL3 UL4 UL5 UL6 UL7 UL8 LR8 LR7 LR6 LR5 LR4 LR3 LR2 LRl LL1 LL2 LL3 LL4 LL5 LL6 LL7 LL8 Permanent tooth name Number System 1 8 in each Quadrant Maxillary Right Third molar (Wisdom) UR8 Maxillary Right Second molar (12 Year) UR7 Maxillary Right First Molar (6 Year) UR6 Maxillary Right Second Bicuspid (2nd Premolar) UR5 Maxillary Right First Bicuspid (1st Premolar) UR4 Maxillary Right Cuspid (Canine) UR3 Maxillary Right Lateral Incisor UR2 Maxillary Right Central Incisor UR1 Maxillary Left Central Incisor UL1 Maxillary Left Lateral Incisor UL2 Maxillary Left Cuspid (Canine) UL3 Maxillary Left First Bicuspid (1st Premolar) UL4 Maxillary Left Second Bicuspid (2nd Premolar) UL5 Maxillary Left First Molar (6 Year) UL6 Maxillary Left Second Molar (12 Year) UL7 Maxillary Left Third Molar (Wisdom) UL8 Mandibular Left Third Molar (Wisdom) LL8 Mandibular Left Second Molar (12 Year) LL7 Mandibular Left First Molar (6 Year) LL6 Mandibular Left Second Bicuspid (2nd Premolar) LL5 Mandibular Left First Bicuspid (1st Premolar) LL4 Mandibular Left Cuspid (Canine) LL3 Mandibular Left Lateral Incisor LL2 Mandibular Left Central Incisor LL1 Mandibular Right Central Incisor LR1 Mandibular Right Lateral Incisor LR2 Mandibular Right Cuspid (Canine) LR3 Mandibular Right First Bicuspid (1st Premolar) LR4 Mandibular Right Second Bicuspid (2nd Premolar) LR5 - 53 - WO 99/59106 PCT/US99/10566 Permanent tooth name Number System 1 8 in each Quadrant Mandibular Right First Molar (6 Year) LR6 Mandibular Right Second Molar (12 Year) LR7 Mandibular Right Third Molar (Wisdom) LR8 Could also be for Deciduous teeth using URa, LL b, etc. 
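The Universal (1-32) and quadrant-prefix (UR/UL/LR/LL plus 1-8) conventions above map to each other mechanically. As an illustrative sketch only (the function names are ours, not the patent's), the conversion can be written as:

```python
def universal_to_quadrant_code(n: int) -> str:
    """Convert a USA Universal tooth number (1-32, permanent teeth)
    to the quadrant-prefix shorthand (e.g. 1 -> "UR8", 25 -> "LR1").

    Universal counts 1-16 across the maxilla starting at the patient's
    right third molar, then 17-32 back across the mandible; the
    shorthand counts 1-8 from the midline within each quadrant.
    """
    if 1 <= n <= 8:        # upper right: 1..8 -> UR8..UR1
        return f"UR{9 - n}"
    if 9 <= n <= 16:       # upper left: 9..16 -> UL1..UL8
        return f"UL{n - 8}"
    if 17 <= n <= 24:      # lower left: 17..24 -> LL8..LL1
        return f"LL{25 - n}"
    if 25 <= n <= 32:      # lower right: 25..32 -> LR1..LR8
        return f"LR{n - 24}"
    raise ValueError("Universal permanent-tooth numbers run 1-32")


def quadrant_code_to_universal(code: str) -> int:
    """Inverse mapping, e.g. "UR8" -> 1, "LL8" -> 17."""
    quadrant, pos = code[:2].upper(), int(code[2:])
    if not 1 <= pos <= 8:
        raise ValueError("position within a quadrant runs 1-8")
    return {"UR": 9 - pos, "UL": 8 + pos, "LL": 25 - pos, "LR": 24 + pos}[quadrant]
```

For example, universal_to_quadrant_code(6) yields "UR3" (the Maxillary Right Cuspid), matching the tables above.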
Deciduous Tooth Name Alpha & numeric System UR format a-e Maxillary Right Deciduous Second Molar (2 Year) UR e Maxillary Right Deciduous First Molar UR d Maxillary Right Deciduous Cuspid (Canine) UR c Maxillary Right Deciduous Lateral Incisor UR b Maxillary Right Deciduous Central Incisor UR a Maxillary Left Deciduous Central Incisor UL a Maxillary Left Deciduous Lateral Incisor UL b Maxillary Left Deciduous Cuspid (Canine) UL c Maxillary Left Deciduous First Molar UL d Maxillary Left Deciduous Second Molar (2 Year) UL e Mandibular Left Deciduous Second Molar (2 Year) LL e Mandibular Left Deciduous First Molar LL d Mandibular Left Deciduous Cuspid (Canine) LL c Mandibular Left Deciduous Lateral Incisor LL b Mandibular Left Deciduous Central Incisor LL a Mandibular Right Deciduous Central Incisor LR a Mandibular Right Deciduous Lateral Incisor LR b Mandibular Right Deciduous Cuspid (Canine) LR c Mandibular Right Deciduous First Molar LR d Mandibular Right Deciduous Second Molar (2 Year) LR e International Teeth # System 11 - 18, 21 - 28, 31 - 38, 41 - 48 International Adult teeth Each Quadrant numbered by 1" t 1 = Maxillary Right Quadrant I Quadrant 2 2 =Maxillary Left 3 = Mandibular Left Quadrant Quadrant 3 - 54 - WO 99/59106 PCT/US99/10566 4 = Mandibular Right Then tooth # 1 - 8 18 17 16 15 14 13 12 11 21 22 23 24 25 26 27 28 48 47 46 45 44 43 42 41 31 32 33 34 35 36 37 38 International Deciduous Teeth Quadrants 5 - 8 Quadrant 5 Quadrant 6 Quadrant 8 Quadrant 7 Then tooth # 1 - 8 5857565554535251 61626364656667 68 8887868584838281 71727374757677 78 Permanent tooth name First # = Quadrant # second # = tooth # Maxillary Right Third molar (Wisdom) 18 one-eight Maxillary Right Second molar (12 Year) 17 one-seven Maxillary Right First Molar (6 Year) 16 one-six Maxillary Right Second Bicuspid (2nd Premolar) 15 etc. 
Maxillary Right First Bicuspid (1st Premolar) 14 Maxillary Right Cuspid (Canine) 13 Maxillary Right Lateral Incisor 12 Maxillary Right Central Incisor 11 Maxillary Left Central Incisor 21 two-one Maxillary Left Lateral Incisor 22 etc. Maxillary Left Cuspid (Canine) 23 Maxillary Left First Bicuspid (1st Premolar) 24 Maxillary Left Second Bicuspid (2nd Premolar) 25 Maxillary Left First Molar (6 Year) 26 Maxillary Left Second Molar (12 Year) 27 Maxillary Left Third Molar (Wisdom) 28_ - 55 - WO 99/59106 PCT/US99/10566 Permanent tooth name First # = Quadrant # second # = tooth # Mandibular Left Third Molar (Wisdom) 38 three-eight Mandibular Left Second Molar (12 Year) 37 etc. Mandibular Left First Molar (6 Year) 36 Mandibular Left Second Bicuspid (2nd Premolar) 35 Mandibular Left First Bicuspid (1st Premolar) 34 Mandibular Left Cuspid (Canine) 33 Mandibular Left Lateral Incisor 32 Mandibular Left Central Incisor 31 Mandibular Right Central Incisor 41 four-one Mandibular Right Lateral Incisor 42 etc. 
Mandibular Right Cuspid (Canine) 43 Mandibular Right First Bicuspid (1st Premolar) 44 Mandibular Right Second Bicuspid (2nd Premolar) 45 Mandibular Right First Molar (6 Year) 46 Mandibular Right Second Molar (12 Year) 47 Mandibular Right Third Molar (Wisdom) 48 Deciduous Tooth Name Quadrant # first then tooth letter Maxillary Right Deciduous Second Molar (2 Year) le Maxillary Right Deciduous First Molar id ight Deciduous Cuspid (Canine) 1 c Maxillary Right Deciduous Lateral Incisor lb Maxillary Right Deciduous Central Incisor la Maxillary Left Deciduous Central Incisor 2a Maxillary Left Deciduous Lateral Incisor 2b Maxillary Left Deciduous Cuspid (Canine) 2c Maxillary Left Deciduous First Molar 2d Maxillary Left Deciduous Second Molar (2 Year) 2e Mandibular Left Deciduous Second Molar (2 Year) 3e Mandibular Left Deciduous First Molar 3d Mandibular Left Deciduous Cuspid (Canine) 3c Mandibular Left Deciduous Lateral Incisor 3b Mandibular Left Deciduous Central Incisor 3a Mandibular Right Deciduous Central Incisor 4a Mandibular Right Deciduous Lateral Incisor 4b Mandibular Right Deciduous Cuspid (Canine) 4c Mandibular Right Deciduous First Molar 4d Mandibular Right Deciduous Second Molar (2 Year) 4e - 56 - WO 99/59106 PCT/US99/10566 Appendix D The following describes a tooth auto-alignment strategy used in the clinician/consultant 125. The clinician/consultant 125 can use the following procedure in determining an appropriate treatment for a patient. 1. Generate tooth alignment template (arch wire). Top View Lateral View Anterior Posterior Anterior Right Posterior Left Tooth Wire Alignment Fit tooth alignment wire to the dental arch Establish midline and first molar locations on the wire. -57- WO 99/59106 PCT/US99/10566 Set incisors Set teeth #s 11 and 21 on opposite sides of the midline mark for the maxilla and set teeth #s 31 and 41 on the S opposite sides of the mandibular Midline. Set Molars Set the mesial surfaces of first . molars on molar wire locations. 
Set Other Teeth
Align the remainder of the teeth. The mesial contact of tooth n contacts the distal contact of adjacent tooth n+1, except for the midline teeth #s 11, 21, 31 and 41: the mesial contact on tooth #21 contacts the mesial contact on tooth #11, and similarly the mesial contact on tooth #31 contacts the mesial contact on tooth #41.

Space Conflict
What to do if there is a space conflict, i.e., too much tooth mass between the mesial of the first molar and the distal of the central incisors to fit on the wire:
Option 1. Maintain mesial and distal contacts on the wire but allow the teeth to overlap each other. Produce feedback about the quantity of overlap (for each tooth and combined overlap).
Option 2. Do not allow teeth to overlap and do not constrain the contacts to the wire. Produce a best fit to the space while trying to conform to the wire.

Molar Interarch Relationships (Class I Molars)
* The mesiobuccal cusp of the upper first molar should occlude in the groove between the mesial and distal buccal cusps of the lower first molar.
* The mesial lingual cusp tip of the maxillary first molar fits into the central fossa of the lower molar.
* The distal buccal cusp tip of the mandibular first molar fits into the central fossa of the upper first molar.
* The crown of the upper first molar must be angulated so that its distal marginal ridge occludes with the mesial marginal ridge of the lower second molar.

Mesiodistal Crown Angulation
For the occlusion to be considered normal, the gingival part of the long axis of the crown must be distal to the occlusal part of the axis. The degree of angulation depends on the type of tooth (Mesiodistal Crown Angulation for various types of upper teeth).

Labiolingual Crown Inclination

Tooth Rotations
None of the teeth should be rotated. Rotated molars and premolars occupy more space in the dental arch than normal.
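The Option 1 space-conflict feedback above amounts to comparing tooth mass against the available wire length. A minimal sketch of that bookkeeping (our own illustration, with the per-segment accounting simplified):

```python
def crowding_report(tooth_widths_mm, wire_segment_mm):
    """Option 1 feedback sketch: keep contacts on the wire, let teeth
    overlap, and report the overlap.

    tooth_widths_mm -- mesiodistal widths of the teeth to be placed
                       between the mesial of the first molar and the
                       distal of the central incisor on one segment.
    wire_segment_mm -- arc length available on that wire segment.
    Returns (combined_overlap_mm, average_overlap_per_tooth_mm).
    """
    total_mass = sum(tooth_widths_mm)
    combined = max(0.0, total_mass - wire_segment_mm)
    per_tooth = combined / len(tooth_widths_mm) if tooth_widths_mm else 0.0
    return combined, per_tooth
```

With five teeth totalling 35.2 mm on a 33.0 mm segment, the report is roughly 2.2 mm combined crowding (about 0.44 mm per tooth); a zero result means Option 1 and Option 2 coincide.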
Rotated incisors may occupy less space than those correctly aligned. Rotated canines adversely affect esthetics and may lead to occlusal interferences.

Tooth Spacing
If there are no anomalies in the shape of the teeth, or inter-maxillary discrepancies in mesiodistal tooth size, the contact points should abut in normal occlusion.

Curve of Spee
* An excessive curve of Spee restricts the amount of space available for the upper teeth, which must then move toward the mesial and distal, thus preventing correct intercuspation.
* A normal occlusion has a flat occlusal plane (according to Andrews, the mandibular curve of Spee should not be deeper than 1.5 mm).
* A reverse curve of Spee creates excessive space in the upper jaw, which prevents development of a normal occlusion.

Appendix E

The following describes a Roth analysis that can be performed using the clinician/consultant 125 and can be particularly helpful in tooth alignment treatment.

Occlusal plane: defined by the distobuccal cusps of the lower first molars (36 and 46) and a midpoint between the mesioincisal edges of #s 31 and 41.

Crown long axis:
A line formed between the gingival height of contour and the cusp tip for the bicuspids and cuspids.
A line formed between the gingival height of contour and the incisal edge for the incisor teeth.
A line formed between the gingival height of contour and the origin of the groove between the buccal cusps on the molars.

Bracket placement:
Brackets are placed at the mid-crown point on this crown long axis. Mid-crown equals a point midway between the height at the cusp tips and the gingival height of contour. This is true for all teeth except the maxillary lateral incisors (12 and 22): #s 12 and 22 are set 0.5 mm short of the crown midpoint, but the final result will have the incisal edge of these teeth 1 mm short of the adjacent teeth.
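The mid-crown rule above is a simple midpoint computation. The sketch below is illustrative only: it assumes heights are measured along the crown long axis from the incisal/occlusal end, so "0.5 mm short" for the maxillary lateral incisors is modeled as 0.5 mm toward the incisal edge.

```python
def bracket_height_mm(cusp_tip_mm, gingival_contour_mm, fdi_number):
    """Roth-style bracket placement sketch along the crown long axis.

    cusp_tip_mm / gingival_contour_mm -- heights of the cusp tip (or
    incisal edge) and the gingival height of contour, measured from the
    incisal/occlusal end of the crown (assumed convention).
    Brackets sit at mid-crown, except the maxillary lateral incisors
    (FDI 12 and 22), which are set 0.5 mm short of the midpoint.
    """
    mid_crown = (cusp_tip_mm + gingival_contour_mm) / 2.0
    if fdi_number in (12, 22):
        mid_crown -= 0.5
    return mid_crown
```

For a 9 mm clinical crown, the bracket lands 4.5 mm from the incisal edge on most teeth and 4.0 mm on a maxillary lateral incisor.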
Overbite: 4 mm; canine to contact point. Overjet: 2-3 mm; canine 1 mm.

Appendix F

The following describes the specifications of various embodiments of the invention.

Description of Product Features
This module has been designed to assist with patient analysis, treatment planning and patient education for orthodontists, dentists who perform orthodontics, and oral surgeons who perform orthognathic surgery. This module will provide the full range of analysis, modeling and treatment features currently expected of existing 2D software packages. In addition, this module will allow the soft and hard tissues to be accurately captured and photorealistically displayed and analyzed in three dimensions. Unique algorithms will then be accessed to perform a number of functions with the captured data.

Normal Stock Object
The stock objects will be used for model matching and as a template for rapid conversion of the patient input data to a 3-D model. These stock objects will include the jaws, teeth, and soft tissues of the face. The stock objects can be constructed to represent NORMAL for the modeled anatomy. In addition, other stock objects could be designed to provide a closer starting point for common types of anatomical variation or pathology. These variations may include size, sex and facial form (Angle's Class I, II and III).

Stock objects will be constructed from a wire frame with a relatively small polygon count. The vertices of the wire frame can be strategically located to allow for the subsequent modifications that will allow for rapid customization to adapt to the patient's input data. In the jaws, the stock objects can have a minimum number of "tie down points" that correspond to "landmark locations". The minimum number of tie down points on a tooth may include those that allow for rapid modification in height, mesiodistal and buccolingual width, and angulation. The wire frame can be mutable.
The wire frame can possess a logical behavior among the neighboring wire frame intersects or vertices. That is, when the wire frame is mutated by the end user, all of the intersects that are changed and their neighbors can respond in a logical fashion. Landmark groupings can be segmented and moved to a new location to simulate treatment or growth. The movement of these segmented landmarks can occur through data input or manual "drag and drop." There can be a method for rapid or automatic registration of the stock object with the input data. The input data can include photographs, x-rays, or a previously rendered patient 3-D data set. The stock objects can have a spatial association with a data base. The data base will record the 3-D spatial attitude of the stock object and will record modifications of the stock object. The data base will track the landmark locations and any other changes that relate to the original polygon size and form.

Object-Oriented Data Base
This is a feature that the average user of the software may not fully appreciate. However, as the framework of software design, it has many advantages from MedScape's point of view. The MedScape product line deals with physical entities such as patients and anatomical structures of the face. It produces images from these objects, extracts measurements and produces models of them. It also produces such end-user data products as growth predictions and treatment plans. The underlying data structure that can define and relate all these entities in a unified fashion is an object-oriented database. The time spent initially on a framework for careful definition of the object classes and their relations in such a database will save a tremendous amount of effort and cost in the following developments.
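The object-oriented framing above can be sketched in miniature. The class names, fields, and the in-memory table store below are illustrative assumptions, not the actual MedScape schema; they only show the "each class is a table, each instance is a row" idea.

```python
# Hypothetical sketch of the object-oriented data base described above.
# Patient, Mandible, TreatmentPlan and ObjectDatabase are illustrative
# names, not the real MedScape classes.
from dataclasses import dataclass, field, asdict

@dataclass
class Patient:
    patient_id: int
    name: str

@dataclass
class Mandible:
    patient_id: int  # relates the anatomy back to its patient
    landmarks: dict = field(default_factory=dict)  # e.g. {"gonion": (x, y, z)}

@dataclass
class TreatmentPlan:
    patient_id: int
    description: str

class ObjectDatabase:
    """Each object class is one table; each instance is one row."""
    def __init__(self):
        self.tables = {}

    def insert(self, obj):
        self.tables.setdefault(type(obj).__name__, []).append(asdict(obj))

    def rows(self, cls_name):
        return self.tables.get(cls_name, [])

db = ObjectDatabase()
db.insert(Patient(1, "Example Patient"))
db.insert(Mandible(1, {"gonion": (72.1, -38.4, 12.0)}))
db.insert(TreatmentPlan(1, "Class II correction"))
```

In a real system the tables would live in a persistent object-oriented database rather than a dictionary, but the class-per-table layout is the same.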
In addition to producing reusable computer programs, this approach will facilitate definition and integration of the work by multiple teams, namely the MedScape R&D contractors as currently envisioned. Typical examples of objects are a patient, a digital image, a specific mandible, the 3-D model of a specific mandible, a "normal" mandible, a treatment plan, etc. The specific instances of these objects are stored in the database as rows of various tables, where each table represents the object class. Each class is identified by its properties and methods (procedures that are applied to them). Each software development team will concentrate on specific object classes assigned to it, with the goal of producing class libraries that expose the properties and methods of each object to all development teams for final integration.

3-D ACCURACY
Although accuracy numbers for so-called "nominal" conditions can be provided, the accuracy of position and orientation measurements made from one or more images of an object can vary significantly depending on a number of parameters. Some of these parameters, such as image sensor resolution and noise, are inherent to the system; they can be considered fixed and are determined off-line. However, an even larger number of these parameters depend on the geometry and size of the very object being measured, and on the geometric setup of the imaging sensors. These "variable" parameters of course mean that the accuracy cannot be quoted as a unique specification for a measurement system. However, theoretical error bounds can be internally computed from calibration data, camera resolution, and other system parameters for a specific measurement scenario. The proposed software will include the necessary models and algorithms to compute these theoretical error bounds and provide them as part of the measurement results.
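One way such an error-ellipsoid output might be computed is sketched below. The diagonal-covariance simplification (independent x/y/z errors) and the 95% confidence scale are assumptions for illustration; a full implementation would eigendecompose a general covariance matrix propagated from the calibration data.

```python
# Hedged sketch: semi-axis lengths of a 95% confidence ellipsoid for a
# measured 3-D landmark, assuming a DIAGONAL error covariance (independent
# per-axis errors). The sigma values are illustrative, not measured.
import math

CHI2_95_3DOF = 7.815  # 95% quantile of chi-square with 3 degrees of freedom

def error_ellipsoid(sigmas_mm):
    """Return the semi-axis lengths (mm) of the 95% uncertainty ellipsoid
    given per-axis standard deviations derived from calibration data."""
    k = math.sqrt(CHI2_95_3DOF)
    return tuple(k * s for s in sigmas_mm)

# Example: sub-millimeter per-axis standard deviations; the depth (Z) axis
# is typically the least certain in multi-view triangulation.
axes = error_ellipsoid((0.10, 0.15, 0.30))
```

The returned triple gives the user the "yardstick" described in the text: all three semi-axes stay below 1 mm for these inputs, consistent with the sub-millimeter accuracy claim.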
For example, in the case of landmark position measurements, for each measured landmark the software outputs the ellipsoid that represents the error uncertainty in three dimensions. In this way the user is given a yardstick by which the accuracy of each measurement result can be judged. In the case of cephalometric landmark measurements, submillimeter accuracy was predicted and to some extent experimentally validated by exercising these analytic error models. A specific R&D task is to validate the error models using more complete experimentation.

MODEL MATCHING
The starting point for modeling an object from multiple images is to retrieve a "stock" or normal version of that object model from the database and use it as a guide for designating the landmarks and traces in multiple views. The discrete features measured from the actual images are then used to modify the stock model to obtain the desired object model. This process requires matching of the stock object model to the actual measurements. The model matching is based on a discrete and limited set of features which will uniquely define a stock object. The software will be able to match the stock model by moving these limited features without distorting the model into a different object.

IMAGE FIDELITY
The MedScape software typically operates on high-quality images. To avoid dealing with large file sizes, the software uses image compression at the highest compression ratio for which the required image quality is achievable. The image quality is preserved both in terms of being visually acceptable and in terms of completely preserving the features that are used for 3-D measurements and modeling.
For 3-D rendering using photographic texture mapping, the resolution requirement of the rendered result determines the resolution of images at acquisition time.

SPATIAL CALIBRATION
The spatial calibration of an image is addressed as part of the overall geometric calibration of the system. This calibration provides a precise and quantitative model of the overall imaging chain, including the position and orientation of the imaging sensors relative to the measurement coordinates, the internal geometry of the sensors, and the sampling parameters of the sensor plane that form the final array of pixels. Nonlinear transformations due to perspective and, when applicable, due to optics are modeled and compensated for by the calibration process.

IMPROVED ASSOCIATION
Through photogrammetry computations combined with local image analysis algorithms, the software automatically establishes correspondence of two or more points used for triangulation. This minimizes the user effort of designating the corresponding points in different views based on visual cues. This automation is achieved by taking into account the geometric constraints imposed by both the imaging system and the object being modeled.

3-D Display
3D display refers to the mode of 3D visualization on a computer screen. The reason MedScape was formed is to give doctors a convenient, fast and user-friendly way to gain accurate 3D information for diagnosis and treatment planning. Today's "state of the art" in orthodontics, orthognathic surgery and plastic and reconstructive surgery diagnosis is two-dimensional. True three-dimensional visualization and manipulation of the 3D data set is essential for accurate diagnosis and treatment planning. The 3D display allows for the visualization of the 3D data base (created from photos, models, X-rays, etc.). This 3D visualization allows for:
1) Perspective 3D viewing with shading, shadowing and monocular depth cues.
2) Straight-on 3D stereoscopic viewing.
3) The ability to view the 3D data set in a 45 degree 3D stereoscopic viewing mode (allows for 50% more visual information).

The 3D display of the 3D data set can include the following information:
1) Display of a dimensionally accurate representation of the patient's anatomy (face, facial contours, teeth, gingival tissue, bony anatomy via ceph X-ray or CT, MRI (wire frame and rendered), etc.).
2) A consistent Cartesian coordinate system. A right-handed Cartesian coordinate system is defined, looking at the front of an object (e.g., the front of the face), as: X axis - horizontal, with positive X to the right; Y axis - vertical, with positive Y up; Z axis - in/out, with positive Z toward the viewer. All data sources are set to this right-handed Cartesian coordinate system.
3) An X, Y, Z analog in the computer display to aid the viewer in orientation of the 3D data set, for both 3D perspective and 3D stereoscopic viewing.
4) Rotation of the 3D data set in all 3 planes of space and the ability to control the roll, pitch and yaw movements around these axes (6 degrees of freedom); the ability to see the 3D data set from any angle or view.
5) The ability to lock the mouse so the object can be rotated in only one axis at a time and in real time.
6) The ability to easily go back to the original orientation of the data set (example - frontal view).
7) The user can define and control the rotation of the data set precisely (1 degree rotation or smaller).
8) The user should be able to define a rotational pattern around one, two, or three axes, together or independently.
9) The user should be able to move the 3D model in real time: 1. up/down, 2. right/left, and/or 3. in/out (stereoscopic vs. larger/smaller or Z buffer).
10) See any virtual view at any angle. Predefined views: 1) frontal, 2) right lateral, 3) left lateral, 4) SMV, 5) 45 degrees right, 6) 45 degrees left, 7) smile, 8) lips in repose vs. lips closed.
(ABO requirements)
11) Animation in perspective 3D and in stereoscopic 3D. (Example: open/closed animation to evaluate deviation on opening, asymmetry, etc.; animate mandibular movements associated with jaw tracking.)
12) The 3D display should allow for user-controlled transparency of facial soft tissue to show the underlying teeth and skeletal structure relationship. Transparency should be controlled by a slide bar from 0% - 100% and have predefined 20%, 40%, 60%, 80% settings for quick acquisition.
13) Lighting of the 3D data set should be predefined to give the best brightness, contrast, etc. Real-time lighting changes should be possible to gain a better 3D view. This is especially important with 3D stereoscopic viewing, since high-contrast areas give poor results for the stereoscopic effect. In stereoscopic mode the lighting should allow the stereo pairs to be lighted the same; differences in lighting of the two separate views create ghosting and should be eliminated.
14) A reference plane should be available to show the drawing plane, etc.
15) The use of zoom, magnify, scaling, set axis, split objects, move segments, trim, and grasp objects should be available and user-controlled.
16) The 3D software program should show the wireframe, quick render and full render of the 3D data set. Also, a render window should be available to render only an area of interest.
17) The 3D display should use the photographs from which the wireframes are generated to create the photorealistic textures.
18) The camera settings should be predefined. Other settings can be included, such as: scene camera, director camera, pan, tilt, roll, dolly, etc.
19) The 3D display should allow for import/export of model files (.MDL, .DXF, .IGS, .3DS, other).
20) Import/export of picture formats (.BMP, .TGA, .GIF, .TIF, .PCX, other).
21) The 3D display should allow for facial topography visualization and measurement.
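The roll/pitch/yaw control called for in items 4-9 can be sketched with elementary rotations in the right-handed system of item 2 (X right, Y up, Z toward the viewer). The function name and the roll-then-pitch-then-yaw composition order are illustrative assumptions.

```python
# Hedged sketch of 6-degree-of-freedom rotation of a 3-D point in the
# right-handed coordinate system defined above (X right, Y up, Z toward
# the viewer). Angles in degrees; the composition order is illustrative.
import math

def rotate(point, roll=0.0, pitch=0.0, yaw=0.0):
    """Rotate a point about Z (roll), then X (pitch), then Y (yaw)."""
    x, y, z = point
    a = math.radians(roll)   # roll: about the Z axis
    x, y = x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a)
    a = math.radians(pitch)  # pitch: about the X axis
    y, z = y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a)
    a = math.radians(yaw)    # yaw: about the Y axis
    x, z = x * math.cos(a) + z * math.sin(a), -x * math.sin(a) + z * math.cos(a)
    return (x, y, z)

# A point facing the viewer (+Z) swings to the viewer's right (+X)
# under a 90 degree yaw.
front_to_right = rotate((0.0, 0.0, 1.0), yaw=90.0)
```

The 1-degree (or smaller) increments of item 7 correspond to calling such a function repeatedly with small angle arguments, which is why real-time single-axis locking (item 5) is cheap to provide.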
Facial topography contours have certain patterns that differ among people considered "beautiful" vs. "ugly" vs. "normal": subtle differences in the nasal area, zygomatic area (cheek bone), lip contour, submental fold and chin area. Facial topography will be more evident in stereoscopic 3-D visualization. Features that are used to describe beauty:
1. Cheek: high vs. flat
2. Chin: asymmetry, prominence, deficiency, cleft
3. Lips: full, thin, protrusive, retrusive, commissure, vermilion border, ethnic considerations
4. Nose: size, width, flaring, alar base, nares, dorsal hump, nasal tip, symmetry, nasal cartilage deviation
5. Smile: gummy, deficient, long face, short face, symmetry
6. Facial: proportional thirds, symmetry
7. Eyes: symmetry, high, low, prominent vs. recessed
8. Glabella: prominent vs. deficient
9. Ears: symmetry, size, vertical position, morphology

3D Stereoscopic Display
The human visual system is composed of two eyes with a horizontal interocular distance of approximately 65 mm. This horizontal offset of the eyes allows each eye to "see" a slightly different view of the "real world". This retinal disparity creates binocular vision, in which the brain fuses these two slightly different views into a single view with visual depth perception. Binocular vision allows for a visual and intuitive understanding of depth. Convergence of the eyes is controlled by muscular effort that sends proprioceptive input to the neurological system to help determine the depth and position of objects. Accommodation (or focusing) of the lens of the eyes also provides a proprioceptive mechanism to determine depth information. Human binocular vision is also closely associated with proprioceptive feedback from the tactile sense (haptic = touch; prehensile = reach and grasp). Retinal disparity is mimicked by creating parallax on the screen. Parallax is the horizontal offset of two separate pictures of the same scene.
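The parallax just defined follows directly from the viewing geometry. The sketch below uses the ~65 mm interocular distance stated above and the common sign convention that positive parallax places a point behind the screen plane; the viewing distances are illustrative.

```python
# Sketch of screen parallax for stereo-pair generation, using the standard
# similar-triangles geometry for a viewer at screen_dist_mm from the
# display. The 65 mm interocular distance comes from the text; the example
# distances are illustrative.
def screen_parallax(point_dist_mm, screen_dist_mm, interocular_mm=65.0):
    """Horizontal parallax (mm) on the screen plane for a point at
    point_dist_mm from the viewer. Positive = behind the screen,
    zero = at the screen plane, negative = in front of the screen."""
    return interocular_mm * (point_dist_mm - screen_dist_mm) / point_dist_mm

at_screen = screen_parallax(600.0, 600.0)    # 0.0 mm: zero parallax
behind = screen_parallax(1200.0, 600.0)      # +32.5 mm: positive parallax
in_front = screen_parallax(400.0, 600.0)     # -32.5 mm: negative parallax
```

Controlling this single quantity per point is what lets the software place the rendered anatomy behind, at, or in front of the screen plane.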
Parallax can be controlled to create the image behind the screen (positive parallax), at the screen plane (zero parallax) or in front of the screen (negative parallax). Negative parallax would be the best choice for 3D stereoscopic viewing of cephalometric images. Stereoscopic 3D imaging allows all 3 planes of space to be viewed simultaneously. This is a clear and important difference between 3D stereoscopic viewing and 3D perspective viewing. When stereoscopic 3D visualization is added to motion parallax (such as rotation of the object), there is an enhancement of visual depth. The 45 degree angular stereoscopic viewing allows the operator to view 50% more of the image, which is an even greater reason to use stereoscopic viewing. Again, motion parallax (rotation) adds even more visual reference and allows for improved visualization of the Z axis information. Any 3D visual information can be created in a 3D stereoscopic mode to further enhance the visual ability to understand 3D relationships of anatomy; when motion parallax is also added, even greater visual depth information is present.

Advantages of 3D Stereo vs. Perspective 3D vs. 2D:
Once the 3D model is created (patient face, teeth, skeletal structure), the software program can create the appropriate "stereo pairs" for 3D stereoscopic viewing. Lighting (brightness, contrast, shadows, etc.) can be controlled. The software can create the appropriate parallax on the screen to produce the stereoscopic image when viewed with the appropriate viewing lenses (anaglyph, polarized, field sequential). In order to view stereoscopically on a computer monitor, one must present the two separate images to the corresponding retina of each eye. Anaglyph uses red and blue lenses so that each eye sees only the image it is supposed to see. There is some limitation on using colored images with the anaglyph mode.
Other mechanisms, such as polarized or field sequential viewing, are available. Precise control of the vertical and horizontal parallax is critical. Stereo viewing of angular and linear measurements, planes, angles, points, and volumes is important. Full color can be done with anaglyph (Synthonics), but problems do arise with red, blue and green colors that are part of the image. True full color is best seen with polarized or field sequential (LCD shutter) viewing. Field sequential can be 60 or 120 Hz; image flicker can only be eliminated at 120 Hz. Another advantage of field sequential viewing is that tracking devices can be incorporated to allow the viewer to visualize the 3D scene from multiple viewing angles. The multiple viewing angle is an advantage over the fixed viewing angle required by anaglyph or polarized viewing techniques.

Models as a Data Source
The molds of the teeth give a physical model upon which arch length and treatment decisions are made. These models can be mounted on an articulator or not mounted. The articulator-mounted models give a more true 3D relationship of the teeth to skeletal reference planes. These physical models can be digitized or photographed in order to enter the 3D data information to create the model analysis in the computer, and also serve as a model to be integrated with the face and skeleton from photo and X-ray data sources. The voxel view scanner with confetti projector can be used to photograph the physical models of the teeth and to create the 3D data points for the creation of a wireframe and a photorendered model of the teeth for computer manipulation and integration with the skeletal anatomy and facial topography. A very accurate 3D data set could be made by sectioning, via microtome, a copy of the physical molds of the teeth (in microns) and taking sequential pictures of the "cross sections".
These pictures of the cross sections can then be combined into a very accurate 3D data set for use as a 3D model in computer memory or for storage on CD-ROM.

Data Fusion
Data fusion allows data acquired through independent input devices to be combined (fused) into a single 3-D data base. This requires that all input devices be calibrated to allow for the creation of "dimensionally true" data. In addition, all data sets can be cross-calibrated to use a common 3-D coordinate system. The device calibration and the cross-calibration of data sets should be automatic to facilitate the speed of data fusion. The data fusion will occur through the image management system (IMS). Independent data sets will have a mathematical, geometrical or spatial relationship to each other that can be calibrated or related through the process of data fusion. These relationships will be monitored through the use of the IMS data base.

FDA Approval
There has been a push in recent years by the ADA and FDA to have all medical devices, including software, FDA approved. This approval is not required at this time but is desirable. In essence, the software can be approved to validate that it can do what MedScape claims it can do. The software can provide safety features to assure that the patient's anatomy is not misrepresented. There should be a record of all digital alterations of the original input data.

Input Data Sources for Cephalometric Module

Cephalometric
The standard cephalometric technique requires a 60 inch source to mid-cephalostat distance and a 15 cm mid-cephalostat to film plane distance. The beam is projected in the horizontal plane and perpendicular to the film plane. The cephalostat orients and stabilizes the head through the use of ear rods. The standard cephs are acquired from the lateral and frontal directions. Standard and non-standard cephalometric images can be utilized by the MedScape software.
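Under the standard geometry just described (60 inch source to mid-cephalostat, 15 cm mid-cephalostat to film plane), the projective magnification of midsagittal structures can be computed directly, assuming a point source and central projection:

```python
# Projective magnification implied by the standard cephalometric geometry
# above, assuming a point X-ray source and central projection onto the film.
def ceph_magnification(source_to_object_cm, object_to_film_cm):
    """Ratio of film-plane size to true size for a structure at the
    object (midsagittal) plane."""
    return (source_to_object_cm + object_to_film_cm) / source_to_object_cm

M = ceph_magnification(60 * 2.54, 15.0)  # 60 inches = 152.4 cm
# Undo the enlargement: a structure measuring 100 mm on the film is
# roughly 91 mm at the midsagittal plane (about 9.8% enlargement).
true_size_mm = 100.0 / M
```

This is the "standard magnification" referred to later in the 2D analysis discussion; non-standard cephs that alter these distances change M, which is one reason calibration markers are needed.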
Non-standard cephs can alter any of the above parameters. Calibration markers can be used. These calibration markers will calibrate the imaging device, be used for cross-calibration of the data sets, and may be used to calibrate the gray scale. We should allow the orthodontists to have output that conforms to the standard projections.

Facial Photographs
Photographic standards have been formed by the American Board of Orthodontics (ABO). These standards specify the input device and the output (size, orientation and included anatomy). For the MedScape product this ABO output can be a minimum requirement so that orthodontists satisfy the ABO requirements. A 35 mm format has been the industry input device standard for years, but the ABO has recently allowed the use of digital input devices. The output is A size and includes a full frontal face view, a lateral face view and a smile view. MedScape may require other projection angles in order to build the 3-D model but can provide the standard photographs for orthodontists. These images can be calibrated for color and dimensions, and cross-calibrated with the other input devices.

Intra-Oral Photographs
The ABO has extended standards for intra-oral photographs. The standard projection angles for intra-oral photographs include full frontal with teeth in occlusion, right and left lateral projections of the teeth, and maxillary and mandibular occlusal projections. The output images are at full size (1:1 ratio). MedScape may require other projection angles to build the 3-D model but can provide the standard photographs for orthodontists. These images can be calibrated for color and dimensions, and cross-calibrated with the other input devices.
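The dimensional calibration mentioned for these photographs can be sketched minimally: a marker of known physical size in the frame fixes the scale, after which pixel distances convert to millimeters. The marker size and pixel counts below are illustrative assumptions.

```python
# Minimal sketch of dimensional calibration of a photograph from a
# calibration marker of known physical size. All numbers are illustrative.
def mm_per_pixel(marker_len_mm, marker_len_px):
    """Image scale from a marker of known length spanning a known
    number of pixels."""
    return marker_len_mm / marker_len_px

def to_mm(pixel_distance, scale):
    """Convert a distance measured in pixels to millimeters."""
    return pixel_distance * scale

scale = mm_per_pixel(10.0, 250.0)  # a 10 mm marker spans 250 pixels
width = to_mm(1100.0, scale)       # a feature measured as 1100 pixels
```

Cross-calibration with other input devices then amounts to expressing all such scaled measurements in the common 3-D coordinate system established by the IMS.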
CR Recordings
Centric relation (CR) refers to the 3-D spatial relationship of the mandible relative to the maxilla when the condyles are "seated" in their fossae and the teeth are at an "initial contact" point. CR is used by many clinicians as a treatment planning tool, treatment reference and treatment starting point. All of our visual input records (photographs and radiographic images) are created when the mandible is in centric occlusion (CO). CO refers to the 3-D spatial relationship of the mandible relative to the maxilla when the teeth are closed in their habitual occlusion. The habitual occlusion allows the teeth to achieve maximum intercuspation (shortest vertical dimension of occlusion). CR may not equal CO, and therefore we can record both CO and CR jaw positions and mathematically convert the input data sets from CO to CR by calculating the 3D displacement of the mandible.

Conversion from CO to CR: Acquire digital images obtained at one or more visual viewing angles of the face with embedded tracking markers that define the 3-D spatial attitude of the maxilla and another set of markers that define the spatial attitude of the mandible. Take one set of images with the teeth in CO and another set in CR. The MedScape software tracks the maxillary and mandibular markers and will compute the relative changes in their spatial location (mandibular displacement). The location of these tracking markers can be spatially cross-calibrated with the cephalometric 3-D input data. The mandible can be treated as a rigid body. The cephalometric landmark data that describes the mandible can be segmented from the remaining data; the segmentation allows the mandible to be moved within the data set. The CO-to-CR tracking or mandibular displacement data will be applied to the cross-calibrated cephalometric data to move the mandible to a CR position for analysis and treatment planning.
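Treating the mandible as a rigid body, the CO-to-CR displacement amounts to a rigid-body fit between the two marker sets. A closed-form 2-D fit per image plane is sketched below (a full 3-D solution would fuse several calibrated views); the marker coordinates are illustrative, not clinical data.

```python
# Hedged sketch: least-squares 2-D rigid fit (rotation + translation)
# mapping mandibular marker positions at CO onto their positions at CR.
# A complete system would combine several calibrated views into 3-D.
import math

def rigid_fit_2d(src, dst):
    """Return (theta, (tx, ty)) such that dst ~= R(theta) * src + t."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy       # center the CO markers
        bx, by = dx - cdx, dy - cdy       # center the CR markers
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)    # closed-form rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Illustrative markers at CO, and the same markers at CR after a 5 degree
# rotation plus a (2, -3) mm translation of the mandible.
co = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0)]
ang = math.radians(5.0)
cr = [(math.cos(ang) * x - math.sin(ang) * y + 2.0,
       math.sin(ang) * x + math.cos(ang) * y - 3.0) for x, y in co]
theta, (tx, ty) = rigid_fit_2d(co, cr)
```

The recovered (theta, tx, ty) is exactly the mandibular displacement that the text applies to the cross-calibrated cephalometric data to move the mandible from CO to CR.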
Feature Description for Cephalometric Module

Landmark ID
Landmarks can correspond with standard orthodontic landmarks, i.e., gonion, nasion, sella, pogonion, etc. These landmarks can be located in their normal positions on the morphologically normal skeletal stock object and can be visually evident. These landmarks can be spatially associated with each other through the IMS data base functions. The spatially calibrated cephalometric views (H & M Associates software) can be overlaid on the stock object. The stock object will be customized to fit the cephalometric data either by drag and drop of a stock object landmark to the corresponding ceph landmark (edited in all available projection views) or by digitizing the landmarks on the ceph and then making the association and customization from the stock object through the IMS data base. A combination of the two customization methods can be used. In addition, non-landmark cephalometric data may also be associated with the stock objects. The non-landmark data of most interest are the ridge crests (i.e., orbital rims, rim of the nasal fossa, external borders of the mandible, mandibular canal, sinus margins, etc.). The stock objects provide a visual reference of the patient's anatomy, and the landmarks are the data base portion of the patient's file that describe the features unique to that individual. Therefore, only the landmark locations need to be stored in the IMS data base. The landmark locations can serve as a set of instructions for altering the stock objects. Similarly, transmitting patient landmark locations and customizing the stock object at the receiver is a more efficient method than transmitting a customized stock object. The IMS data base can be used to compile landmark location data to establish normative data for 3D cephalometric analysis and for upgrading the stock model.

2D Analysis and 2D
Normative Data
A 2D orthodontic cephalometric analysis is based on comparison of the patient's data with 2D normative data bases that have existed for decades. 2D normative data bases include the Burlington Growth Study, Bolton/Broadbent, Rocky Mountain Data Systems, and the Michigan Growth Study, to name a few. 2D analyses include: Steiner analysis, Downs analysis, Ricketts, Tweed, Alabama, RMDS, Wits, Owens, etc. 2D template analyses are normative 2D visualizations that are overlaid on the patient's tracing for immediate visual information on deviations from the norms. Standard deviations from the norm are included in most computerized 2D analyses for comparison; one, two and three standard deviations from the "norm" are shown both numerically and graphically. 2D anatomical landmarks are well defined in the literature and include: sella, nasion, point A, point B, pogonion, menton, gnathion, porion, orbitale, articulare, condylion, gonion, etc.

LATERAL CEPHALOMETRIC ANALYSIS POINTS
Points selected by inspection:
A - A Point: The deepest point on the curve of the maxilla between the anterior nasal spine and the dental alveolus.
Em - Embrasure: A point where the upper and lower lips meet.
A1 - Incisor: Incisal tip of the upper incisor.
AR - Incisor: Root tip of the upper incisor.
O - Orbitale: A point located at the lowest point on the external border of the orbital cavity, tangent to the Frankfort plane.
A3 - Cuspid: Tip of the upper canine.
ANS - Maxilla: Tip of the anterior nasal spine.
PNS - Maxilla: Tip of the posterior nasal spine.
B1 - Incisor: Incisal tip of the lower incisor.
BR - Incisor: Root tip of the lower incisor.
Po - Porion: A point located at the most superior point of the external auditory meatus, tangent to the Frankfort plane.
B3 - Cuspid: Tip of the lower canine.
S - Sella: The center of the sella turcica, selected by inspection.
Ba - Basion: Most inferior posterior point on the occipital bone.
Pt - Pterygoid Point: Intersection of the inferior border of the foramen rotundum with the posterior wall of the pterygo-maxillary fossa as viewed in the lateral head film (pterygoid plate).
LL - Lip: Most anterior point on the lower lip (point closest to the esthetic plane).
Me - Menton: A point located at the lowest point on the midline curve of the symphysis.
Pm - Supra Pogonion: Point selected at the anterior border of the symphysis, between point B and pogonion, where the curvature changes from concave to convex.
N - Nasion: A point at the anterior limit of the nasofrontal suture.

FRONTAL TRACING (CEPHALOMETRIC POINTS)

FRONTAL CEPHALOMETRIC ANALYSIS POINTS
Me - Menton/Mandible: Point on the inferior border of the symphysis directly inferior to the mental protuberance and below the center of the trigonum mentale.
A3 (left) & 3A (right) - Cuspid: Tips of the upper permanent canines.
NC, CN - Nasal: Points on the outline of the nasal cavity at the widest area in frontal perspective. NC - left, CN - right.
A6, 6A - Molar: Bilateral points on the frontal occlusal plane perpendicular to the buccal surfaces of the crowns of the upper first permanent molars. A6 - left, 6A - right.
ZL, ZR - Zygomatic: Bilateral points on the medial margin of the zygomatico-frontal suture, at the intersections with the orbits. ZL - left, ZR - right.
AG, GA - Mandible: Points at the lateral inferior margin of the antegonial protuberances. AG - left, GA - right.
ZA, AZ - Zygomatic: Center of the root of the zygomatic arch, mid-points. ZA - left, AZ - right.
B3 (left) & 3B (right) - Cuspid: Tips of the lower permanent canines.
ANS - Anterior Nasal Spine: Tip of the anterior nasal spine just below the nasal cavity and above the hard palate.
B6, 6B - Molar: Bilateral points on the occlusal plane perpendicular to the buccal surfaces of the crowns of the lower first permanent molars. B6 - left, 6B - right.
1A - Point 1A: Selected at the interdental papilla of the upper incisors at the junction of crowns and gingivae.
JL, JR - Maxilla: Bilateral points on the jugal process at the intersection of the outline of the tuberosity and the zygomatic buttress. JL - left, JR - right.
1B - Point 1B: Selected at the interdental papilla of the lower incisors at the junction of crowns and gingivae.
In all groups of two, the first entry indicates left, the second indicates right.

MedScape's software can provide all 2D landmark identification, 2D analyses and 2D growth predictions, because orthodontists want this information for comparison to "traditional" cephalometrics (see section on Landmark ID). In traditional 2D analysis, all bilateral landmarks are averaged to the midline and are created at "standard" magnification due to the projective displacement of the object image (the patient's head) to the X-ray film. These "errors" can be reproduced when the 3D data is converted to the "traditional" 2D data set for comparison to "traditional" 2D normative data. The 2D normative data can be adjusted for sex, age, race, size, etc. and created into a graphical representation (template) of normative data for visual comparison.

3D Analysis and 3D Normative Data
MedScape was founded on the premise of creating, developing and offering 3D and 3D stereoscopic software products to the medical and dental professions. MedScape products will give the doctor the ability to diagnose and treatment plan their patients with three-dimensionally accurate models of anatomy (face, teeth and bones). Three-dimensionally accurate analysis and normative data depend critically on the accuracy of anatomical landmark location and identification. ** (See Landmark ID) A 3D visual comparison can be made to 3D normative data adjusted for size, sex, race, and age.

3D Normative Data - This data will have to be developed through university research, as this information is limited at this time. Grayson's article in the AJO describes some 3-D growth patterns.
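The standard-deviation comparison used by computerized analyses reduces to a z-score against the normative data base. The ANB norm of 2.0 degrees with a 2.0 degree standard deviation below is a commonly cited value used here as an illustrative assumption; a real system would look up the norm by age, sex, race and size.

```python
# Sketch of expressing a patient measurement as standard deviations from
# a 2D normative value. The ANB norm (2.0 deg +/- 2.0 deg) is a commonly
# cited figure, used here only as an illustrative assumption.
def deviation_in_sd(measured, norm_mean, norm_sd):
    """z-score: how many standard deviations the patient deviates."""
    return (measured - norm_mean) / norm_sd

z = deviation_in_sd(6.0, 2.0, 2.0)  # patient ANB angle of 6 degrees
flag = "beyond 1 SD" if abs(z) > 1 else "within 1 SD"
```

Displaying one, two and three standard-deviation bands numerically and graphically, as the text describes, is then just a matter of thresholding this z-score.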
Also, Rick Jacobson gives some 3-D data in his book "Radiographic Cephalometrics". At this time, 3D analyses will have to be "projected" to a 2D format for comparison to normative 2D data, since this is what exists at present. There is some work being done in Australia and Canada on 3D MRI and ceph data.

3D Analysis of Patient Data - The traditional 2D landmarks, angles, planes, etc. can be viewed on the 3D model for comparison. The 3D model will add the advantage of being able to view asymmetries of the right and left sides of the face, teeth, and skeletal structure. This is a critical area that is not assessed in "traditional" 2D analyses. ** (See Highlight Abnormal) Centric relation and centric occlusion will also be viewed in 3D. ** (See CR) ** see segment landmarks ** see convert to CR ** see meas. of soft tissue ** see fuse w/ ceph ** see output from photos ** see landmark tracking over time ** see compute angles ** see compute distances

The lingual concavity of the upper and lower incisors is related to the disclusion angle and the angle of the eminence; these should be congruent with each other. These functional components of TMJ function and dysfunction are important concepts that are critical for proper diagnosis. 3D analysis includes modeling of the critical anatomical areas and allows the generic wireframes to adjust to overlay the patient's anatomy. A visual representation of "normal" can be overlaid on the patient's "abnormal" for direct comparison.

Custom Analysis
The doctor will want to customize their analyses to include parts of various 2D and 3D analyses. The doctor can define which components of each to include. MedScape will allow the end user to define, name, save and employ a custom analysis. This analysis can be implemented as a macro function.

Growth Forecasting
Growth forecasting has always been a goal of cephalometric
It became popular with Ricketts' introduction of RMDS growth forecasts. 2D growth forecasting has had limited value. Short-term forecasting has been acceptable at times, but long-term forecasting has been inaccurate. Rocky Mountain Data Systems (RMDS), in association with Dr. Bob Ricketts, has the most extensive database on 2D growth forecasting. Lyle Johnston has developed a template that estimates "normal" growth "averages" in children. The Burlington Growth Study is also available, along with the Broadbent/Bolton study, Michigan study, & others. All of these are 2D. 3D growth forecasting is yet to be developed and will be a critical area of study and development. ** see highlight abnormal ** see seg. landmarks ** see landmark tracking over time

Visual Treatment Objective & Surgical Treatment Objective

** see segment landmark groups

A visual representation of a treatment plan that a Dr. decides on from study models of the teeth, X-rays (ceph, frontal, pan, etc.), and photographs of the face & teeth. The integration of the 2D ceph with 2D video imaging is now "state of the art". Some attempts have been made to show soft tissue change in relation to changes made to the bones & tooth movements, but these are only in 2D (video-ceph integration). A more important treatment planning tool would be to evaluate the soft tissue changes the Dr. & patient desire (in 3D) and see what changes NEED to occur in the teeth and skeletal structure to accomplish this soft tissue change. ** see also VTO (implants)

STO - a Surgical Treatment Objective. This would be a surgical simulation of the movements of the bones and soft tissue to accomplish what orthodontics alone cannot achieve. ** see also Surgical planning (implants)

Orthodontic Cooperative Evaluation & Time Line Tracking of Progress

A sequential set of photographs of the face, teeth, and gingival tissues with tracking markers could allow the Dr. to track COOPERATION in areas of: 1.
Mechanotherapy progression
2. Oral hygiene
3. Abnormal growth
4. Abnormal reaction to forces
5. Other

Time line tracking would allow the evaluation of progress over time. Patients ALWAYS ask, "When am I getting my braces off?" Accurate 3D evaluation of cooperation and growth or surgical plans with photos would be a GREAT stride forward.

Goals of Software: Assess Treatment Progress:
a. Exceptional
b. Good
1. On schedule
2. Ahead of schedule
c. Fair / behind schedule
d. Poor / delayed

Reasons for Progress Assessment:
a. Poor cooperation
1. Head gear
2. Elastics
3. Removable appliance
4. Patient disinterest
5. Other
b. Missed appointments
c. Broken appliances
d. Lost appliance
e. Adverse biological response
f. Unexpected complexity of case
g. Other

Modification of treatment based on progress assessment and reasons:
a. In need of jaw surgery U/L/both
b. TMJ surgery R/L/B
c. Extraction considerations
d. Parent consult
1. Modify treatment approach
2. Alter fee
3. Stop treatment
4. Other

AUTO DETECTION CEPHALOMETRIC LANDMARKS

Through automated local image analysis, the software simplifies the operator task of designating landmarks and traces. For example, when tracing an intensity edge in an image, as long as the user maintains the pointer in the general vicinity of the edge, the software automatically finds the edge and traces it without relying on precise pointer movement by the user.

Patient Presentation

Generic Presentation: To demonstrate possible treatment options, patient education about orthodontics using a generic patient.

Custom Presentation: Demonstrate possible treatment options and outcomes using the patient's 3D anatomy.

Take Home Presentation: Create a limited software program that will display a 3D model of the patient with the ability to do some minor enhancements (smile data bank). Output to floppy disc, video tape.
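The edge-assisted tracing described under AUTO DETECTION above can be illustrated with a much-simplified sketch: within a small window around the pointer, pick the pixel with the strongest local intensity change. The window size, the horizontal-difference gradient proxy, and the function name are assumptions for illustration, not the disclosed algorithm.

```python
def snap_to_edge(image, pointer, radius=3):
    """Snap a pointer position to the strongest intensity edge nearby.

    `image` is a 2D list of gray values; the pixel within `radius` of
    `pointer` (row, col) with the largest horizontal intensity change
    is returned, so the user need not place the pointer precisely."""
    rows, cols = len(image), len(image[0])
    pr, pc = pointer
    best, best_score = pointer, -1.0
    for r in range(max(pr - radius, 0), min(pr + radius + 1, rows)):
        for c in range(max(pc - radius, 1), min(pc + radius + 1, cols)):
            score = abs(image[r][c] - image[r][c - 1])  # gradient proxy
            if score > best_score:
                best, best_score = (r, c), score
    return best

# A sloppy pointer near a vertical edge snaps onto the edge:
img = [[0, 0, 10, 10], [0, 0, 10, 10], [0, 0, 10, 10]]
snapped = snap_to_edge(img, (1, 0))
```

A production tracer would use a 2D gradient and follow the edge point-to-point; this only shows the "general vicinity is good enough" idea.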
Arch Length Analysis and Tooth Size Discrepancy Analysis

Arch length analysis is a critical diagnostic measurement, as it can determine diagnostic decisions of extraction of permanent teeth to correct certain orthodontic problems vs. non-extraction decisions. The teeth can fit within the supporting bone (alveolar bone) of the upper and lower jaw structure. The alveolar bone is the supporting bone that surrounds the roots of the teeth. The basal bone is the main supporting structure for the jaws. The basal bone of the lower jaw (mandible) is limited in size by its genetic potential and has limited ability for growth modification. There are possible growth modification procedures, such as functional jaw orthopedics, that have some limited growth modification potential. The basal bone supports the alveolar bone, which supports the teeth. The alveolar bone has the potential for adaptation to the positions of the teeth and can be modified as long as the teeth are kept within the limits of the basal bone. The upper jaw (maxilla) has the capability of increasing its transverse dimensions via "rapid palatal expansion" appliances. These types of orthopedic appliances not only change the alveolar bone shape and size but can also change the maxillary basal bone dimension due to "sutures" that exist in the upper jaw. The lower jaw does not have sutures associated with the mandibular skeletal structure. The maxilla is therefore capable of being increased in size to allow more room for crowded or crooked teeth to be aligned into a "normal" occlusal fit. Extraction vs. non-extraction of permanent teeth, and the decision for a surgical solution (adult) vs. growth modification (child) to resolve "arch length" problems, is a major diagnostic decision that the orthodontist and/or oral surgeon can make. Extraction vs.
non-extraction decisions have traditionally been based on the space requirements of the mandible, due to its inability to be changed significantly. Significant arch discrepancy in the lower arch may require extraction of selected permanent teeth to resolve the crowding problem; the orthodontist can then decide which teeth can be removed in the upper jaw, if any, to create a "normal" occlusal fit of the teeth. The teeth can fit into this ideal occlusion when the mandible is in a CR or CO position. The Curve of Spee and the Curve of Wilson are three-dimensional relationships of the plane of occlusion when viewed from the lateral and frontal planes respectively. The analyses of these relationships of the teeth are also included in the decision-making process of the orthodontist as far as the extraction vs. non-extraction treatment decisions. As the curves "level" out, the teeth could be positioned where there is no bone support, leading to periodontal (gum) problems. Recession and/or alveolar bone loss could occur if not properly controlled mechanically. In order for the teeth to "fit" normally at the end of treatment, the doctor can evaluate ALL 3D requirements of each arch, TMJ, bone configuration, etc. These include: 1. the sagittal dimension (length), 2. the transverse dimension (width), and 3. the vertical dimension (height). Dental extraction compensations can be accomplished in order to treat a case without surgery of the jaw structure. This compromised treatment may at times be acceptable for patients who will not accept the surgical treatment alternative or who, for medical or other reasons, are not candidates for orthognathic surgical procedures.

Tooth Size Discrepancy: The sizes of the individual teeth, as they are positioned around the "catenary" type curve of the arch, take up space.
The relative sizes of each tooth type (molars, bicuspids, cuspids, incisors) can be interrelated appropriately or the occlusion of the teeth will not fit properly at the end of treatment. If a discrepancy exists in the relative sizes of certain teeth in the arch, then a so-called "Bolton tooth size discrepancy" exists. This tooth size discrepancy can also affect the fit of the occlusion between the opposing arches. It is necessary to know the mesial/distal, buccal/lingual, and height measurements, along with the root lengths of the individual teeth. The root length relates to biomechanical tooth movement considerations. Bolton tooth size discrepancies are created when there is a mismatch in the size of teeth within the respective arch. This creates a problem of alignment and proper fit of the occlusion. Knowing these discrepancies prior to treatment is critical for orthodontic diagnosis. Limitations in treatment need to be related to the patient as a part of their informed consent. Small lateral incisors, abnormal shape & form, and congenital absence are a few problems that create a compromised end result. Restorative dental procedures to correct some of these discrepancies need to be planned prior to treatment so the patient will be informed and expect follow-up care. Relapse of teeth after orthodontic correction is a major consideration in orthodontic therapy. Many elaborate treatment alternatives have been devised to control relapse. The ability to three-dimensionally diagnose and treatment plan a patient may lead to improved retention of orthodontically treated cases.

Level of the Curve of Spee: The Curve of Spee is a curve of the occlusal plane as seen from the lateral view. The Curve of Wilson is the curve or construction of the occlusal plane as viewed from the frontal view. The treatment of these two "curves" is important to the eventual final result of the occlusion.
Orthodontist usually "flatten" these curves during treatment for occlusal correction. Uprighting the Curve of Wilson can lead to increased arch length and help to gain space for crowded teeth, up to the limit of the alveolar bone, cortical bone, and basal bone. Leveling the Curve of Spee is a routine orthodontic biomechanical effect of treatment. This leads to a better fit of the occlusion when the Curve of Spee is leveled. This curve tends to deepen slightly with age, so orthodontists routinely "over correct" the leveling of this curve to a level occlusal plane three-dimensionally. The mathematical relationship exists to a flatten Curve of Spee. This flattening determines the incisal edges of the anterior teeth at one end of the arch and the disto buccal scups tips of the lower second molars on the other end of the arch. By using the X.Y and Z coordinates of the occlusal surfaces of the teeth, a calculation of the arch circumference can be determined. The distance between any 2 points, A - 93 - WO 99/59106 PCT/US99/10566 (xl,y 1 ,zl) & B(x 2 ,Y2,Z 2 ) in space is the magnitude of the vectors connecting them and is give by: AB = (X 2
-X
1
)
2 + (Y 2 -Y 1
)
2 + (Z 2 - Z 1
)
2 Each tooth coordinate measurement represents a point in space. The total arch circumference is the magnitude of the summation of all vectors connecting their points and given by: Ct = (Xi-Xj) 2 + (YYj) 2 + (Zi- Zj) 2 Ct represents the total arch circumference in 3D space and N is the number of teeth measured. To compare to 2D relationships, the planer projection of the total arch circumference is calculated using a similar method except the depth coordinate (Z) i.e., depth of Spee is excluded. CP = (Xi-Xj) 2 + (YYj) 2 Cp represents the planer projection of the total arch circumference to a lateral 2D projected view. Asymmetry Analysis An asymmetry analysis defines the morphology differences between the right and left halves the mandible, maxilla and other regions of the skeleton. The synunmmetry of these structures should be determined through the use of landmark groupings. The procedure may include determination of the sagittal plane midline of the patient by utilizing identifying midline landmarks. The sagittal plane midline can be used to define - 94 - WO 99/59106 PCT/US99/10566 the right and left halves of the patient. The simplest symmetry analysis would be begin with the superimposition of the right and left halves of the mandible utilizing the sagittal plane midline reference as the registration plane. The quantification of the asymmetry would be to compare the x,y,z differences in location of the corresponding right and left landmarks and to compare the location of these landmarks to the normal landmark location (by standard deviation). This type of analysis would allow the end user to quantify the amount of asymmetry and direct the end user to etiology of the asymmetry. For example, when the mandible is asymmetric then it is safe to assume that one side is too small or the contralateral side is too large. Comparison of the patient data with normal data would allow the clinician to determine which side was abnormal. 
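The arch-circumference formulas (Ct and Cp) described earlier translate directly into code: sum the straight-line distances between consecutive tooth reference points, optionally dropping the depth coordinate for the planar projection. A minimal sketch follows; the coordinate values are illustrative only, not patient data.

```python
import math

def arch_circumference(points, include_depth=True):
    """Sum of straight-line distances between consecutive tooth
    reference points (x, y, z), ordered around the arch.

    include_depth=False drops the z (depth-of-Spee) coordinate,
    giving the planar projection Cp instead of the 3D length Ct."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(points, points[1:]):
        if include_depth:
            total += math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
        else:
            total += math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    return total

# Illustrative occlusal points (mm), not patient data:
pts = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (6.0, 8.0, 2.0)]
ct = arch_circumference(pts)         # total 3D circumference, Ct
cp = arch_circumference(pts, False)  # planar (lateral) projection, Cp
```

Ct is always at least as large as Cp; the difference reflects the depth of the Curve of Spee along the arch.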
Knowledge of the etiology of asymmetry may be critical in controlling or predicting the outcome of treatment. Additional analysis may include a normalization of the 3D data to the standard ABO 2D views and performing an analysis using existing analysis models. Tools may be created to allow the end user to create a symmetry analysis.

Fit to Stock Object

The spatially calibrated cephalometric views can be overlaid on the stock object. The stock object will be customized to fit the cephalometric data either by drag and drop of a stock object landmark to the corresponding ceph landmark (edited in all available projection views) or by digitizing the landmarks on the ceph and then making the association and customization of the stock object through the IMS database. A combination of the two customization methods can be used. In addition, non-landmark cephalometric data may also be associated with the stock objects. The non-landmark data of most interest are the ridge crests (i.e., orbital rims, rim of the nasal fossa, external borders of the mandible, etc.). These same methods may be employed for other stock objects, such as the teeth, TMJs, etc.

Highlight Abnormal

The stock objects are a graphical representation of normal. These normal values for landmark location have been determined through an analysis of the landmark locations on many patients (Burlington study) and have been sorted by age and sex of the patient. Deviations from normal can be analyzed and statistically grouped as standard deviations from normal. Through the use of the IMS database we can define normal and the standard deviations from normal for individual landmarks and landmark groupings.
Following the completion of the customization of the calibrated cephalometric projections to the stock object, the IMS database will perform an assessment of landmark locations and groupings of landmarks by comparing the patient data to normal data through look-up tables (LUT) contained in the IMS database. After this analysis the computer can highlight in color, on the customized stock object, the landmarks or landmark groupings that deviate 1 or more standard deviations from normal. A color can be assigned to indicate the severity of the deviation (1, 2 or 3 standard deviations). This will be a very visual method for the end user to identify the locations of the abnormal growth patterns and to convey this information to the patient or colleagues. This analysis can serve as a basis for treatment planning and save the clinician time by achieving a rapid analysis and an easy method for patient communication. No look-up tables are required by the doctor.

Segment Landmark Groups

Landmark groupings (3 or more landmarks) can be used to describe the size and location of an anatomic structure relative to the remaining structures. The segmentation of these groupings allows them to be treated as objects. These landmark groupings are used in analysis of facial growth and development, temporal monitoring of growth and development, superimposition of serially acquired data sets, growth forecasting and creating visual treatment objectives. The MedScape software can have a standard set of groupings that match existing groupings. These current landmark groupings can be extended into the 3D domain. In addition, the ability for the end user to define additional groupings can be available.
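The standard-deviation color coding described under Highlight Abnormal can be sketched as below. The landmark name, normative mean/SD values, and the color ramp are hypothetical illustrations, not values from the disclosure or any normative study.

```python
def deviation_severity(value, mean, sd):
    """Whole standard deviations a patient measurement lies from the
    normative mean, capped at 3 (matching a 1/2/3-SD color scheme)."""
    z = abs(value - mean) / sd
    return min(int(z), 3)

# Hypothetical ramp: 0 = within 1 SD (normal), 1-3 = increasing severity.
SEVERITY_COLORS = {0: "green", 1: "yellow", 2: "orange", 3: "red"}

def highlight(landmarks, norms):
    """Map each landmark name to a display color.

    `landmarks` maps name -> measured value; `norms` maps
    name -> (normative mean, standard deviation)."""
    return {name: SEVERITY_COLORS[deviation_severity(v, *norms[name])]
            for name, v in landmarks.items()}

# Illustrative numbers only: a measurement 2 SD above its mean.
colors = highlight({"gonial_angle": 134.0}, {"gonial_angle": (126.0, 4.0)})
```

In the described system the (mean, sd) pairs would come from the IMS look-up tables rather than being passed in by the clinician.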
Convert to CR

Converting CO to CR requires digital images at two or more visual viewing angles of the face, with embedded tracking markers that define the 3D spatial attitude of the maxilla and another set of markers that define the mandible. One set of images is obtained with the teeth in CO and another set in CR. The MedScape software will track the maxillary and mandibular markers and will compute the relative changes in their spatial location relative to each other. The location of these tracking markers can be spatially cross-calibrated with the cephalometric 3D input data. The cephalometric landmark data that describes the mandible can be segmented from the remaining data. The CO-CR tracking data will be applied to the cross-calibrated cephalometric data to move the mandible to CR.

Gnathological Normal

Gnathological normal refers to the cusp-fossa spatial relationships, the tooth-to-tooth relationships among and between the maxillary and mandibular teeth, and tooth position relative to the supporting alveolar and basal bone. The tooth and its 3D location and spatial orientation relative to the tooth long axis can be defined through tracking of landmarks located on the root apices or apex, cusp tip(s) or incisal edge, and the mesial and distal greatest heights of contour. This specialized segmentation of teeth allows them to function as objects. A database that represents gnathologically normal teeth can be used when rendering the stock object teeth in combination with the skeleton. Deviations from the gnathological normal can be described in a similar fashion to the method used for cephalometric analysis. A pseudo-colored visual display of the anatomy that falls outside the statistical normal will facilitate a quick identification of abnormal tooth position, etc.

Airway Analysis

The airway can be divided into the nasal airway, the nasopharynx and the oropharynx.
The nasal airway begins at the anterior opening of the nasal passage and ends at the posterior margin of the nasal fossa. The nasopharynx begins at the posterior margin of the nasal fossa and ends at the most inferior area of the soft palate. The oropharynx begins at the inferior margin of the soft palate and ends at the superior margin of the vallecula. An airway analysis includes a mathematical description of nasal septum symmetry about the midline, the size of the inferior turbinates, and the anteroposterior dimensions of the oral and nasal pharynx, as well as the presence of adenoids, pharyngeal tonsils, tongue volume, soft palate length, and curvature of the airway.

Photographs for Cephalometric Module

Fuse with Ceph

A fusion of 3D cephalometric and 3D facial soft tissue data in the same database will be available. This fusion will provide the ability to analyze and visualize the spatial relationships of the soft tissues and the hard tissues. The fusion will occur in the IMS database.

Measure Soft Tissue

Selected features within the soft tissue database can be measured. This would include soft tissue thickness and point-to-point surface measurement. The thickness of the soft tissue is the distance from the skin surface to the nearest point on the surface of the skeleton. The soft tissue thickness overlying the surface skeletal landmarks can be measured, in addition to other landmarks (to be determined) that define the critical soft tissue elements that are modified as a result of aging, normal growth and development, orthodontic treatment and orthognathic surgery. A special database can be created in the IMS to sequester the soft tissue thickness data for future analysis and to determine the normal and abnormal soft tissue features and their dynamic relationships to the subjacent skeleton.

Output of Formatted Photos

The ABO has a standard for intra- and extra-oral photographs. The IMS can satisfy the ABO format.
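The soft-tissue thickness measurement described under Measure Soft Tissue (distance from a skin point to the nearest point on the skeletal surface) can be sketched as a brute-force nearest-neighbor search over a sampled surface. This is an illustration under the assumption that the skeleton is represented as a point cloud, not the IMS implementation.

```python
import math

def soft_tissue_thickness(skin_point, skeleton_points):
    """Distance from a skin-surface point to the nearest point of a
    sampled skeletal surface; both are (x, y, z) tuples in the same
    units and the same patient-centric co-ordinate system."""
    return min(math.dist(skin_point, p) for p in skeleton_points)

# Illustrative coordinates (mm): nearest skeletal sample is 1 mm away.
thickness = soft_tissue_thickness((0.0, 0.0, 0.0),
                                  [(3.0, 4.0, 0.0), (1.0, 0.0, 0.0)])
```

For dense surface meshes a spatial index (k-d tree) would replace the linear scan, and point-to-triangle distance would be more accurate than point-to-vertex.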
Because of the database, the formatted photographs can be identified through the database. For example, in the 8-up series, the upper left photograph is a lateral view of the head, the upper center view is a frontal view of the face and the upper right view is a frontal smile view of the face. The database can identify the content of the mount when a standard mount format is used. This would allow the end user to search the database and retrieve images by their attributes. For example, the end user may request that all lateral face photographs be retrieved and displayed in order by date extending from right to left. The mounted images can be output to any standard print device supported by MS Windows 95, such as a dye sublimation printer.

Landmark Tracking over Time

Knowledge of the amount of change in landmark location over time is critical when determining normal growth and development and treatment outcomes. Analysis of a change in position of the landmarks over time requires that pre-selected landmarks or landmark groupings be segmented from the remaining landmarks for an independent analysis. For example, determining the changes that occurred in the mandible over time requires that the landmarks used to describe the mandible be segmented from the other landmarks (maxilla, etc.) and that the mandible at time A be superimposed over the mandible at time B. The superimposition occurs in 3D and the registration of the time A and time B mandibles can be defined. The two mandibles may be superimposed over a mathematically constructed sagittal plane midline, or the mandibular canals, or the inferior borders of the mandible, or the condyles, etc. The quantification of change will occur through the 3D tracking of the location of the associated landmarks on mandibles A and B. These differences will be tracked and recorded in the IMS database. This IMS database can be used to assist in developing 3D normative data.
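The time-A/time-B landmark comparison just described can be sketched as a per-landmark displacement computation, assuming the two data sets have already been registered (e.g., on the midsagittal plane). Landmark names and coordinates below are illustrative only.

```python
import math

def landmark_changes(time_a, time_b):
    """Per-landmark 3D displacement between two registered data sets.

    Each argument maps landmark name -> (x, y, z); both sets are
    assumed to already share the same registration, so the distance
    between corresponding points is the growth/treatment change."""
    return {name: math.dist(p, time_b[name])
            for name, p in time_a.items() if name in time_b}

# Illustrative mandibular landmarks (mm) at two time points:
a = {"menton": (0.0, -60.0, 0.0), "pogonion": (0.0, -55.0, 8.0)}
b = {"menton": (0.0, -63.0, 0.0), "pogonion": (0.0, -55.0, 12.0)}
changes = landmark_changes(a, b)
```

Recording these per-landmark distances (and their direction vectors, if needed) over many patients is what would populate the 3D normative database mentioned above.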
Appendix F

The following describes alternative embodiments of the invention.

The problems: Accurate 3-dimensional models are not available to all segments of medicine and dentistry because preferred image acquisition tools may not provide data in a format that is easily converted to 3D. When considering 3D techniques, the data density is so great that it requires special platforms for data handling and complex algorithms for data reduction. User interface, etc.

Summary: Three basic imaging software modules (Sculptor, Clinician and Executor) comprise the Acuscape suite of software designed for medical use. These software packages are further customized with application-specific software to provide benefit to specific medical and dental disciplines. In combination, these three software packages and associated application software produce spatially accurate 3-dimensional replicas (.pro files) of patient anatomy that allow for the extraction of clinically relevant data, and for .pro file manipulation, storage, measurement, modification and display for purposes that include diagnosis, treatment planning, treatment simulation and outcomes measurements. The Sculptor is used at an image processing center (server) and passes the acquired images and measurement files (.scl files) to the Clinician user (client) for the generation of the .pro file and subsequent use.

Sculptor Module: Images are acquired directly into a patient session file from input devices that include digital cameras, flat bed scanners, x-ray sensors, etc., or from image storage files. An Acuscape image calibration frame is worn during image acquisition and shadows of the calibration markers are embedded on the resultant images. The images are first spatially calibrated and a patient-centric co-ordinate system is transferred to the images. This co-ordinate system is adjusted or optimized to best fit the patient's anatomy.
Part of this adjustment superimposes the y-z plane of the co-ordinate system on the patient's midsagittal plane. The subsequent measurements store data utilizing this constructed co-ordinate system. The calibrated images can be stored by the Executor or displayed and measured. Multiple images or image sets can be combined in a common 3D database, displayed as a combined set, and selectively enhanced for improved measurements. These enhancements include magnification and equalization of selected image regions. Three-space measurements can be performed as point, closed loop trace and linear trace measurements. The measurement routine occurs simultaneously on all images displayed in the Sculptor. The selected image is measured and a corresponding epipolar line is constructed on the adjacent images to assist with locating the same point on those images. The x, y and z locations of all of the measurement points and lines (series of points) are stored in a measurement file. The measurement files are converted to an export file that contains all .jpg images and a .scl file. The .scl files contain the calibration information, camera parameters and the x, y and z locations of all traces and landmarks. The Sculptor facilitates the measurements on all calibrated or cross-calibrated images. Cross calibration refers to calibrating multiple images and image types to the same 3D co-ordinate system. These images can include but are not limited to x-rays, tomographs, CT scans, visual band images, MRI, ultrasound, infrared and radar. The image type and projection angulation will be dictated by the intended purpose of the imaging study. This is an application-specific program that has been optimized to facilitate the original imaging goals. The Sculptor will be used to calibrate and measure the images (spatially, color or gray scale value).
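The epipolar-assisted measurement workflow above implies recovering a 3D point from its marked locations in two calibrated images. A standard linear (DLT-style) least-squares triangulation is sketched below; the 3x4 projection matrices would come from the calibration-frame markers, and this is an illustration of the geometry, not the Sculptor's actual algorithm.

```python
def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its (u, v) positions in two spatially
    calibrated images.  P1, P2 are 3x4 projection matrices (lists of
    four-number rows); the over-determined 4x3 linear system is solved
    via the normal equations and Cramer's rule (stdlib only)."""
    rows, rhs = [], []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        for coeff, row in ((u, P[0]), (v, P[1])):
            # coeff * (P[2] . X) = row . X, with X = (x, y, z, 1)
            rows.append([coeff * P[2][k] - row[k] for k in range(3)])
            rhs.append(row[3] - coeff * P[2][3])
    # Normal equations: (A^T A) x = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(3)]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(ata)
    out = []
    for i in range(3):             # Cramer's rule, one coordinate at a time
        m = [row[:] for row in ata]
        for r in range(3):
            m[r][i] = atb[r]
        out.append(det3(m) / d)
    return tuple(out)

# Two toy cameras: one at the origin, one shifted 1 unit along x,
# both viewing the point (1, 2, 5).
pt = triangulate([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]],
                 [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]],
                 (0.2, 0.4), (0.0, 0.4))
```

With noisy image measurements the two rays no longer intersect exactly; the least-squares solution returns the best-fit point, which is why the formulation is over-determined rather than a simple two-line intersection.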
Executor Module: This module works in the background to manage images for the Sculptor and Clinician modules. This is a patient-centric relational database with multiple tables that stores, retrieves and transports patient image files. Patient transport files contain the .jpg images and a .scl file. The .scl file is created by the Sculptor and transported to the Clinician by any number of means including modem, internet, floppy, etc.

Clinician Module: This module is intended to exist primarily in the doctor's office (end user). The Clinician will receive the transport file from the Sculptor via the Executor. The patient-specific measurements contained within the .scl file are used by the morph editor, a sub-section of the Clinician, to morph a "stock model" to spatially match the patient's measurements. The "stock model" is a generic wireframe representation of the anatomy to be modeled. The measurements include specific linear traces, closed traces, landmarks and control points. The measurement locations are pre-programmed into the Sculptor and the corresponding locations are programmed to the corresponding points on the stock wireframe of the Clinician. For example, the orthodontic application includes traces of ridge crests (orbits, mandibular borders, etc.), landmarks (nasion, sella, etc.) and control points (tooth cusp tips, etc.). These measurement locations and the names associated with their precise locations on the "stock model" are contained within the Clinician's database. The Clinician's knowledge of the precise wireframe vertices associated with the locations of landmarks, traces and control points facilitates the automation of using the .scl file to morph a generic stock model to a patient-specific model (.pro file). A specific .pro file can be retrieved via the Executor and displayed, analyzed and manipulated as a solid model in the Clinician module. The .pro file will exist as a collection of anatomic "objects". These objects will include anatomic structures, such as each tooth, landmarks and reference planes. The spatial locations of all objects are known and tracked by the Clinician's database. The .pro file possesses an x, y, z co-ordinate system referred to as the global co-ordinate system, while each object possesses its own co-ordinate system referred to as a local co-ordinate system. The Clinician's database monitors the spatial location of the .pro file and its sub-objects via their co-ordinate locations. The .pro file and/or any of its objects can be translated or rotated along their co-ordinate axes. The movements will occur along the default global co-ordinate system unless an object or group of objects has been selected, in which case the movement occurs along the selected local co-ordinate system. The orthodontic application uses a stock model of the head and portions of the neck. This stock model includes, but is not limited to, the associated skeleton, facial soft tissues, temporomandibular joints and teeth. In the examples shown, this model currently contains more than 300 objects that can be manipulated in the Clinician module to facilitate the kinds of tasks routinely undertaken by an orthodontist.

Claims (28)

The Claims

What is claimed is:
1. An apparatus for calibrating medical images of a patient comprising: a headgear for mounting to a patient's head, and a plurality of calibration targets mounted on the headgear.
2. The apparatus of claim 1 in which the headgear is size adjustable.
3. The apparatus of claim 1 in which the headgear comprises a rigid portion to which the plurality of calibration targets are mounted.
4. The apparatus of claim 3 in which the plurality of calibration targets are mounted so as to reduce the amount of overlapping when an image of the
apparatus is captured.
5. The apparatus of claim 1 in which the plurality of calibration targets are spherical.
6. The apparatus of claim 1 in which the plurality of calibration targets comprise at least one of BBs and bearings.
7. A computer apparatus comprising: a computer having a processor and memory; and
software for execution on the processor, comprising a module for receiving
image data for a patient, for establishing a reference frame to relate anatomic locations of the patient, and for generating a 3D patient specific model from a stock model using the related anatomic locations.
8. The computer apparatus of claim 7 in which the module creates a data file
containing patient specific information for transfer to another module.
9. A computer apparatus comprising: a computer having a processor and memory; and software for execution on the processor, comprising a module for receiving
patient specific images and patient specific information, the software having access to a generic three-dimensional model, the software for
customizing the generic three-dimensional model using the patient specific information to form a customized three-dimensional model of at least a portion of the patient's anatomy.
10. The apparatus of claim 9 in which location of one or more particular three- dimensional model vertices is changed in accordance with the patient specific
information.
11. The apparatus of claim 10 in which the locations of other three-dimensional model vertices are changed to be conform with the change in location of the
particular three-dimensional model vertices.
12. The apparatus of claim 9 in which the generic three-dimensional model
comprises a plurality of objects each having an individual object coordinate system referenced to a model coordinate system.
13. The apparatus of claim 12 in which the software further comprises a three- dimensional model viewer capable of manipulating the objects individually or collectively.
14. A method of capturing and handling medical images, the method comprising
the steps of: mounting a calibration frame on a patient; capturing images of the patient and the calibration frame from different perspectives; and
storing images from a patient resulting from a plurality of sessions in
respectively separate portions of a file management system so that all patient related data is available in a single entity.
15. A method of processing medical images comprising using corresponding
points on different images of a patient to establish a patient centric coordinate system with respect to which a three-dimensional model is referenced.
16. The method of claim 15 in which the corresponding points are determined
using images of a calibration target.
17. The method of claim 16 in which images of a calibration target are
determined using blob analysis.
18. The method of claim 15 in which the corresponding points are used to
identify the location of a calibration target in three-dimensional space.
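One standard way to recover the three-dimensional location of a calibration target from its corresponding image points, as claim 18 recites, is linear (DLT) triangulation. The sketch below assumes the 3x4 projection matrices of the two views are known from calibration; this is an illustrative technique choice, not a statement of the patent's method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation. P1, P2: 3x4 projection matrices for the
    two views; x1, x2: (u, v) image coordinates of the same target.
    Returns the target's 3D location as a length-3 array."""
    # Each image point contributes two homogeneous linear constraints.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```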
19. The method of claim 15 further comprising the step of referencing points on the medical images to the patient centric coordinate system.
20. The method of claim 15 further comprising the step of modifying the
location of particular points on the three-dimensional model to correspond to patient specific information and morphing the three-dimensional model to adjust
the locations of other points correspondingly.
21. A method of analyzing patient images comprising the steps of:
identifying one or more anatomical locations in a plurality of patient images from a particular patient imaging session;
determining anatomical location information relative to a reference frame established from the plurality of patient images; and
using the anatomical location information for patient treatment planning and execution.
22. The method of claim 21 in which an anatomical location is identified by a plurality of points constituting an outline trace of a portion of the patient's
anatomy.
23. A method of identifying corresponding points in a plurality of images which
constitute different views of a three-dimensional space, the method comprising the steps of:
determining a common reference frame for the plurality of images; selecting one point on one image;
generating a line through that point;
displaying a projection of the line in at least one of the other of the plurality
of images; and using the projection of the line in at least one of the other of the plurality of images to identify a corresponding point in that image.
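The line projection recited in claim 23 corresponds to the epipolar constraint of two-view geometry: a point selected in one image defines a line in each other image along which the corresponding point must lie. Assuming a fundamental matrix F has been estimated from the common reference frame (an assumption for illustration; the claims do not name this formulation), the search line can be computed as:

```python
import numpy as np

def epipolar_line(F, x):
    """Given fundamental matrix F relating view 1 to view 2 and a point
    x = (u, v) selected in view 1, return coefficients (a, b, c) of the
    projected search line a*u' + b*v' + c = 0 in view 2."""
    l = F @ np.array([x[0], x[1], 1.0])
    # Normalize so |a*u' + b*v' + c| is the point-to-line distance in pixels.
    n = np.hypot(l[0], l[1])
    return l / n if n > 0 else l
```

Displaying this line overlaid on the second image lets the operator pick the corresponding point with a one-dimensional rather than two-dimensional search.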
24. A method of identifying the location of a point in a three-dimensional space using two dimensional images which are substantially projections of a three-
dimensional model, comprising the steps of: selecting a particular point on one of the two dimensional images; selecting the same point on at least one other of the two dimensional images;
identifying the model vertex associated with the particular point on the
images; determining the selected point location by interpolating between known
locations of model vertices based on interpolation using pixel count between the particular point and one or more sets of adjacent vertices.
25. The method of claim 24 in which distance between two points is determined
by identifying the locations of the two points and determining line length using
the coordinates determined for those points.
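The interpolation of claim 24 and the distance measurement of claim 25 can be sketched as follows (an illustrative reading, with hypothetical argument names; the claims do not prescribe this exact form): a point between two model vertices is located by the fraction of the on-image pixel distance it covers, and a measurement is then the Euclidean length between two such located points.

```python
import math

def interpolate_point(v_a, v_b, pixels_to_point, pixels_total):
    """Estimate a 3D location between known model vertices v_a and v_b
    using the pixel-count fraction covered from v_a toward v_b."""
    t = pixels_to_point / pixels_total
    return tuple(a + t * (b - a) for a, b in zip(v_a, v_b))

def distance(p, q):
    """Euclidean distance between two located 3D points (as in claim 25)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```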
26. A system for using patient image information, comprising:
a computer configured to customize a generic three-dimensional model using patient specific information; a network linking the computer to a second computer; and the second computer configured to receive the patient specific information and to customize a locally stored generic three-dimensional model using the patient information.
27. A computer program product, comprising: a memory medium; and a computer program stored on the memory medium, the computer program
comprising instructions for using corresponding points on different
images of a patient to establish a common reference frame, the computer
program further for determining the relative location of anatomical locations of the anatomy of the patient using the different images and the common reference frame.
28. A computer program product, comprising:
a memory medium; and a computer program stored on the memory medium, the computer program
comprising instructions for identifying one or more anatomic locations in a
plurality of patient images from a particular patient imaging session,
calculating anatomic location information in a common reference frame
established from the plurality of images, and using the location information for patient treatment planning.
AU40769/99A 1998-05-13 1999-05-13 Method and apparatus for generating 3d models from medical images Abandoned AU4076999A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US8537298P 1998-05-13 1998-05-13
US60085372 1998-05-13
PCT/US1999/010566 WO1999059106A1 (en) 1998-05-13 1999-05-13 Method and apparatus for generating 3d models from medical images

Publications (1)

Publication Number Publication Date
AU4076999A true AU4076999A (en) 1999-11-29

Family

ID=22191192

Family Applications (1)

Application Number Title Priority Date Filing Date
AU40769/99A Abandoned AU4076999A (en) 1998-05-13 1999-05-13 Method and apparatus for generating 3d models from medical images

Country Status (4)

Country Link
EP (1) EP1027681A4 (en)
AU (1) AU4076999A (en)
CA (1) CA2296274A1 (en)
WO (1) WO1999059106A1 (en)

Families Citing this family (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7234937B2 (en) 1999-11-30 2007-06-26 Orametrix, Inc. Unified workstation for virtual craniofacial diagnosis, treatment planning and therapeutics
DE10009166A1 (en) * 2000-02-26 2001-08-30 Philips Corp Intellectual Pty Procedure for the localization of objects in interventional radiology
FI109653B (en) * 2000-10-11 2002-09-30 Instrumentarium Corp Method and apparatus for photographing a patient's head portion
EP2039321B1 (en) 2000-11-08 2013-01-02 Institut Straumann AG Surface recording and generation
US7715602B2 (en) 2002-01-18 2010-05-11 Orthosoft Inc. Method and apparatus for reconstructing bone surfaces during surgery
AU2003201572A1 (en) * 2002-01-16 2003-09-02 Orthosoft Inc. Method and apparatus for reconstructing bone surfaces during surgery
EP1348394B1 (en) 2002-03-27 2006-02-22 BrainLAB AG Planning or navigation assistance by generic obtained patient data with two-dimensional adaptation
ATE357190T1 (en) 2002-03-27 2007-04-15 Brainlab Ag MEDICAL NAVIGATION OR PRE-OPERATIVE TREATMENT PLANNING SUPPORTED BY GENERIC PATIENT DATA
US7787932B2 (en) 2002-04-26 2010-08-31 Brainlab Ag Planning and navigation assistance using two-dimensionally adapted generic and detected patient data
CN1308897C (en) * 2002-09-15 2007-04-04 深圳市泛友科技有限公司 Method for forming new three-dimensional model using a group of two-dimensional photos and three-dimensional library
FI113615B (en) 2002-10-17 2004-05-31 Nexstim Oy Three-dimensional modeling of skull shape and content
US20040086082A1 (en) * 2002-11-05 2004-05-06 Eastman Kodak Company Method for automatically producing true size radiographic image
DE10252298B3 (en) 2002-11-11 2004-08-19 Mehl, Albert, Prof. Dr. Dr. Process for the production of tooth replacement parts or tooth restorations using electronic tooth representations
US20040166462A1 (en) 2003-02-26 2004-08-26 Align Technology, Inc. Systems and methods for fabricating a dental template
ITRM20030184A1 (en) * 2003-04-22 2004-10-23 Provincia Italiana Della Congregazi One Dei Figli METHOD FOR AUTOMATED DETECTION AND SIGNALING
US7873403B2 (en) 2003-07-15 2011-01-18 Brainlab Ag Method and device for determining a three-dimensional form of a body from two-dimensional projection images
EP1570800B1 (en) * 2004-03-01 2007-04-11 BrainLAB AG Method and device for determining the symmetrical plane of a three dimensional object
US7477776B2 (en) 2004-03-01 2009-01-13 Brainlab Ag Method and apparatus for determining a plane of symmetry of a three-dimensional object
GB0504172D0 (en) * 2005-03-01 2005-04-06 King S College London Surgical planning
WO2006116488A2 (en) * 2005-04-25 2006-11-02 Xoran Technologies, Inc. Ct system with synthetic view generation
WO2007017642A1 (en) * 2005-08-05 2007-02-15 Depuy Orthopädie Gmbh Computer assisted surgery system
US8862200B2 (en) 2005-12-30 2014-10-14 DePuy Synthes Products, LLC Method for determining a position of a magnetic source
US7525309B2 (en) 2005-12-30 2009-04-28 Depuy Products, Inc. Magnetic sensor array
US9173661B2 (en) 2006-02-27 2015-11-03 Biomet Manufacturing, Llc Patient specific alignment guide with cutting surface and laser indicator
US7967868B2 (en) 2007-04-17 2011-06-28 Biomet Manufacturing Corp. Patient-modified implant and associated method
US20150335438A1 (en) 2006-02-27 2015-11-26 Biomet Manufacturing, Llc. Patient-specific augments
US9918740B2 (en) 2006-02-27 2018-03-20 Biomet Manufacturing, Llc Backup surgical instrument system and method
US8133234B2 (en) 2006-02-27 2012-03-13 Biomet Manufacturing Corp. Patient specific acetabular guide and method
US8858561B2 (en) 2006-06-09 2014-10-14 Biomet Manufacturing, LLC Patient-specific alignment guide
US8864769B2 (en) 2006-02-27 2014-10-21 Biomet Manufacturing, Llc Alignment guides with patient-specific anchoring elements
US9113971B2 (en) 2006-02-27 2015-08-25 Biomet Manufacturing, Llc Femoral acetabular impingement guide
US8092465B2 (en) 2006-06-09 2012-01-10 Biomet Manufacturing Corp. Patient specific knee alignment guide and associated method
US8377066B2 (en) 2006-02-27 2013-02-19 Biomet Manufacturing Corp. Patient-specific elbow guides and associated methods
US8568487B2 (en) 2006-02-27 2013-10-29 Biomet Manufacturing, Llc Patient-specific hip joint devices
US9345548B2 (en) 2006-02-27 2016-05-24 Biomet Manufacturing, Llc Patient-specific pre-operative planning
US8608749B2 (en) 2006-02-27 2013-12-17 Biomet Manufacturing, Llc Patient-specific acetabular guides and associated instruments
US8608748B2 (en) 2006-02-27 2013-12-17 Biomet Manufacturing, Llc Patient specific guides
US9289253B2 (en) 2006-02-27 2016-03-22 Biomet Manufacturing, Llc Patient-specific shoulder guide
US8407067B2 (en) 2007-04-17 2013-03-26 Biomet Manufacturing Corp. Method and apparatus for manufacturing an implant
US9339278B2 (en) 2006-02-27 2016-05-17 Biomet Manufacturing, Llc Patient-specific acetabular guides and associated instruments
US9907659B2 (en) 2007-04-17 2018-03-06 Biomet Manufacturing, Llc Method and apparatus for manufacturing an implant
US10278711B2 (en) 2006-02-27 2019-05-07 Biomet Manufacturing, Llc Patient-specific femoral guide
US8535387B2 (en) 2006-02-27 2013-09-17 Biomet Manufacturing, Llc Patient-specific tools and implants
US8591516B2 (en) 2006-02-27 2013-11-26 Biomet Manufacturing, Llc Patient-specific orthopedic instruments
US8603180B2 (en) 2006-02-27 2013-12-10 Biomet Manufacturing, Llc Patient-specific acetabular alignment guides
US9795399B2 (en) 2006-06-09 2017-10-24 Biomet Manufacturing, Llc Patient-specific knee alignment guide and associated method
US8068648B2 (en) 2006-12-21 2011-11-29 Depuy Products, Inc. Method and system for registering a bone of a patient with a computer assisted orthopaedic surgery system
EP1959391A1 (en) * 2007-02-13 2008-08-20 BrainLAB AG Determination of the three dimensional contour path of an anatomical structure
ATE518174T1 (en) * 2007-04-19 2011-08-15 Damvig Develop Future Aps METHOD FOR PRODUCING A REPRODUCTION OF AN ENCAPSULATED THREE-DIMENSIONAL PHYSICAL OBJECT AND OBJECTS OBTAINED BY THE METHOD
EP1982652A1 (en) * 2007-04-20 2008-10-22 Medicim NV Method for deriving shape information
EP2164424B1 (en) 2007-06-29 2019-02-27 3M Innovative Properties Company Graphical user interface for computer-assisted margin marking on dentition
US8265949B2 (en) 2007-09-27 2012-09-11 Depuy Products, Inc. Customized patient surgical plan
US20090088763A1 (en) 2007-09-30 2009-04-02 Aram Luke J Customized Patient-Specific Bone Cutting Block with External Reference
US8357111B2 (en) 2007-09-30 2013-01-22 Depuy Products, Inc. Method and system for designing patient-specific orthopaedic surgical instruments
US8092215B2 (en) 2008-05-23 2012-01-10 Align Technology, Inc. Smile designer
WO2010031404A1 (en) * 2008-09-18 2010-03-25 3Shape A/S Tools for customized design of dental restorations
IT1392871B1 (en) * 2009-02-26 2012-04-02 Fiorini METHOD AND SURGICAL TRAINING APPARATUS
DE102009028503B4 (en) 2009-08-13 2013-11-14 Biomet Manufacturing Corp. Resection template for the resection of bones, method for producing such a resection template and operation set for performing knee joint surgery
US8632547B2 (en) 2010-02-26 2014-01-21 Biomet Sports Medicine, Llc Patient-specific osteotomy devices and methods
US9066727B2 (en) 2010-03-04 2015-06-30 Materialise Nv Patient-specific computed tomography guides
TWI387315B (en) * 2010-06-29 2013-02-21 Acer Inc Three dimensional liquid crystal shutter glasses
US9271744B2 (en) 2010-09-29 2016-03-01 Biomet Manufacturing, Llc Patient-specific guide for partial acetabular socket replacement
US9968376B2 (en) 2010-11-29 2018-05-15 Biomet Manufacturing, Llc Patient-specific orthopedic instruments
WO2012109596A1 (en) 2011-02-11 2012-08-16 Embrey Cattle Company System and method for modeling a biopsy specimen
WO2012117122A1 (en) * 2011-03-01 2012-09-07 Dolphin Imaging Systems, Llc System and method for generating profile change using cephalometric monitoring data
US9241745B2 (en) 2011-03-07 2016-01-26 Biomet Manufacturing, Llc Patient-specific femoral version guide
US8650005B2 (en) * 2011-04-07 2014-02-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
US8417004B2 (en) 2011-04-07 2013-04-09 Dolphin Imaging Systems, Llc System and method for simulated linearization of curved surface
US8715289B2 (en) 2011-04-15 2014-05-06 Biomet Manufacturing, Llc Patient-specific numerically controlled instrument
US9675400B2 (en) 2011-04-19 2017-06-13 Biomet Manufacturing, Llc Patient-specific fracture fixation instrumentation and method
US8956364B2 (en) 2011-04-29 2015-02-17 Biomet Manufacturing, Llc Patient-specific partial knee guides and other instruments
US8668700B2 (en) 2011-04-29 2014-03-11 Biomet Manufacturing, Llc Patient-specific convertible guides
US8532807B2 (en) 2011-06-06 2013-09-10 Biomet Manufacturing, Llc Pre-operative planning and manufacturing method for orthopedic procedure
US9084618B2 (en) 2011-06-13 2015-07-21 Biomet Manufacturing, Llc Drill guides for confirming alignment of patient-specific alignment guides
US8764760B2 (en) 2011-07-01 2014-07-01 Biomet Manufacturing, Llc Patient-specific bone-cutting guidance instruments and methods
US20130001121A1 (en) 2011-07-01 2013-01-03 Biomet Manufacturing Corp. Backup kit for a patient-specific arthroplasty kit assembly
US8597365B2 (en) 2011-08-04 2013-12-03 Biomet Manufacturing, Llc Patient-specific pelvic implants for acetabular reconstruction
US9295497B2 (en) 2011-08-31 2016-03-29 Biomet Manufacturing, Llc Patient-specific sacroiliac and pedicle guides
US9066734B2 (en) 2011-08-31 2015-06-30 Biomet Manufacturing, Llc Patient-specific sacroiliac guides and associated methods
US9386993B2 (en) 2011-09-29 2016-07-12 Biomet Manufacturing, Llc Patient-specific femoroacetabular impingement instruments and methods
US10734116B2 (en) 2011-10-04 2020-08-04 Quantant Technology, Inc. Remote cloud based medical image sharing and rendering semi-automated or fully automated network and/or web-based, 3D and/or 4D imaging of anatomy for training, rehearsing and/or conducting medical procedures, using multiple standard X-ray and/or other imaging projections, without a need for special hardware and/or systems and/or pre-processing/analysis of a captured image data
US9554910B2 (en) 2011-10-27 2017-01-31 Biomet Manufacturing, Llc Patient-specific glenoid guide and implants
KR20130046337A (en) 2011-10-27 2013-05-07 삼성전자주식회사 Multi-view device and contol method thereof, display apparatus and contol method thereof, and display system
WO2013062848A1 (en) 2011-10-27 2013-05-02 Biomet Manufacturing Corporation Patient-specific glenoid guides
US9451973B2 (en) 2011-10-27 2016-09-27 Biomet Manufacturing, Llc Patient specific glenoid guide
US9301812B2 (en) 2011-10-27 2016-04-05 Biomet Manufacturing, Llc Methods for patient-specific shoulder arthroplasty
US9408686B1 (en) 2012-01-20 2016-08-09 Conformis, Inc. Devices, systems and methods for manufacturing orthopedic implants
US9237950B2 (en) 2012-02-02 2016-01-19 Biomet Manufacturing, Llc Implant with patient-specific porous structure
US10299898B2 (en) 2012-06-15 2019-05-28 Vita Zahnfabrik H. Rauter Gmbh & Co. Kg Method for preparing a partial or full dental prosthesis
US8903496B2 (en) 2012-08-31 2014-12-02 Greatbatch Ltd. Clinician programming system and method
US8868199B2 (en) 2012-08-31 2014-10-21 Greatbatch Ltd. System and method of compressing medical maps for pulse generator or database storage
US9507912B2 (en) 2012-08-31 2016-11-29 Nuvectra Corporation Method and system of simulating a pulse generator on a clinician programmer
US9471753B2 (en) 2012-08-31 2016-10-18 Nuvectra Corporation Programming and virtual reality representation of stimulation parameter Groups
US9594877B2 (en) 2012-08-31 2017-03-14 Nuvectra Corporation Virtual reality representation of medical devices
US10668276B2 (en) 2012-08-31 2020-06-02 Cirtec Medical Corp. Method and system of bracketing stimulation parameters on clinician programmers
US9180302B2 (en) 2012-08-31 2015-11-10 Greatbatch Ltd. Touch screen finger position indicator for a spinal cord stimulation programming device
US9375582B2 (en) 2012-08-31 2016-06-28 Nuvectra Corporation Touch screen safety controls for clinician programmer
US9615788B2 (en) 2012-08-31 2017-04-11 Nuvectra Corporation Method and system of producing 2D representations of 3D pain and stimulation maps and implant models on a clinician programmer
US9259577B2 (en) 2012-08-31 2016-02-16 Greatbatch Ltd. Method and system of quick neurostimulation electrode configuration and positioning
US8983616B2 (en) 2012-09-05 2015-03-17 Greatbatch Ltd. Method and system for associating patient records with pulse generators
US9767255B2 (en) 2012-09-05 2017-09-19 Nuvectra Corporation Predefined input for clinician programmer data entry
US9204977B2 (en) 2012-12-11 2015-12-08 Biomet Manufacturing, Llc Patient-specific acetabular guide for anterior approach
US9060788B2 (en) 2012-12-11 2015-06-23 Biomet Manufacturing, Llc Patient-specific acetabular guide for anterior approach
US9839438B2 (en) 2013-03-11 2017-12-12 Biomet Manufacturing, Llc Patient-specific glenoid guide with a reusable guide holder
US9579107B2 (en) 2013-03-12 2017-02-28 Biomet Manufacturing, Llc Multi-point fit for patient specific guide
US9498233B2 (en) 2013-03-13 2016-11-22 Biomet Manufacturing, Llc. Universal acetabular guide and associated hardware
US9826981B2 (en) 2013-03-13 2017-11-28 Biomet Manufacturing, Llc Tangential fit of patient-specific guides
US9517145B2 (en) 2013-03-15 2016-12-13 Biomet Manufacturing, Llc Guide alignment system and method
KR101821284B1 (en) 2013-08-22 2018-01-23 비스포크, 인코포레이티드 Method and system to create custom products
US20150112349A1 (en) 2013-10-21 2015-04-23 Biomet Manufacturing, Llc Ligament Guide Registration
US10282488B2 (en) 2014-04-25 2019-05-07 Biomet Manufacturing, Llc HTO guide with optional guided ACL/PCL tunnels
US9408616B2 (en) 2014-05-12 2016-08-09 Biomet Manufacturing, Llc Humeral cut guide
US9561040B2 (en) 2014-06-03 2017-02-07 Biomet Manufacturing, Llc Patient-specific glenoid depth control
US9839436B2 (en) 2014-06-03 2017-12-12 Biomet Manufacturing, Llc Patient-specific glenoid depth control
US9833245B2 (en) 2014-09-29 2017-12-05 Biomet Sports Medicine, Llc Tibial tubercule osteotomy
US9826994B2 (en) 2014-09-29 2017-11-28 Biomet Manufacturing, Llc Adjustable glenoid pin insertion guide
US11147652B2 (en) 2014-11-13 2021-10-19 Align Technology, Inc. Method for tracking, predicting, and proactively correcting malocclusion and related issues
US9820868B2 (en) 2015-03-30 2017-11-21 Biomet Manufacturing, Llc Method and apparatus for a pin apparatus
US10226262B2 (en) 2015-06-25 2019-03-12 Biomet Manufacturing, Llc Patient-specific humeral guide designs
US10568647B2 (en) 2015-06-25 2020-02-25 Biomet Manufacturing, Llc Patient-specific humeral guide designs
WO2017011337A1 (en) * 2015-07-10 2017-01-19 Quantant Technology Inc. Remote cloud based medical image sharing and rendering
FR3048541B1 (en) * 2016-03-01 2019-09-27 Lyra Holding VIRTUAL CHANGE OF A PERSON'S TOOTH
US10722310B2 (en) 2017-03-13 2020-07-28 Zimmer Biomet CMF and Thoracic, LLC Virtual surgery planning system and method
US11207135B2 (en) 2017-07-12 2021-12-28 K2M, Inc. Systems and methods for modeling spines and treating spines based on spine models
EP3651637A4 (en) 2017-07-12 2021-04-28 K2M, Inc. Systems and methods for modeling spines and treating spines based on spine models
US11000334B1 (en) 2017-07-12 2021-05-11 K2M, Inc. Systems and methods for modeling spines and treating spines based on spine models
GB2565306A (en) * 2017-08-08 2019-02-13 Vision Rt Ltd Method and apparatus for measuring the accuracy of models generated by a patient monitoring system
US10874460B2 (en) * 2017-09-29 2020-12-29 K2M, Inc. Systems and methods for modeling spines and treating spines based on spine models
US10892058B2 (en) 2017-09-29 2021-01-12 K2M, Inc. Systems and methods for simulating spine and skeletal system pathologies
US10438351B2 (en) 2017-12-20 2019-10-08 International Business Machines Corporation Generating simulated photographic anatomical slices
US10614570B2 (en) 2017-12-20 2020-04-07 International Business Machines Corporation Medical image exam navigation using simulated anatomical photographs
US10521908B2 (en) 2017-12-20 2019-12-31 International Business Machines Corporation User interface for displaying simulated anatomical photographs
US11589949B1 (en) * 2018-04-05 2023-02-28 MirrorMe3D, LLC System and methods of creating a 3D medical representation for use in performing reconstructive surgeries
IT201800006261A1 (en) * 2018-06-13 2019-12-13 PROCEDURE FOR MAKING AN ANATOMICAL PROTOTYPE
US11051829B2 (en) 2018-06-26 2021-07-06 DePuy Synthes Products, Inc. Customized patient-specific orthopaedic surgical instrument
US11636650B2 (en) 2018-09-24 2023-04-25 K2M, Inc. System and method for isolating anatomical features in computerized tomography data
CN110060287B (en) * 2019-04-26 2021-06-15 北京迈格威科技有限公司 Face image nose shaping method and device
EP3962398A4 (en) 2019-05-02 2023-01-18 DePuy Ireland Unlimited Company Orthopaedic implant placement system and method
WO2023156447A1 (en) * 2022-02-18 2023-08-24 3Shape A/S Method of generating a training data set for determining periodontal structures of a patient
CN114596289B (en) * 2022-03-11 2022-11-22 北京朗视仪器股份有限公司 Mouth point detection method based on soft tissue contour line sampling points

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5005578A (en) * 1986-12-16 1991-04-09 Sam Technology, Inc. Three-dimensional magnetic resonance image distortion correction method and system
US4991579A (en) * 1987-11-10 1991-02-12 Allen George S Method and apparatus for providing related images over time of a portion of the anatomy using fiducial implants
US5099846A (en) * 1988-12-23 1992-03-31 Hardy Tyrone L Method and apparatus for video presentation from a variety of scanner imaging sources
AU4950290A (en) * 1989-01-24 1990-08-24 Dolphin Imaging Systems Inc. Method and apparatus for generating cephalometric images
US5257203A (en) * 1989-06-09 1993-10-26 Regents Of The University Of Minnesota Method and apparatus for manipulating computer-based representations of objects of complex and unique geometry
CA2051939A1 (en) * 1990-10-02 1992-04-03 Gary A. Ransford Digital data registration and differencing compression system
EP0931516B1 (en) * 1990-10-19 2008-08-20 St. Louis University Surgical probe locating system for head use
US5274551A (en) * 1991-11-29 1993-12-28 General Electric Company Method and apparatus for real-time navigation assist in interventional radiological procedures
US5273429A (en) * 1992-04-03 1993-12-28 Foster-Miller, Inc. Method and apparatus for modeling a dental prosthesis
JPH07508449A (en) * 1993-04-20 1995-09-21 ゼネラル・エレクトリック・カンパニイ Computer graphics and live video systems to better visualize body structures during surgical procedures
US5356294A (en) * 1993-07-30 1994-10-18 Wataru Odomo Dental diagnostic and instructional apparatus
DE4341367C1 (en) * 1993-12-04 1995-06-14 Harald Dr Med Dr Med Eufinger Process for the production of endoprostheses
US5660176A (en) * 1993-12-29 1997-08-26 First Opinion Corporation Computerized medical diagnostic and treatment advice system
US5765561A (en) * 1994-10-07 1998-06-16 Medical Media Systems Video-based surgical targeting system
JP3411745B2 (en) * 1995-04-05 2003-06-03 株式会社日立メディコ Tomographic image interpolation method and apparatus
US5920660A (en) * 1995-04-05 1999-07-06 Hitachi Medical Corporation Tomogram interpolation method for executing interpolation calculation by using pixel values on projection line
US5742291A (en) * 1995-05-09 1998-04-21 Synthonics Incorporated Method and apparatus for creation of three-dimensional wire frames
US5737506A (en) * 1995-06-01 1998-04-07 Medical Media Systems Anatomical visualization system
US5608774A (en) * 1995-06-23 1997-03-04 Science Applications International Corporation Portable, digital X-ray apparatus for producing, storing, and displaying electronic radioscopic images
US5776050A (en) * 1995-07-24 1998-07-07 Medical Media Systems Anatomical visualization system
US5889524A (en) * 1995-09-11 1999-03-30 University Of Washington Reconstruction of three-dimensional objects using labeled piecewise smooth subdivision surfaces
US5769861A (en) * 1995-09-28 1998-06-23 Brainlab Med. Computersysteme Gmbh Method and devices for localizing an instrument
WO1997023164A1 (en) * 1995-12-21 1997-07-03 Siemens Corporate Research, Inc. Calibration system and method for x-ray geometry
US5926568A (en) * 1997-06-30 1999-07-20 The University Of North Carolina At Chapel Hill Image object matching using core analysis and deformable shape loci

Also Published As

Publication number Publication date
EP1027681A4 (en) 2001-09-19
WO1999059106A1 (en) 1999-11-18
CA2296274A1 (en) 1999-11-18
EP1027681A1 (en) 2000-08-16

Similar Documents

Publication Publication Date Title
AU4076999A (en) Method and apparatus for generating 3d models from medical images
US12064311B2 (en) Visual presentation of gingival line generated based on 3D tooth model
US11666416B2 (en) Methods for simulating orthodontic treatment
EP2134290B1 (en) Computer-assisted creation of a custom tooth set-up using facial analysis
US8731280B2 (en) Virtual cephalometric imaging
US8469705B2 (en) Method and system for integrated orthodontic treatment planning using unified workstation
US20090068617A1 (en) Method Of Designing Dental Devices Using Four-Dimensional Data
US20020010568A1 (en) Orthodontic treatment planning with user-specified simulation of tooth movement
US20240024076A1 (en) Combined face scanning and intraoral scanning
Patel et al. Surgical planning: 2D to 3D