
WO2004082453A2 - Assessment of lesions in an image - Google Patents

Assessment of lesions in an image

Info

Publication number
WO2004082453A2
WO2004082453A2 (PCT/DK2004/000188)
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
lesion
lesions
probability
Application number
PCT/DK2004/000188
Other languages
French (fr)
Other versions
WO2004082453A3 (en)
Inventor
Niels Vaever Hartvig
Jean-Marc Ferran
Original Assignee
Retinalyze Danmark A/S
Application filed by Retinalyze Danmark A/S
Publication of WO2004082453A2 publication Critical patent/WO2004082453A2/en
Publication of WO2004082453A3 publication Critical patent/WO2004082453A3/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present invention relates to a method for assessing the presence or absence of lesion(s) in an image from an individual and a system therefor, wherein said image may be any image potentially comprising lesions, in particular an image from medical image diagnostics, and more particularly an ocular fundus image.
  • the probability of the lesion(s) is corrected with information from said individual.
  • Fundus image analysis presents several challenges, such as high image variability, the need for reliable processing in the face of nonideal imaging conditions and short computation deadlines. Large variability is observed between different patients - even if healthy, with the situation worsening when pathologies exist. For the same patient, variability is observed under differing imaging conditions and during the course of a treatment or simply a long period of time. Besides, fundus images are often characterized by having a limited quality, being subject to improper illumination, glare, fadeout, loss of focus and artifacts arising from reflection, refraction, and dispersion.
  • Automatic extraction and analysis of the vascular tree of fundus images is an important task in fundus image analysis for several reasons.
  • First of all the vascular tree is the most prominent feature of the retina, and it is present regardless of health condition. This makes the vascular tree an obvious basis for automated registration and montage synthesis algorithms.
  • the task of automatic and robust localization of the optic nerve head and fovea, as well as the task of automatic classification of veins and arteries in fundus images may very well rely on a proper extraction of the vascular tree.
  • Another example is the task of automatically detecting lesions which in many cases resemble the blood vessels. A properly extracted vessel tree may be a valuable tool in disqualifying false positive responses produced by such an algorithm, thus increasing its specificity.
  • the present invention relates to a method for detecting lesions in an image, wherein the detection process includes a wide range of information.
  • auxiliary information, such as clinical and structural information, may be included in the detection.
  • the purpose of the method is to provide a framework where this type of covariate information may be included in the detection of individual lesions. This may allow the algorithm to analyse the images like the human grader does, by combining high-level information with low-level characteristics of the image, and may as such introduce a completely new paradigm in the lesion detection algorithm.
  • the method relates to image diagnostics in medicine, such as X-rays, scanning images, photos, nuclear magnetic resonance scans, CT scans, as well as other images potentially comprising lesions.
  • the method should be robust in the sense that it should be applicable to a wide variety of images independent of illumination, presence of symptoms of diseases and/or artefacts of the image.
  • Lesions may be any sign of disease or pathological condition that is detectable as local events in the image.
  • Lesions of the retina normally embrace microaneurysms and exudates, which show up on fundus images as generally "dot shaped" (i.e. substantially circular) areas. It is of interest to distinguish between such microaneurysms and exudates, and further to distinguish them from other lesions or pathologies in the image, such as "cotton wool spots" and hemorrhages.
  • the present invention relates to a method for assessing the presence or absence of lesion(s) in a fundus image from an individual, comprising
  • a) estimating at least one subset of the image, wherein each subset is a candidate lesion area having a probability,
  • b) establishing information from said individual comprising at least one information type selected from the following: clinical information and structural information,
  • c) correcting the probability of the candidate lesion area, or a predetermined probability threshold, with said information,
  • d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not, and
  • e) optionally repeating steps a) to d) until all candidate lesion areas have been classified.
  • the method may include steps by which the lesions detected are corrected with respect to the background, in particular the local background in the vicinity of the lesion, in order to be able to detect lesions independent of the background in the specific image, including variations of background in the images, for example due to varying illumination of the various parts of the image.
  • the invention relates to a system for carrying out the methods according to the invention, such as a system for assessing the presence or absence of lesion(s) in a fundus image of an individual, comprising
  • a) an algorithm for estimating at least one subset of the image, wherein each subset is a candidate lesion area having a probability,
  • b) an algorithm for establishing information from said individual comprising at least one information type selected from the following: clinical information and structural information,
  • c) an algorithm for correcting the probability of the candidate lesion area, or a predetermined probability threshold, with said information, and
  • d) an algorithm for classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not.
  • Said system is capable of incorporating any of the variations of the methods described herein.
  • the invention relates to a method for diagnosing the presence or absence of a disease in an individual from a fundus image of at least one eye of said individual comprising
  • the invention relates to a method for assessing the probability of a diagnosis of diabetic retinopathy in an individual by using the method according to the invention, i.e. a method starting from a fundus image of at least one eye of said individual, and comprising
  • a) estimating at least one subset of the image, wherein each subset is a candidate lesion area having a probability,
  • b) establishing information from said individual comprising at least one information type selected from the following: clinical information and structural information,
  • c) correcting the probability of the candidate lesion area, or a predetermined probability threshold, with said information, and
  • d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not.
  • the image may then be classified depending on size and/or numbers and/or placement of lesions in the image, and accordingly, the invention relates to a method for classifying a fundus image comprising
  • Figure 1 Fundus image.
  • Figure 2 Accumulated presentation of lesions detected by expert graders in 199 fundus images.
  • Figure 3 Density presentation of the lesions in Figure 2.
  • Figure 4 Heat density plot of the lesions in Figure 2.
  • Figure 5 3D plot of the density presentation of Figure 3.
  • Figure 6 Designation of probability area in fundus.
  • Figure 7 A flowchart showing the Watershed procedure.
  • Figure 8 1-D example of pixels to process using the Watershed algorithm with a tolerance of 1.
  • Figure 9 The figure shows a partial image of the eye fundus, wherein a circle showing the 75 pixels radius background region of a lesion is arranged.
  • Figure 10 The background region in the gradient image.
  • Figure 11 Flow chart for normalization process.
  • Figure 12 Flow chart for growing process.
  • Figure 13 Schematic drawing showing the calculation of the visibility feature.
  • Figure 14 An example of a grown lesion and the band around it representing the background.
  • Figure 15 An example of overlapping regions.
  • the left panel displays the three regions which are grown from seed points located in the proliferation displayed in the right panel.
  • the band around the largest grown region is used as background for all of the three regions.
  • Figure 16A shows a histogram with the fitted gamma density of true bright lesions in a material of 400 fundus images.
  • Figure 16B shows a histogram with the fitted gamma density of false bright lesions in the material of 400 fundus images.
  • Figure 17 shows the fitted visibility distributions of bright lesions, wherein the curve to the left represents false lesions, and the curve to the right represents true lesions.
  • Figure 18A shows a histogram with the fitted gamma density of true dark lesions in a material of 400 fundus images.
  • Figure 18B shows a histogram with the fitted gamma density of false dark lesions in the material of 400 fundus images.
  • Figure 19 shows the fitted visibility distributions of dark lesions, wherein the curve to the left represents false lesions, and the curve to the right represents true lesions.
  • Figure 20 3D comparison of the distributions of Example 3 and Example 4.
  • Fovea The term is used in its normal anatomical meaning, i.e. the spot in the retina having a great concentration of cones giving rise to sharp central vision. Fovea and the term "macula lutea" are used as synonyms.
  • Image The term image is used to describe a representation of the region to be examined, i.e. the term image includes 1-dimensional representations, 2-dimensional representations, 3-dimensional representations as well as n-dimensional representations. Thus, the term image includes a volume of the region, a matrix of the region as well as an array of information of the region.
  • Lesion in fundus images Any pathology present in the fundus, such as microaneurysms, exudates, hemorrhages, or cotton wool spots. Preferably, lesions refer to the dot-shaped lesions: microaneurysms and exudates.
  • Optic nerve head The term is used in its normal anatomical meaning, i.e. the area in the fundus of the eye where the optic nerve enters the retina. Synonyms for the area are, for example, the "blind" spot, the papilla, or the optic disc.
  • Prior probability The term is used in its normal meaning, i.e. the probability before combination with other information. Prior probability is synonymous with a priori probability.
  • Posterior probability The term is used in its normal meaning, i.e. the probability after combination with other information. Posterior probability is synonymous with a posteriori probability.
  • Red-green-blue image The term relates to the image having the red channel, the green channel and the blue channel, also called the RGB image.
  • ROI Region of interest.
  • Starting point The term describes a point or area for starting the search for a subset.
  • the term starting point is thus not limited to a mathematical point, such as not limited to a pixel, but merely denotes a localisation for starting the search.
  • Visibility The term visibility is used in the normal meaning of the word, i.e. how visible a lesion or a structure of the fundus region is compared to background and other structures/lesions.
  • the images of the present invention may be any sort of images and presentations of the region of interest.
  • Fundus image is a conventional tool for examining retina and may be recorded on any suitable means.
  • the image is presented on a medium selected from slides, paper photos or digital photos.
  • the image may be any other kind of representation, such as a presentation on an array of elements, for example a CCD.
  • the image may be a grey-toned image or a colour image; in a preferred embodiment the image is a colour image.
  • the problem when evaluating the images is to evaluate whether a subset of the image representing a candidate lesion is a true or a false lesion, or in other words, to determine the probability that the candidate lesion is a true lesion.
  • the present invention relates to assigning a prior probability to each candidate lesion, and then correcting the prior probability with other information as described herein to obtain a posterior probability, the posterior probability being used to determine whether the candidate lesion is a true lesion or not.
  • the starting point for determining the probability of the candidate lesion to be a true lesion is to determine the visibility of the candidate lesion.
  • the term visibility is used in the normal meaning of the word, i.e. how visible a lesion or a structure of the fundus region is compared to background and other structures/lesions.
  • the visibility of an area may be determined as a vector of features, including intensity, visibility of the candidate lesion compared to the visibility of the vessels, visibility of the edge of the candidate lesion, colour information of the candidate lesion, variance measure of a part of the image and/or a variance measure of the image.
  • the visibility of the edge of the candidate lesion may be calculated as the orientated candidate lesion area edge gradient, in particular a weighted edge gradient.
  • the visibility feature may be based on a summation of orientation weighted region border gradient pixels.
  • N is the number of pixels in the outline of the lesion. Let v be the vector between the center of mass of the lesion and the image point on the outline; θ is then the angle between v and the gradient orientation at that point.
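The exact formula is not reproduced in this extraction, but the description of "a summation of orientation weighted region border gradient pixels" suggests a mean of orientation-weighted gradient magnitudes along the outline. A minimal sketch in Python, assuming V = (1/N)·Σ ‖∇I‖·cos θ; the function name and arguments are illustrative, not the patent's:

```python
import numpy as np

def border_visibility(image, border_pixels, center_of_mass):
    """Orientation-weighted border-gradient visibility (sketch).

    border_pixels: (N, 2) array of (row, col) outline coordinates.
    center_of_mass: (row, col) of the lesion.
    """
    gy, gx = np.gradient(image.astype(float))        # image gradient
    total = 0.0
    for r, c in border_pixels:
        g = np.array([gy[r, c], gx[r, c]])           # gradient at border pixel
        v = np.array([r, c], dtype=float) - center_of_mass  # outward direction
        ng, nv = np.linalg.norm(g), np.linalg.norm(v)
        if ng > 0 and nv > 0:
            cos_theta = np.dot(g, v) / (ng * nv)     # orientation agreement
            total += ng * cos_theta
    return total / len(border_pixels)                # normalise by outline length N
```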
  • the visibility feature may also be calculated from an image wherein the vessels have been "removed". This is done by subtracting the vessel image from the original image I(r,c), producing a vessel "restored" image, i.e. an image wherein interpolated background values have been introduced instead of vessels.
  • the interpolated value may be produced by weighting neighbouring pixels with a Gaussian kernel h(x) = exp(−x²/2), where the kernel width w(r,c) is set to the distance from the pixel to the vessel.
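As a rough illustration of this vessel "restoration" step, the sketch below replaces vessel pixels by a normalised-convolution estimate of the background. It simplifies the text's distance-dependent kernel width w(r,c) to a fixed width; the names and the fixed-width simplification are assumptions, not the patent's exact procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def restore_vessels(image, vessel_mask, sigma=5.0):
    """Replace vessel pixels by Gaussian-interpolated background (sketch).

    vessel_mask: boolean array, True where a vessel was detected.
    Normalised convolution: blur the masked image and divide by the
    blurred mask, so only background pixels contribute to the estimate.
    """
    img = image.astype(float)
    weight = (~vessel_mask).astype(float)        # 1 on background, 0 on vessels
    num = gaussian_filter(img * weight, sigma)   # weighted intensity sum
    den = gaussian_filter(weight, sigma)         # weight sum
    background = num / np.maximum(den, 1e-12)    # interpolated background
    restored = img.copy()
    restored[vessel_mask] = background[vessel_mask]
    return restored
```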
  • a potential lesion located at x with visibility v is considered.
  • in the simplest case a single visibility threshold is applied: the candidate at x with visibility v is classified as a true lesion when v exceeds T, the visibility threshold.
  • the visibility threshold may depend on the type of the lesion, but otherwise the threshold is the same for all lesions.
  • the probability of the candidate lesion is compared to a threshold value, and lesions having a probability above that threshold value are considered true lesions.
  • the idea of the present invention is thus to adapt the threshold to the specific lesion under consideration.
  • the threshold may for example be lowered if several clear lesions are present in the image, as it is then more likely that the present lesion is also true; or a lower threshold may be used for patients with a long duration of diabetes, as these have a higher prevalence of diabetic retinopathy.
  • the probability of the candidate lesion area may be corrected with the information from said individual, and the corrected probability compared with a predetermined probability threshold for lesions; or the predetermined probability threshold is corrected with the information from said individual, whereafter the probability of the candidate lesion area is compared with the corrected predetermined probability threshold.
  • variable A is a stochastic variable with a certain distribution, known as the a priori distribution.
  • the posterior probability is thus a measure of how likely it is that the lesion is true, obtained by combining prior knowledge summarised by p(x) and the actual observation in the image, summarised by v.
  • the posterior odds are obtained by multiplying the prior odds with the odds determined by the data.
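In code, the odds-form update is compact. The sketch below assumes the likelihood ratio is taken from fitted gamma densities of true and false lesion visibilities, in the spirit of Figures 16-19; the shape and scale values are placeholders, not the patent's estimates:

```python
from scipy.stats import gamma

# Hypothetical fitted gamma densities for true and false lesion visibility
# (parameter values are placeholders, not the patent's fitted estimates).
f_true = gamma(a=4.0, scale=1.5)
f_false = gamma(a=2.0, scale=1.0)

def posterior_probability(prior_p, visibility):
    """Posterior odds = prior odds x likelihood ratio of the observation."""
    prior_odds = prior_p / (1.0 - prior_p)
    lr = f_true.pdf(visibility) / f_false.pdf(visibility)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

# A candidate with visibility 6.0 and a spatial prior of 0.002:
print(posterior_probability(0.002, 6.0))
```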
  • the likelihood that the actual observation in the image, for example the visibility of a candidate lesion, represents a true lesion is combined with prior knowledge, in the present context called "information from the individual".
  • Said information may be any relevant information capable of adding information to the actual observation in the image relating to the candidate lesion.
  • said information is selected from the following: clinical information and structural information. Accordingly, another initial step is establishing information from the individual whose fundus image is studied.
  • More than one type of information may be included in the method, such as at least two different types of information, such as at least one structural type of information and at least one clinical type of information.
  • clinical information relates in the present context to any kind of clinical and demographic data of the individual under study, since these may be predictors for the progression to (sight-threatening) diabetic retinopathy.
  • the clinical information is one or more clinical information types, for example clinical information types selected from information about age of said individual, information about sex of said individual, information about ethnicity of said individual, information about diseases of said individual, and information about at least one clinical test of said individual.
  • blood tests may be relevant, such as information about metabolic control variables, such as blood cholesterol level, blood glucose level, and HbA1c level.
  • visual tests are of relevance, such as Visual Acuity test or an autorefraction test.
  • the invention may include information about the presence or absence of diseases in said individual.
  • for example diseases selected from diabetes, atherosclerosis, and hypertension, such as information about duration of disease, severity of disease, or simply a statement that the disease is present or absent.
  • Other information such as information about pregnancy, puberty or previous cataract surgery, may also be relevant.
  • the information or covariates for the individual may be collected in a vector t ∈ R^d.
  • a common way of modelling the dependence of disease prevalence as a function of explanatory variables is by logistic regression, where the following model is assumed: p(t) = exp(α + β't) / (1 + exp(α + β't)).
  • the prior parameters α ∈ R and β ∈ R^d may be estimated in a study where all true lesions are marked in the images, and clinical information is available. Alternatively, previous epidemiological studies in the literature may provide sensible values for the parameters.
  • this prior only depends on variables relating to the patient and thus has the same value for all potential lesions in images from the specific visit. This is included to change the threshold from patient to patient, in order to reflect the fact that, for instance, it is more likely that a patient with a long duration of diabetes has retinal lesions, than a patient with a short duration.
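A minimal sketch of such a patient-level logistic prior follows; the covariates and parameter values are hypothetical, chosen only to illustrate the model form:

```python
import numpy as np

def logistic_prior(t, alpha, beta):
    """Patient-level prior from covariates t in R^d via logistic regression.

    alpha (scalar) and beta (length-d vector) would be estimated from a
    study with marked lesions, or taken from the epidemiological literature.
    """
    z = alpha + np.dot(beta, t)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical covariates: [diabetes duration in years, HbA1c]
t = np.array([12.0, 8.1])
alpha, beta = -4.0, np.array([0.08, 0.25])   # placeholder parameters
print(logistic_prior(t, alpha, beta))        # same value for all lesions at this visit
```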
  • structural information includes information of any structures of the fundus image being studied as well as information of any structures in fundus images previously acquired, preferably from said eye. Accordingly, the structural information may be selected from regional information, lesion information, and vessel information.
  • the structural information includes information of the position of the le- sion with respect to anatomical landmarks, such as the fovea, the optic nerve head and the vessels, information of other lesions in the image, and the information of lesions in other fundus images, for instance images of different fields of the same fundus, or from previous images of the same field.
  • the lesion information is selected from information about number of lesions in another fundus of the same individual, information about previous lesion(s) in the same and/or another fundus of said individual, information of other lesions in the same fundus image, information of a lesion in the same subset of another fundus image of the same fundus, and information about number of candidate lesion areas.
  • the information about previous lesion(s) in the same or another fundus of said individual preferably comprises information about optional previous lesion(s) in substantially the same image subset of the fundus.
  • the knowledge that previously at least one lesion was found in this individual may lead to a lowering of the threshold for the probability.
  • the information about other lesions and/or candidate lesion(s) in the same fundus image preferably comprises information about
  • the method includes information on other lesions in the classification as described in detail below.
  • a slightly more general notation is used for this, so from now on let x_0 denote the (position of the) potential lesion under consideration, and let x_1, ..., x_k denote the neighbouring lesions to x_0.
  • neighbours should be interpreted in a broad sense, as the neighbouring lesions may be located in other images from the same visit or even in images from another visit.
  • the prior probability of x_0 is given by p(x_0).
  • the prior probability function p(·) may for instance be modelled by any of the spatial priors discussed later herein.
  • the regional information may also be or include the information about the region of the fundus image comprising the at least one subset of the image. It has been found that more true lesions are found in some areas of the fundus as compared to other areas of the fundus. In particular the region around fovea, see Figures 2-6, has been shown to exhibit more true lesions than other regions.
  • the anatomical features are normally selected from fovea, optical nerve head and vessels, such as main arcades.
  • the information of the relation of the candidate lesions to one or more of the anatomical features may be included, such as the distance to the anatomical feature or simply information as to whether the candidate lesion is within a certain distance from the anatomical feature.
  • the structural information may include information that is a function of the distance to fovea for correcting the probability in step c).
  • the prior probability may be corrected with the spatial position of the lesion, in order to include the knowledge that lesions are more frequent close to the fovea than in peripheral regions.
  • the prior model is defined by p(x) = R·p_0 for ‖x − f‖ < δ and p(x) = p_0 otherwise, where f is the position of the fovea.
  • the prior expresses the finding that it is R times more likely to detect a lesion in the circle of radius ⁇ around fovea, than outside this circle.
  • the radius may for instance be chosen as one disk diameter (DD) to reflect the distinction typically made in grading protocols between the severity of lesions within and outside 1 DD of fovea, see also Figure 6.
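A sketch of this piecewise spatial prior; p0 = 0.0021 echoes the outside-circle probability used in the examples later in the document, while R and the pixel value of one disk diameter (DD) are placeholders:

```python
import numpy as np

def fovea_prior(x, f, p0, R, delta):
    """Piecewise spatial prior: R times more likely within delta of the fovea.

    x, f: (row, col) of the candidate lesion and of the fovea; delta is
    e.g. one disk diameter (DD), following the grading-protocol distinction.
    """
    d = np.linalg.norm(np.asarray(x, float) - np.asarray(f, float))
    return R * p0 if d < delta else p0

# Placeholder coordinates and a DD of 96 pixels (resolution-dependent):
print(fovea_prior((510, 770), (500, 750), p0=0.0021, R=4.0, delta=96))
```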
  • At least two different types of information are included in the method, more preferably at least three different types of information, such as at least four different types of information, five different types of information, six different types of information, seven different types of information, eight different types of information, nine different types of information, or ten different types of information.
  • the method includes the estimation of a subset of the image, wherein the subset may represent a candidate lesion area.
  • the subset is a candidate lesion area.
  • the term subset is used in its normal meaning, i.e. one or more pixels.
  • the subset may be established by any suitable method, for example by filtering, by template matching, by establishing starting points and growing regions from said starting points, by other methods of searching for candidate areas, or by combinations thereof.
  • the candidate lesion area(s) are detected by establishing starting points, and from the starting points estimating the subset. Two or more subsets, each representing the same lesion, may be detected, such as overlapping subsets or adjacent subsets.
  • the subset is a connected subset, i.e. each pixel of the subset connects to at least one other pixel of the subset, and it is possible to reach any pixel of the subset from any other by following pixels in the subset.
  • the estimation of the subset of the image comprises establishing of the periphery of the subset.
  • the periphery may be established for example by active contour model (snake) (reference "Snakes: Active contour models” by M. Kass, A. Witkin and D. Terzopoulos), by templating or by growing.
  • the subset may be established through establishing starting points.
  • Starting points may be established by a variety of suitable methods and of combinations of such methods.
  • the variability of fundus images is particularly relevant regarding image dynamics; the contrast may vary considerably from image to image and even from region to region in the same fundus image.
  • a proper starting point algorithm should recognize this circumstance and seek to adapt its sensitivity to the image at hand.
  • the image may be filtered and/or blurred before establishing or as a part of establishing starting points for the method. For example the low frequencies of the image may be removed before establishing starting points.
  • the image may be unsharp filtered, for example by median or mean filtering the image and subtracting the filtered result from the image.
  • the starting points may be established as extrema of the image, such as local extrema.
  • the image is preferably a filtered image, wherein the filtering may be linear and/or non-linear.
  • the extrema may be minima or maxima or both.
  • the filtering method is a template matching method, wherein the template may exhibit any suitable geometry for identifying the lesions.
  • examples of templates are circles, wherein the circles have a radius set as a ratio of the expected diameter of the optic nerve head.
  • the image may be filtered with one or more filters before establishing starting points, or as a part of the step of establishing starting points.
  • starting points are established by combining two or more filters.
  • the extrema may thus be identified individually by one or more of several methods, such as the following:
  • the lesions are normally either dark areas or light areas in the image, or at least locally the darkest areas or the lightest areas.
  • a method may be establishing at least one intensity extremum in the image, preferably at least one intensity minimum or at least one intensity maximum. Therefore, in a preferred embodiment at least one local intensity maximum is established.
  • the extrema may be established on any image function, such as wherein the image function is the unsharped image, the red channel image, the green channel image, or any combinations thereof. In a preferred embodiment the image function is the green channel.
  • the method may include establishing at least one variance extremum in the image, preferably establishing at least one variance maximum in the image.
  • the extrema may be established on any image function, such as wherein the image function is the unsharped image, the red channel image, the green channel image, or any combinations thereof. In a preferred embodiment the image function is the green channel.
  • Another method for establishing starting points may be random establishment of starting points, where in the extreme case a starting point is established in substantially each pixel of the image.
  • a random establishment may be combined with any of the methods discussed above.
  • the starting points may be established as grid points, such as evenly distributed or unevenly distributed grid points. Again this method may be combined with any of the methods of establishing extrema in the image and/or random establishment.
  • starting points are established by more than one of the methods described, in order to increase the probability of assessing the correct localisation of lesions, if present, also with respect to images having less optimal illumination or presenting other forms of less optimal image quality.
  • the starting points are established by localising the local minima and/or maxima of the green channel image function, and let them act as starting points.
  • the subset is established by growing a subset from a starting point.
  • the growing of an object is used to segment an object from the background.
  • the method may be used to grow both dark and bright objects, the algorithm for the one simply being an inversion of the algorithm of the other.
  • the most essential part of the growing method is to limit the object with respect to the background. This limitation may be done in any way, for example by examining the visibility feature, as described below, for a wide range of isocurves or object depths, and then simply selecting the depth which results in the highest possible visibility feature.
  • the establishment of subsets may be explained as growing q isocurves based on at least one growing feature of the area around the starting point, q being an integer of at least 1, until the periphery of the candidate lesion area is established. That is, for each starting point a number of isocurves, each of which may represent a candidate lesion area, is grown from the starting point.
  • the growing process may give rise to extraction of more than one subset, the number of subsets for example corresponding to equally distant isocurves.
  • the area of the smallest subset exceeds that of the starting point itself, and the area of the largest subset stays below a predetermined value.
  • the isocurve selected is the isocurve having the highest probability of being a candidate lesion area. The probability may for example be the highest visibility, as described below.
  • the subset of the image then implies the region contained by an isocurve resulting from the growing process, and the isocurve itself implies the periphery of the subset.
  • the growing algorithm is initialized in the starting point for the subset. Increasing the height in equidistant levels results in a number of grown isocurves.
  • the step depth may be arbitrarily set, but is normally for practical reasons chosen as 1, as the pixel levels originate from byte images, which have discrete integer values.
  • the algorithm may continue for the whole image starting from each starting point.
  • it is appropriate to apply at least one limitation to the growing, namely that candidate lesion areas above a certain predetermined area are not allowed.
  • another limitation may be applied, either additionally or alone, namely that the candidate lesion is limited by a minimum and a maximum number of isocurves.
  • the predetermined value described above is preferably in the range of from 0.1 to 1.0, such as in the range of from 0.2 to 0.8, such as in the range of from 0.3 to 0.6.
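The growing step can be sketched as follows, assuming dark lesions on a byte image and a placeholder score standing in for the visibility feature described elsewhere in the text; bright lesions would use the inverted image. All names are illustrative:

```python
import numpy as np
from scipy.ndimage import label

def grow_isocurves(image, seed, max_area, step=1):
    """Grow equidistant isocurves from a seed point (sketch, dark lesions).

    Returns the candidate region (boolean mask) whose level scores best;
    the score used here is a placeholder proxy for the visibility feature.
    """
    r0, c0 = seed
    best_mask, best_score = None, -np.inf
    for level in range(int(image[r0, c0]) + step, 256, step):
        below = image <= level                 # pixels under the isocurve level
        labels, _ = label(below)               # connected components
        region = labels == labels[r0, c0]      # component containing the seed
        area = region.sum()
        if area <= 1:
            continue
        if area > max_area:                    # candidate exceeded the size limit
            break
        score = level - image[region].mean()   # placeholder visibility proxy
        if score > best_score:
            best_score, best_mask = score, region
    return best_mask
```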
  • the watershed algorithm was introduced for the purpose of segmentation by Lantuejoul and Beucher.
  • the idea of watershed is drawn by considering an image as a topographic surface.
  • the image intensity (the gray level) is considered as an altitude with this point of view.
  • a regional minimum is a connected plateau from which it is impossible to reach a point of lower gray level by an always-descending path.
  • As the image surface is immersed, some of the flood areas (catchment basins) will tend to merge. When two or more different flood areas touch, infinitely tall dams (watershed lines) are constructed between them. When finished, the resulting network of dams defines the watershed of the image.
  • the watershed lines partition the image into nonintersecting patches, called catchment basins. Since each patch contains only one regional minimum, the number of patches is equal to the number of regional minima in the image. In a preferred embodiment the pixel with minimum value which is closest to the center of mass of the region becomes the origin for the growing algorithm.
  • In the stepwise example, steps 1, 3, 5 and 8 find the minimum unprocessed pixel and include neighbouring pixels with the same value; where borders touch other regions, those regions are enlarged.
  • the sensitivity of the watershed algorithm may be adjusted by modifying the tolerance level, which makes it possible to exclude basins with an insignificant depth.
  • the area may be filled, for example by simply performing a flood fill from the starting point to the periphery.
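Rather than re-implementing the flooding from scratch, a sketch using scikit-image conveys the idea; h_minima plays the role of the tolerance that excludes insignificant basins. This assumes scikit-image's watershed is an acceptable stand-in for the patent's own procedure:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def watershed_basins(image, tolerance=1):
    """Partition the image into catchment basins (sketch).

    h_minima suppresses regional minima shallower than `tolerance`,
    mirroring the tolerance level described in the text; the surviving
    minima seed the flooding.
    """
    minima = h_minima(image, tolerance)      # significant regional minima
    markers, n = ndimage.label(minima)       # one marker per basin
    basins = watershed(image, markers)       # flood from the markers
    return basins, n

img = np.random.randint(0, 255, (64, 64)).astype(np.uint8)
basins, n = watershed_basins(img, tolerance=3)
print(n, "basins")
```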
  • the subsets may be validated before being corrected with respect to the background.
  • By validation is meant that each subset is subjected to a validation step to determine whether the candidate area should classify as a candidate lesion area before a prior probability is assigned.
  • the validation is preferably carried out by a feature different from the growing feature(s).
  • the validation step includes calculating the visibility of the candidate lesion area.
  • each subset is a candidate lesion area having a visibility, and visibility features are assigned to the candidate lesion area before classification.
  • the estimation of the subsets and estimation of the background variation is conducted in one step for each subset.
  • the background variation may be selected from the spatial and/or distributional properties of the original image, or any transformation of this, such as a gradient image, a curvature image or a Laplace image.
  • the spatial properties may for example be based on a Fourier transformation, a co-occurrence matrix, or fractal dimension, and the distributional properties may be moments such as mean, variance, skewness or kurtosis.
  • the lesions may for example be described by a visibility feature as discussed above, which is based on the orientation-weighted lesion border gradient observations; in this embodiment it has been shown to be an advantage to normalize the lesion's visibility feature with a mean and standard deviation estimate of the background gradient.
  • the background variation is estimated by sequential identification of outliers, for example by iteratively removing pixels outside an upper and a lower threshold.
  • the upper and lower thresholds are determined as a constant multiplied by the standard deviation, for example as the standard deviation multiplied by at least 2, such as at least 3, such as at least 4, such as at least 5 or such as at least 6. It is preferred that at most one pixel is removed in each iteration.
  • the area defined surrounding the candidate lesion area may include or exclude the candidate lesion area itself.
  • the gradient magnitude pixels in step d) include pixels from the candidate lesion area.
  • the area surrounding the candidate area is normally selected to be in the range of from 0.25 to 1.0 of the expected optic nerve head area, such as from 0.5 to 1.0 of the expected optic nerve head area, such as from 0.6 to 1.0 of the expected optic nerve head area. Normally such an area corresponds to a number of pixels in the range of from 100 to 100,000 pixels, such as in the range of from 400 to 64,000 pixels, such as in the range of from 1000 to 50,000 pixels, such as in the range of from 5,000 to 25,000 pixels.
  • the first step of the normalization is to estimate the background gradient of the lesion to correct. This estimation is done by an initial collection of the pixels within a given radius from the lesion center of mass. The number of pixels is set in accordance with the resolution of the image assessed. For most purposes the pixels are collected within a radius of from 50 to 100 pixels; see for example Figure 9.
  • Crossing vessels and/or other lesions could influence a gradient estimate of the background, which calls for a robust estimation of the gradient background.
  • the background region in the gradient image is shown in Figure 10, from which the influence of crossing vessels and/or other lesions is clear.
  • An outlier may be defined as a value deviating more than two standard deviations from the mean.
  • Robust methods include filtering the image before collecting the intensities, or using robust estimators, such as the median instead of the mean and the mean absolute deviation instead of the standard deviation.
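A sketch of the sequential outlier removal, dropping at most one pixel per iteration as the text prefers; k = 2 follows the two-standard-deviation definition above:

```python
import numpy as np

def robust_background_stats(grad_values, k=2.0, max_iter=500):
    """Sequential outlier removal for background-gradient estimation (sketch).

    Repeatedly drop the single worst pixel deviating more than k standard
    deviations from the mean, pruning away crossing vessels and other
    lesions that would otherwise bias the estimate.
    """
    vals = np.asarray(grad_values, float).copy()
    for _ in range(max_iter):
        mu, sd = vals.mean(), vals.std()
        dev = np.abs(vals - mu)
        worst = dev.argmax()
        if dev[worst] <= k * sd:
            break                      # no outliers left
        vals = np.delete(vals, worst)  # remove at most one pixel per iteration
    return vals.mean(), vals.std()
```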
  • the steps of the methods may be conducted sequentially or in parallel for all subsets.
  • Some of the naturally occurring structures of the image may influence the assessment of lesions in a disadvantageous manner.
  • Such structures are for example vessels, and the optic nerve head of a fundus image, since these structures present dark/bright areas in the image. Therefore, some adjustment to the structure is preferred.
  • The vascular system may be isolated from the rest of the image content.
  • One method for tracking vessels makes use of the fact that the vessels are linear in a local neighbourhood, applying filter matrices with different orientations.
  • the localisation and orientation of such line elements may be determined using a template matching approach (sometimes referred to as matched filters).
  • a preferred method for tracking vessels is by tracking individual vessels from starting points representative for vessels, and iteratively growing the vessel network of the retina.
  • a preferred embodiment hereof is described in PCT patent application No. PCT/DK02/00662.
  • the estimation of starting points and/or estimation of subsets is adjusted with respect to vessels appearing in the image.
  • the estimation of candidate lesion areas is preceded by detection of vessels in the image.
  • adjustment of starting points means that starting points located in vessels are removed from the plurality of starting points representative for a lesion.
  • subsets of the image having at least a portion of said subset located in a vessel are rejected as a candidate lesion area.
  • Yet another method for adjusting with respect to the vessels is having detected the vessels of the image, the vessels appearing in the image are masked before establishing starting points.
  • the vessels may be masked by any suitable method, for example by masking a number of pixels along the vessel, such as a number in the range of from 1 to 10 pixels.
  • It may also be of value to distinguish veins and arteries among the blood vessels. This can be important, for example in the diagnosis of venous beading and focal arteriolar narrowing.
  • the vascular system observed in ocular fundus images is by nature a 2-dimensional projection of a 3-dimensional structure. It is in principle quite difficult to distinguish veins from arteries solely by looking at isolated vessel segments. However, it has been discovered that effective separation can be achieved by making use of the fact that, individually, the artery structure and the vein structure are each a perfect tree (i.e., there is one unique path along the vessels from the heart to each capillary and back).
  • the artery and vein structures are each surface filling, so that all tissue is either supplied or drained by specific arteries or veins, respectively.
  • a method for distinguishing veins from arteries is described in WO 00/65982, which is based on the realisation that crossings of vessel segments are, for practical purposes, always between a vein and an artery (i.e. crossings between arteries and arteries or between veins and veins are, for practical purposes, non-existent).
  • Another structure capable of interfering with the assessment of lesions is the optic nerve head.
  • As opposed to vessels, the optic nerve head is not necessarily present in all images, depending on the region acquired by the camera or CCD.
  • the presence or absence of the optic nerve head area is assessed by a robust method before assessing the lesions.
  • Such a method is for example described in PCT patent application No. PCT/DK02/00663.
  • the estimation of starting points and/or estimation of subsets is adjusted with respect to the optic nerve head appearing in the image.
  • the estimation of candidate lesion areas is preceded by detection of the optic nerve head in the image.
  • adjustment of starting points means that starting points located in the optic nerve head are removed from the plurality of starting points representative for a lesion.
  • subsets of the image having at least a portion of said subset located in the optic nerve head are rejected as a candidate lesion area.
  • Yet another method for adjusting with respect to the optic nerve head is that, having detected the optic nerve head of the image, the optic nerve head appearing in the image is masked before establishing starting points.
  • the optic nerve head may be masked by any suitable method, for example by masking a number of pixels around the optic nerve head, such as a number corresponding to a constant multiplied with the diameter of the optic nerve head, optionally of an expected diameter of the optic nerve head, said constant being in the range of from 1.1 to 2.0, preferably about 1.5.
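A sketch of such a circular ONH mask; the centre, diameter and image size below are illustrative only, while the factor of 1.5 follows the preferred value in the text:

```python
import numpy as np

def mask_optic_nerve_head(shape, onh_center, onh_diameter, factor=1.5):
    """Boolean mask of the region to exclude around the optic nerve head.

    Pixels within factor * diameter / 2 of the ONH centre are masked;
    the text suggests a factor between 1.1 and 2.0, preferably about 1.5.
    """
    rr, cc = np.ogrid[:shape[0], :shape[1]]
    r0, c0 = onh_center
    radius = factor * onh_diameter / 2.0
    return (rr - r0) ** 2 + (cc - c0) ** 2 <= radius ** 2

mask = mask_optic_nerve_head((1000, 1500), (480, 1100), onh_diameter=180)
# Starting points where the mask is True would be removed before detection.
```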
  • the method according to the present invention includes weighting the visibility in relation to local intensity variation around the lesion in order to reduce false positive lesions due to, for example, nerve fibre layers, untracked vessels and reflections in the vitreous body.
  • a common feature of these false positives is that the local intensity variation around the lesion is relatively large, contrary to the majority of true lesions, which are located in homogenous areas.
  • the background may be defined slightly differently, in order to avoid the large lesion being interpreted as background when evaluating the smaller interior lesion.
  • the "foreground" may be the entire connected component in the lesion image consisting of positive-visibility pixels.
  • the background will be defined as above, but relative to this foreground region. Thus, all lesions that are overlapping will have the same background region. The principle is illustrated in Figure 15.
  • Regions grown around the fovea and in large lesions may otherwise be misclassified, as these usually have overlapping grown regions.
  • the local or immediate background may be defined as the band of pixels that are more than B_in and at most B_in + B_out pixels from the lesion.
  • the distance between a point and the lesion is defined as the smallest distance between the point and a pixel within the lesion.
  • B_out is the width of the background band around the lesion.
  • B_in is the width of the band separating the lesion and the background.
  • the ratio of the mean green-channel intensity in the background and in the lesion may be used to discriminate true and false lesions. For example a fixed threshold seems most appropriate: IR = Ī_green,out / Ī_green,in.
  • IR_thresh preferably is less than 1.1, such as between 1.01 and 1.09, preferably between 1.04 and 1.08, to discriminate a true lesion from a false lesion, a false lesion having a mean intensity ratio below IR_thresh.
  • Ī_green,out and Ī_green,in are the means of the green channel in the immediate background and in the lesion, respectively.
  • v is the usual normalised visibility measure.
  • σ²_poly,in and σ²_poly,out are the variances of the intensities in the poly-smoothed image inside the lesion and in the background, respectively.
  • the variance-weighted visibility measure is then compared with the predetermined visibility threshold as described above.
  • Alternatively, it is the corrected visibility that is weighted as described above, or the weighted visibility is compared with the corrected threshold rather than with the predetermined visibility threshold.
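The two discriminators can be sketched as follows. The intensity ratio follows the definition above; the variance weighting is only a plausible reading (ratio of inside to outside standard deviation of the poly-smoothed image), since the exact weighting formula is not reproduced in this extraction:

```python
import numpy as np

def intensity_ratio(green, lesion_mask, band_mask):
    """Mean green-channel ratio between immediate background and lesion."""
    return green[band_mask].mean() / green[lesion_mask].mean()

def variance_weighted_visibility(v, smoothed, lesion_mask, band_mask):
    """Down-weight visibility in locally noisy surroundings (sketch).

    Assumption: weight = sigma_in / sigma_out of the poly-smoothed image,
    penalising candidates lying in inhomogeneous areas, which the text
    identifies as a common trait of false positives.
    """
    s_in = smoothed[lesion_mask].std()
    s_out = smoothed[band_mask].std()
    return v * s_in / max(s_out, 1e-12)

# A candidate would be kept when intensity_ratio(...) >= IR_thresh
# (e.g. 1.04-1.08 per the text) and its weighted visibility exceeds
# the visibility threshold.
```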
  • the information regarding the lesion may be used for various purposes.
  • the present invention further relates to a method for diagnosing the presence or absence of a disease in an individual from a fundus image of at least one eye of said individual comprising
  • Diabetic retinopathy is a condition wherein the person has at least one of the following symptoms:
  • IRMA: intraretinal microvascular abnormalities
  • NVD: new vessels on the disc
  • NVE: new vessels elsewhere
  • the invention may be used to quantify the probability of a person having diabetic retinopathy.
  • the presence or absence of true lesions has an impact on the probability of diabetic retinopathy.
  • the present invention also relates to a method for assessing the probability of a diagnosis of diabetic retinopathy in an individual from a fundus image of at least one eye of said individual, comprising
  • a) estimating at least one subset of the image, wherein each subset is a candidate lesion area having a probability,
  • b) establishing information from said individual comprising at least one information type selected from the following: clinical information and structural information,
  • c) correcting the probability of the candidate lesion area, or a predetermined probability threshold, with said information, and
  • d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not.
  • the invention relates to a method for classifying a fundus image comprising
  • the invention further relates to a system for assessing the presence or absence of lesions in a fundus image.
  • the system according to the invention may be any system capable of conducting the method as described above as well as any combinations thereof within the scope of the invention. Accordingly, the system may include algorithms to perform any of the methods described above.
  • a graphical user interface module may operate in conjunction with a display screen of a display monitor.
  • the graphical user interface may be implemented as part of the processing system to receive input data and commands from a conventional keyboard and mouse through an interface and display results on a display monitor.
  • many components of a conventional computer system have not been discussed, such as address buffers, memory buffers, and other standard control circuits, because these elements are well known in the art and a detailed description thereof is not necessary for understanding the present invention.
  • Pre-acquired image data can be fed directly into the processing system through a network interface and stored locally on a mass storage device and/or in a memory. Furthermore, image data may also be supplied over a network, through a portable mass storage medium such as a removable hard disk, optical disks, tape drives, or any other type of data transfer and/or storage devices which are known in the art.
  • a parallel computer platform having multiple processors is also a suitable hardware platform for use with a system according to the present invention.
  • Such a configuration may include, but not be limited to, parallel machines and workstations with multiple processors.
  • the processing system can be a single computer, or several computers can be connected through a communications network to create a logical processing system.
  • the present system allows the grader, that is the person normally grading the images, to identify the lesions more rapidly and securely. Also, the present system allows an automatic detection of lesions and other pathologies of the retina without interference from the vessels, again as an aiding tool for the traditional grader.
  • With the present system it is also possible to arrange for recordation of the images at one location and examination of them at another location.
  • the images may be recorded by any optician or physician or elsewhere and be transported to the examining specialist, either as photos or the like or on digital media. Accordingly, by use of the present system, decentralised centres for recording the images may be realised while maintaining fewer expert graders.
  • the network may carry data signals including control or image adjustment signals by which the expert examining the images at the examining unit directly controls the image acquisition occurring at the recordation localisation, i.e. the acquisition unit.
  • such command signals as zoom magnification, steering adjustments, and wavelength of field illumination may be selectively varied remotely to achieve the desired imaging effect.
  • questionable tissue structures requiring greater magnification or a different perspective for their elucidation may be quickly resolved without ambiguity by varying such control parameters.
  • by switching illumination wavelengths views may be selectively taken to represent different layers of tissue, or to accentuate imaging of the vasculature and blood flow characteristics.
  • control signals may include time-varying signals to initiate stimulation with certain wavelengths of light, to initiate imaging at certain times after stimulation or delivery of dye or drugs, or other such precisely controlled imaging protocols.
  • the digital data signals for these operations may be interfaced to the ophthalmic equipment in a relatively straightforward fashion, provided such equipment already has initiating switches or internal digital circuitry for controlling the particular parameters involved, or is capable of readily adapting electric controls to such control parameters as system focus, illumination and the like.
  • the imaging and ophthalmic treatment instrumentation in this case will generally include a steering and stabilization system which maintains both instruments in alignment and stabilized on the structures appearing in the field of view.
  • the invention contemplates that the system control further includes image identification and correlation software which allows the ophthalmologist at site to identify particular positions in the retinal field of view, such as pinpointing particular vessels or tissue structures, and the image acquisition computer includes image recognition software which enables it to identify patterns in the video frames and correlate the identified position with each image frame as it is acquired at the acquisition site.
  • the image recognition software may lock onto a pattern of retinal vessels.
  • the invention further contemplates that the images provided by acquisition unit are processed for photogrammetric analysis of tissue features and optionally blood flow characteristics. This may be accomplished as follows. An image acquired at the recordation unit is sent to an examination unit, where it is displayed on the screen. As indicated schematically in the figure, such image may include a network of blood vessels having various diameters and lengths. These vessels include both arterial and venous capillaries constituting the blood supply and return network.
  • the workstation may be equipped with a photogrammetric measurement program which for example may enable the technician to place a cursor on an imaged vessel, and moving the cursor along the vessel while clicking, have the software automatically determine the width of the vessel and the subvessels to which it is connected, as well as the coordinates thereof.
  • the software for noting coordinates from the pixel positions and linking displayed features in a record, as well as submodules which determine vessel capacities and the like, are straightforward and readily built up from photogrammetric program techniques.
  • Work station protocols may also be implemented to automatically map the vasculature as described above, or to compare two images taken at historically different times and identify or annotate the changes which have occurred, highlighting for the operator features such as vessel erosion, tissue which has changed colour, or other differences.
  • a user graphical interface allows the specialist to type in diagnostic indications linked to the image, or to a particular feature appearing at a location in the image, so that the image or a processed version of it becomes more useful.
  • a very precise and well-annotated medical record may be readily compiled and may be compared to a previously taken view for detailed evidence of changes over a period of time, or may be compared, for example, to immediately preceding angiographic views in order to assess the actual degree of blood flow occurring therein.
  • the measurement entries at examination unit become an annotated image record and are stored in the central library as part of the patient's record.
  • the present invention changes the dynamics of patient access to care, and the efficiency of delivery of ophthalmic expertise in a manner that solves an enormous current health care dilemma, namely, the obstacle to proper universal screening for diabetic retinopathy.
  • the invention further includes a data carrier, such as a CD-ROM, said data carrier including algorithms to perform any of the methods described above, where the data carrier is operably connected to a data system.
  • the fitted distributions are based on the 400 images.
  • a first precondition is that the probabilities in the model must sum to one; q_0 is therefore given by this normalisation.
  • This data material is used in this and the following examples to show models for estimating the lesions.
  • a circle of radius δ DD centred on the fovea is considered, where the probability of a lesion being true is Ratio times greater than outside this circle (see Figure 6).
  • a probability of 0.0021 of being a true lesion outside the circle is used; it corresponds to the number of true lesions divided by the number of seed-points in the 400 images. Ratio times this probability is then considered inside the circle. Different values of this Ratio have been tested.
  • Prior Probability = [1 + R exp(−||x − f||² / (2σ²))] · [1 − exp(−||x − o||² / (2σ²))] · p(type)
  • the first term of this model represents the concentration of true lesions around the fovea.
  • the second one represents the ONH.
  • next to the fovea this prior probability is close to (1 + R)p(type), and it is close to 0 next to the ONH.
  • Example 4: The data material described in Example 3 was used.
  • the EDD is used instead of the DD for all the computations.
  • the results are as good, and the method is more robust to the ONH-detection.
  • Prior Probability = [1 + R · FuncFovea(Lesion, Fovea, EDD, σ)] · [1 − FuncONH(Lesion, ONH, EDD, σ)] · p(type)
  • NB g4-F means that a gaussian to the power 4 has been used around the fovea.
  • g2-ONH means that a usual gaussian has been used around the ONH in the given model.


Abstract

The present invention relates to a method for assessing the presence or absence of lesion(s) in an image and a system therefor, wherein said image may be any image potentially comprising lesions, in particular an image from medical image diagnostics, and more particularly an ocular fundus image. The lesions are identified from starting points being candidate lesion areas. To each candidate lesion area is assigned a probability, and said probability is corrected based on information from the individual.

Description

Assessment of lesions in an image
The present invention relates to a method for assessing the presence or absence of lesion(s) in an image from an individual and a system therefor, wherein said image may be any image potentially comprising lesions, in particular an image from medical image diagnostics, and more particularly an ocular fundus image. The probability of the lesion(s) is corrected with information from said individual.
Background
Fundus image analysis presents several challenges, such as high image variability, the need for reliable processing in the face of nonideal imaging conditions and short computation deadlines. Large variability is observed between different patients - even if healthy, with the situation worsening when pathologies exist. For the same patient, variability is observed under differing imaging conditions and during the course of a treatment or simply a long period of time. Besides, fundus images are often characterized by having a limited quality, being subject to improper illumination, glare, fadeout, loss of focus and artifacts arising from reflection, refraction, and dispersion.
Automatic extraction and analysis of the vascular tree of fundus images is an important task in fundus image analysis for several reasons. First of all, the vascular tree is the most prominent feature of the retina, and it is present regardless of health condition. This makes the vascular tree an obvious basis for automated registration and montage synthesis algorithms. Besides, the task of automatic and robust localization of the optic nerve head and fovea, as well as the task of automatic classification of veins and arteries in fundus images, may very well rely on a proper extraction of the vascular tree. Another example is the task of automatically detecting lesions which in many cases resemble the blood vessels. A properly extracted vessel tree may be a valuable tool in disqualifying false positive responses produced by such an algorithm, thus increasing its specificity. Finally, the vessels often display various pathological manifestations themselves, such as increased tortuosity, abnormal caliber changes and deproliferation. An automatic vessel tracking algorithm would be the obvious basis for analysis of these phenomena as well. Diabetes is the leading cause of blindness in working age adults. It is a disease that, among its many symptoms, includes a progressive impairment of the peripheral vascular system. These changes in the vasculature of the retina cause progressive vision impairment and eventually complete loss of sight. The tragedy of diabetic retinopathy is that in the vast majority of cases, blindness is preventable by early diagnosis and treatment, but screening programs that could provide early detection are not widespread.
Promising techniques for early detection of diabetic retinopathy presently exist. Researchers have found that retinopathy is preceded by visibly detectable changes in blood flow through the retina. Diagnostic techniques now exist that grade and classify diabetic retinopathy, and together with a series of retinal images taken at different times, these provide a methodology for the early detection of degeneration. Various medical, surgical and dietary interventions may then prevent the disease from progressing to blindness.
Despite the existing techniques for preventing diabetic blindness, only a small fraction of the afflicted population receives timely and proper care, and significant barriers separate most patients from state-of-the-art diabetes eye care. There are a limited number of ophthalmologists trained to evaluate retinopathy, and most are located in population centers. Many patients cannot afford the costs or the time for travel to a specialist. Additionally, cultural and language barriers often prevent elderly, rural and ethnic minority patients from seeking proper care. Moreover, because diabetes is a persistent disease and diabetic retinopathy is a degenerative disease, an afflicted patient requires lifelong disease management, including periodic examinations to monitor and record the condition of the retina, and sustained attention on the part of the patient to medical or behavioral guidelines. Such a sustained level of personal responsibility requires a high degree of motivation, and lifelong disease management can be a significant lifestyle burden. These factors increase the likelihood that the patient will, at least at some point, fail to receive proper disease management, often with catastrophic consequences.
Accordingly, it would be desirable to implement more widespread screening for retinal degeneration or pathology, and to positively address the financial, social and cultural barriers to implementation of such screening. It would also be desirable to improve the efficiency and quality of retinal evaluation.
Hence, a precise knowledge of both localisation and orientation of the structures of the fundus is important, including the localisation of the vessels. Currently, examination of fundus images is carried out principally by a clinician examining each image "manually". This is not only very time-consuming, since even an experienced clinician can take several minutes to assess a single image, but is also prone to error since there can be inconsistencies between the way in which different clinicians assess a given image. It is therefore desirable to provide ways of automating the process of the analysis of fundus images, using computerised image analysis, so as to provide at least preliminary screening information and also as an aid to diagnosis to assist the clinician in the analysis of difficult cases.
Next, it is generally desirable to provide a method of accurately determining, using computerised image analysis techniques, the position of both the papilla (the point of exit of the optic nerve) and the fovea (the region at the centre of the retina, where the retina is most sensitive to light), as well as vessels of the fundus.
Summary of the invention
In order to be able to make automatic detection of various structures in fundus images a reliable method of detecting lesions in fundus images actually containing the lesions, and reliably not detecting lesions in other images not comprising lesions, is necessary. Current methods may be able to detect the lesions in many images, but the methods are not reliable when applied to images not containing lesions.
Accordingly, the present invention relates to a method for detecting lesions in an image, wherein the detection process includes a wide range of information. Clearly important information lies in characteristics connected to the lesion itself, such as visibility, size, colour and so forth, but in the present context this is combined with and thereby supported by auxiliary information such as
1. Information of the position of the lesion with respect to main anatomical landmarks, such as the fovea, the optic nerve head and the vessels.

2. Clinical and demographic data of the patient under study, such as disease history.

3. Information on other lesions in the image, for instance characteristics of lesions in the vicinity of the potential lesion under consideration.

4. Information on lesions in other fundus images, for instance images of different fields or images acquired at an earlier visit.
The purpose of the method is to provide a framework where this type of covariate information may be included in the detection of individual lesions. This may allow the algorithm to analyse the images like the human grader does, by combining high- level information with low-level characteristics of the image, and may as such introduce a completely new paradigm in the lesion detection algorithm.
In particular the method relates to image diagnostics in medicine, such as X-rays, scanning images, photos, nuclear magnetic resonance scannings, CT scannings, as well as other images potentially comprising lesions.
Furthermore, the method should be robust in the sense that it should be applicable to a wide variety of images independent of illumination, presence of symptoms of diseases and/or artefacts of the image. Lesions may be any sign of disease or pathological condition that is detectable as local events in the image. Lesions of the retina normally embrace microaneurysms and exudates, which show up on fundus images as generally "dot shaped" (i.e. substantially circular) areas. It is of interest to distinguish between such microaneurysms and exudates, and further to distinguish them from other lesions or pathologies in the image, such as "cotton wool spots" and hemorrhages.
In one aspect, the present invention relates to a method for assessing the presence or absence of lesion(s) in a fundus image from an individual, comprising
a) estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,

b) establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) correcting the probability of the candidate lesion area with the information from said individual, comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold,
d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) optionally repeating steps a) to d) until all candidate lesion areas have been classified.
Furthermore, the method may include steps by which the lesions detected are corrected with respect to the background, in particular the local background in the vicinity of the lesion, in order to be able to detect lesions independently of the background in the specific image, including variations of background in the images, for example due to varying illumination of the various parts of the image.
Furthermore, the invention relates to a system for carrying out the methods according to the invention, such as a system for assessing the presence or absence of lesion(s) in a fundus image of an individual, comprising
a) an algorithm for estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,
b) an algorithm for establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) an algorithm for correcting the probability of the candidate lesion area with the information from said individual, comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold,
d) an algorithm for classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) an algorithm for optionally repeating steps a) to d) until all candidate lesion areas have been classified.
Said system is capable of incorporating any of the variations of the methods described herein.
Also, the invention relates to a method for diagnosing the presence or absence of a disease in an individual from a fundus image of at least one eye of said individual comprising
- assessing the presence or absence of at least one lesion by the method as de- fined above,
- grading the fundus image with respect to number and/or size and/or placement of lesions,
- diagnosing the presence or absence of the disease.
Furthermore, the invention relates to a method for assessing the probability of a diagnosis of diabetic retinopathy in an individual by using the method according to the invention, i.e. a method starting from a fundus image of at least one eye of said individual, and comprising
a) estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,

b) establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) correcting the probability of the candidate lesion area with the structural information from said individual, comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the structural information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold, i.e. at this step correcting only with the structural information,
d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) optionally repeating steps a) to d) until all candidate lesion areas have been classified,
obtaining a diagnosis of diabetic retinopathy having an initial probability, correcting the initial probability of the diagnosis with the clinical information, thereby obtaining the probability of the diagnosis of diabetic retinopathy.
The image may then be classified depending of size and/or numbers and/or placement of lesions in the image, and accordingly, the invention relates to a method for classifying a fundus image comprising
- assessing the presence or absence of at least one lesion by the method as defined above,
- grading the fundus image with respect to number and/or size and/or placement of lesions,
- classifying the fundus image into at least two classes.
All the methods described herein are preferably for use in an automatic method, for example as included in a computer readable program.

Drawings
Figure 1 : Fundus image.
Figure 2: Accumulated presentation of lesions detected by expert graders in 199 fundus images.
Figure 3: Density presentation of the lesions in Figure 2.
Figure 4: Heat density plot of the lesions in Figure 2.
Figure 5: 3D plot of the density presentation of Figure 3.
Figure 6: Designation of probability area in fundus.
Figure 7: A flowchart showing the Watershed procedure.
Figure 8: 1-D example of pixels to process using the Watershed algorithm with a tolerance of 1.
Figure 9: The figure shows a partial image of the eye fundus, wherein a circle showing the 75 pixels radius background region of a lesion is arranged.
Figure 10: The background region in the gradient image.
Figure 11 : Flow chart for normalization process.
Figure 12: Flow chart for growing process.
Figure 13: Schematic drawing showing the calculation of the visibility feature.
Figure 14: An example of a grown lesion and the band around it representing the background.

Figure 15: An example of overlapping regions. The left panel displays the three regions which are grown from seed points located in the proliferation displayed in the right panel. The band around the largest grown region is used as background for all of the three regions.
Figure 16: Figure 16A shows a histogram with the fitted gamma density of true bright lesions in a material of 400 fundus images, and Figure 16B shows a histogram with the fitted gamma density of false bright lesions in the material of 400 fundus images.
Figure 17: shows the fitted visibility distributions of bright lesions, wherein the curve to the left represents false lesions, and the curve to the right represents true lesions.
Figure 18: Figure 18A shows a histogram with the fitted gamma density of true dark lesions in a material of 400 fundus images, and Figure 18B shows a histogram with the fitted gamma density of false dark lesions in the material of 400 fundus images.
Figure 19: shows the fitted visibility distributions of dark lesions, wherein the curve to the left represents false lesions, and the curve to the right represents true lesions.
Figure 20: 3D comparison of the distributions of Example 3 and Example 4.
Definitions
Fovea: The term is used in its normal anatomical meaning, i.e. the spot in the retina having a great concentration of cones giving rise to sharp central vision. Fovea and the term "macula lutea" are used as synonyms.
Image: The term image is used to describe a representation of the region to be examined, i.e. the term image includes 1-dimensional representations, 2-dimensional representations, 3-dimensional representations as well as n-dimensional representations. Thus, the term image includes a volume of the region, a matrix of the region as well as an array of information of the region.

Lesion in fundus images: Any pathology present in the fundus, such as microaneurysms, exudates, hemorrhages, cotton wool spots. Preferably, lesions refer to the dot-shaped lesions: microaneurysms and exudates.
Optic nerve head: The term is used in its normal anatomical meaning, i.e. the area in the fundus of the eye where the optic nerve enters the retina. Synonyms for the area are, for example, the "blind" spot, the papilla, or the optic disc.
Prior probability: The term is used in its normal meaning, i.e. the probability before combination with other information. Prior probability is synonymous with a priori probability.
Posterior probability: The term is used in its normal meaning, i.e. the probability after combination with other information. Posterior probability is synonymous with a pos- teriori probability.
Red-green-blue image: The term relates to the image having the red channel, the green channel and the blue channel, also called the RGB image.
ROI: Region of interest.
Starting point: The term describes a point or area for starting the search for a subset. The term starting point is thus not limited to a mathematical point, such as not limited to a pixel, but merely denotes a localisation for starting search.
Visibility: The term visibility is used in the normal meaning of the word, i.e. how visible a lesion or a structure of the fundus region is compared to background and other structures/lesions.
Detailed description of the invention
Images
The images of the present invention may be any sort of images and presentations of the region of interest. Fundus images are a conventional tool for examining the retina and may be recorded on any suitable means. In one embodiment the image is presented on a medium selected from slides, paper photos or digital photos. However, the image may be any other kind of representation, such as a presentation on an array of elements, for example a CCD.
The image may be a grey-toned image or a colour image; in a preferred embodiment the image is a colour image.
Probability
The problem when evaluating the images is to evaluate whether a subset of the image representing a candidate lesion is a true or a false lesion, or in other words, to determine the probability that the candidate lesion is a true lesion. Thus, the present invention relates to assigning a prior probability to each candidate lesion, then correcting the prior probability with other information as described herein to obtain a posterior probability, and using the posterior probability to determine whether the candidate lesion is a true lesion or not.
In one embodiment the starting point for determining the probability of the candidate lesion to be a true lesion is to determine the visibility of the candidate lesion.
The term visibility is used in the normal meaning of the word, i.e. how visible a lesion or a structure of the fundus region is compared to the background and other structures/lesions. The visibility of an area may be determined as a vector of features, including intensity, visibility of the candidate lesion compared to the visibility of the vessels, visibility of the edge of the candidate lesion, colour information of the candidate lesion, a variance measure of a part of the image and/or a variance measure of the image. In a preferred embodiment the visibility of the edge of the candidate lesion is calculated as the orientated candidate lesion area edge gradient, in particular a weighted edge gradient.
Thus, the visibility feature may be based on a summation of orientation weighted region border gradient pixels. In particular the gradient pixels should be weighted according to their orientation, α, towards the grown region, for example by applying the following formula to the candidate lesion area:

Visibility = (1/N) Σ_(i=1..N) ||∇Iᵢ|| cos(αᵢ)

where N is the number of pixels in the outline of the lesion and ∇Iᵢ is the image gradient at outline pixel i. Let ηᵢ be the vector between the centre of mass of the lesion and the image point i; αᵢ is then the angle between ηᵢ and the gradient orientation at the point.
Examples of the vectors are shown in Figure 13.
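For illustration only, this computation may be sketched in Python as follows; the Sobel gradient estimate and all function and variable names are choices of this sketch, not prescribed by the invention:

    import numpy as np
    from scipy import ndimage

    def visibility(image, outline, centre):
        # Orientation-weighted mean border gradient of a grown region (sketch).
        # image: 2-D grey-level array; outline: list of (row, col) border pixels;
        # centre: (row, col) centre of mass of the region.
        img = image.astype(float)
        gr = ndimage.sobel(img, axis=0)   # gradient in the row direction
        gc = ndimage.sobel(img, axis=1)   # gradient in the column direction
        total = 0.0
        for (r, c) in outline:
            eta = np.array([r - centre[0], c - centre[1]], dtype=float)
            grad = np.array([gr[r, c], gc[r, c]])
            denom = np.linalg.norm(eta) * np.linalg.norm(grad)
            cos_alpha = float(eta @ grad) / denom if denom > 0 else 0.0
            total += np.linalg.norm(grad) * cos_alpha   # weight gradient by cos(alpha_i)
        return total / len(outline)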
The visibility feature may also be calculated from an image wherein the vessels have been "removed". This is done by subtracting the vessel image V(r,c) from the original image I(r,c), and producing a vessel "restored" image, i.e. an image wherein interpolated background values have been introduced instead of vessels, by

I_restored(r,c) = I(r,c)(1 − V(r,c)) + B(r,c,w(r,c)) V(r,c)

The interpolated value may be produced by

B(r,c,w) = [Σ_(r',c') I(r',c')(1 − V(r',c')) h(||(r,c) − (r',c')|| / w)] / [Σ_(r',c') (1 − V(r',c')) h(||(r,c) − (r',c')|| / w)]

where h(x) = exp(−x²/2). The kernel width w(r,c) is set to the distance from pixel (r,c) to the closest background pixel. By the same principle, other features in the image may be "removed". This "removal" is conducted to avoid false positive lesions in proximity to features such as vessels.
To illustrate the basic framework of the present invention, a potential lesion located at x with visibility v is considered. The aim is to classify the lesion as true or false, and the classification problem may be formulated in terms of predicting the value of an unobserved variable A, where A = 0 if the lesion is false, and A = 1 if the lesion is true. In a simple embodiment without a prior model, a single visibility threshold is applied, and A is estimated by
• A = 1 if v ≥ T, and
• A = 0 if v < T,

where T is the visibility threshold. The visibility threshold may depend on the type of the lesion, but otherwise the threshold is the same for all lesions.
The probability of the candidate lesion is compared to a threshold value, and lesions having a probability above that threshold value are considered true lesions. The idea of the present invention is thus to adapt the threshold to the specific lesion under consideration. According to the invention the threshold may for example be lowered if several clear lesions are present in the image, as it is then more likely that the present lesion is also true, or a lower threshold may be used for patients with a long duration of diabetes, as these have a higher prevalence of diabetic retinopathy.
Thus, as described in the following, the probability of the candidate lesion area may be corrected with the information from said individual, and the corrected probability is compared with a predetermined probability threshold for lesions; or a predetermined probability threshold is corrected with the information from said individual, whereafter the probability of the candidate lesion area is compared with the corrected predetermined probability threshold.
This may be formulated in a Bayesian framework, where it is assumed that the variable A is a stochastic variable with a certain distribution, known as the a priori distribution. Thus, P(A = 1) = p(x), where the probability may depend on covariates related to the lesion x. This may for instance be the anatomical position of the lesion, or clinical covariate information.
The visibility of the lesion v will depend on the status of the lesion, i.e. on the value of A. If the lesion is true, the visibility will in general be higher than if it is false, but in practice an overlapping range of different visibility values is observed for true and false lesions. If for example f(v | A = a), or briefly f(v | a), denotes the density of this visibility distribution given that the lesion has the state a, then

• f(v | 0) is the density of the visibility distribution for false lesions, and
• f(v | 1) is the corresponding density for true lesions.
See Figures 16, 17, 18 and 19 for examples of these distributions, shown on a material of 400 fundus images.
The posterior probability that a lesion is true, conditional on the observed visibility, is obtained by Bayes' formula:

P(A = 1 | v) = P(A = 1) f(v | 1) / [P(A = 1) f(v | 1) + P(A = 0) f(v | 0)]
             = p(x) f(v | 1) / [p(x) f(v | 1) + (1 − p(x)) f(v | 0)].
The posterior probability is thus a measure of how likely it is that the lesion is true, obtained by combining prior knowledge summarised by p(x) and the actual observation in the image, summarised by v. One may interpret the measure as a weighting of odds, by rewriting the expression in terms of odds in favour of the lesion being true,

P(A = 1 | v) / P(A = 0 | v) = [p(x) / (1 − p(x))] · [f(v | 1) / f(v | 0)].    (1)
In other words the posterior odds are obtained by multiplying the prior odds with the odds determined by the data.
The final classification of the lesion as a true lesion or not may be obtained by thresholding the posterior probability at a fixed value P_T, in the sense that A is estimated by

• A = 1 if P(A = 1 | v) ≥ P_T, and
• A = 0 if P(A = 1 | v) < P_T.
These steps are preferably repeated until all candidate lesion areas have been classified.
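By way of example, the posterior computation and the thresholding may be sketched as follows, assuming fitted gamma visibility densities as in Figures 16-19; the density parameters, the prior and the threshold are hypothetical values chosen for illustration:

    from scipy.stats import gamma

    def posterior_true(v, p_x, f1, f0):
        # Bayes' formula: P(A = 1 | v) from the prior p(x) and densities f(v|1), f(v|0).
        num = p_x * f1.pdf(v)
        return num / (num + (1.0 - p_x) * f0.pdf(v))

    f_true = gamma(a=4.0, scale=2.0)    # hypothetical fitted density of true lesions
    f_false = gamma(a=2.0, scale=1.0)   # hypothetical fitted density of false lesions

    P_T = 0.5                           # posterior probability threshold
    A = 1 if posterior_true(6.3, p_x=0.002, f1=f_true, f0=f_false) >= P_T else 0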
Information
As described above, the likelihood that the actual observation in the image, for example the visibility of a candidate lesion, represents a true lesion is combined with prior knowledge, in the present context called "information from the individual". Said information may be any relevant information capable of adding information to the actual observation in the image relating to the candidate lesion. In particular said information is selected from the following: clinical information and structural information. Accordingly, another initial step is establishing information from the individual whose fundus image is studied.
Several different types of information may be included in the method, such as at least two different types of information, such as at least one structural type of information and at least one clinical type of information.
Clinical information
The term "clinical information" relates in the present context to any kind of clinical and demographic data of the individual under study since they may be believed to be predictors for the progression to (sight-threatening) diabetic retinopathy. The clinical information is one or more clinical information types, for example clinical information types selected from information about age of said individual, information about sex of said individual, information about ethnicity of said individual, informa- tion about diseases of said individual, and information about at least one clinical test of said individual.
Many clinical tests may be relevant for being combined with the actual information from the image, in particular clinical tests having a direct or indirect relation to either the eyes or diseases being reflected in the eyes. Accordingly, for example blood tests may be relevant, such as information about metabolic control variables, such as blood cholesterol level, blood glucose level, and HbA1c level.
Also, visual tests are of relevance, such as Visual Acuity test or an autorefraction test.
Furthermore, the invention may include information about the presence or absence of diseases in said individual. In particular information about diseases selected from diabetes, atherosclerosis, and hypertension, such as information about duration of disease, severity of disease, or simply a statement that the disease is present or absent. Other information, such as information about pregnancy, puberty or previous cataract surgery, may also be relevant.
The information or covariates for the individual may be collected in a vector t ∈ R^d. A common way of modelling the dependence of disease prevalence as a function of explanatory variables is by logistic regression, where the following model is assumed,
log[P(A = 1 | t) / (1 − P(A = 1 | t))] = α + βᵀt.
The prior parameters α ∈ R and β ∈ R^d may be estimated in a study where all true lesions are marked in the images, and clinical information is available. Alternatively, previous epidemiological studies in the literature may provide sensible values for the parameters.
Accordingly, a prior based on covariate information may be given by

p₃(t) = (1 + exp(−α − βᵀt))⁻¹
Note that this prior only depends on variables relating to the patient and thus has the same value for all potential lesions in images from the specific visit. This is included to change the threshold from patient to patient, in order to reflect the fact that, for instance, it is more likely that a patient with a long duration of diabetes has retinal lesions, than a patient with a short duration.
It may be combined with the structural information described herein to yield an overall prior reflecting both variables relating to the patient and the anatomical position of a potential lesion, p(x, t) = p₁(x)p₂(x)p₃(t), where the spatial priors p₁ and p₂ are defined later herein.
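A sketch of the covariate prior p3(t) is given below; the covariates and the parameter values are hypothetical, and would in practice be estimated from a marked study or taken from epidemiological studies as described above:

    import numpy as np

    def clinical_prior(t, alpha, beta):
        # p3(t) = (1 + exp(-alpha - beta' t))^(-1), the logistic model in the covariates t.
        return 1.0 / (1.0 + np.exp(-alpha - np.dot(beta, t)))

    t = np.array([12.0, 8.1])                   # e.g. duration of diabetes (years), HbA1c (%)
    alpha, beta = -4.0, np.array([0.04, 0.15])  # assumed parameter values
    p3 = clinical_prior(t, alpha, beta)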
Structural information
The term "structural information" includes information of any structures of the fundus image being studied as well as information of any structures in fundus images previously being acquired preferably from said eye. Accordingly, the structural informa- tion may be selected from regional information, lesion information, and vessel information.
In particular the structural information includes information of the position of the lesion with respect to anatomical landmarks, such as the fovea, the optic nerve head and the vessels, information of other lesions in the image, and information of lesions in other fundus images, for instance images of different fields of the same fundus, or from previous images of the same field.
In one embodiment the lesion information is selected from information about the number of lesions in another fundus of the same individual, information about previous lesion(s) in the same and/or another fundus of said individual, information of other lesions in the same fundus image, information of a lesion in the same subset of another fundus image of the same fundus, and information about the number of candidate lesion areas.
The information about previous lesion(s) in the same or another fundus of said individual preferably comprises information about optional previous lesion(s) in substantially the same image subset of the fundus. The knowledge that previously at least one lesion was found in this individual may lead to a lowering of the threshold for the probability.
The information about other lesions and/or candidate lesion(s) in the same fundus image preferably comprises information about
- distance to at least one other lesion and/or candidate lesion(s) in the same fundus image and/or
- numbers of other lesions and/or candidate lesion(s) in the same fundus image, such as numbers of other lesions and/or candidate lesion(s) within a predetermined region of the same fundus image, and/or
- other lesions and/or candidate lesion(s) within a predetermined distance from said subset and/or - the size of at least one other lesion and/or candidate lesion(s), and/or
- other lesions and/or candidate lesion(s) in a predetermined distance from fovea in the image and/or
- type of lesion(s), such as light and dark lesions, and/or
- the probability of at least one other lesion.
Thus, in one embodiment the method includes information on other lesions in the classification, as described in detail below. A slightly more general notation is used for this, so from now on let x₀ denote the (position of the) potential lesion under consideration, and let x₁, ..., xₖ denote the neighbouring lesions to x₀. Here neighbours should be interpreted in a broad sense, as the neighbouring lesions may be located in other images from the same visit or even in images from another visit. Correspondingly, let A = (A₀, A₁, ..., Aₖ) ∈ {0,1}^(k+1) denote the unknown indicator variables that define the true state of the lesions, v = (v₀, v₁, ..., vₖ) the visibilities of the lesions, and τ = (τ₀, τ₁, ..., τₖ) the types of the lesions (0=dark, 1=bright). The variables are listed for quick reference in Table 1.
Table 1. Variables used in the interaction model.

Variable                                Interpretation                                            Status
x = (x₀, x₁, ..., xₖ) ∈ R^(2(k+1))      The (position of) the lesion under consideration (x₀)     Known
                                        and its neighbours (x₁, ..., xₖ)
v = (v₀, v₁, ..., vₖ) ∈ R^(k+1)         The visibilities of the lesions                           Known
τ = (τ₀, τ₁, ..., τₖ) ∈ {0,1}^(k+1)     The type of the lesions (0=dark, 1=bright)                Known
A = (A₀, A₁, ..., Aₖ) ∈ {0,1}^(k+1)     Indicator variables for the state of the lesions          Unknown
                                        (0=false, 1=true)

The prior is formulated in terms of the joint distribution of the vector A. One possibility is the following form:

P(A = a) = q₀,  if aᵢ = 0 for all i,
P(A = a) = p(x₀)^(a₀) λ₀^(a₀) λ₁^(a₁) ⋯ λₖ^(aₖ),  else,

for a ∈ {0,1}^(k+1).    (4)
With this expression, the prior probability of A₀ = 1 is given by p(x₀). The prior probability function p(·) may for instance be modelled by any of the spatial priors to be discussed later herein.
Note that the parameter q₀ is determined by the fact that the probabilities must sum to one (an expression for q₀ is given in Example 2). The parameters of the model are thus λ₀, λ₁, ..., λₖ. The interpretation of these is the following:
• The parameter λ₀ controls the interaction between A₀ and Aᵢ for any i ∈ {1, ..., k}. This may be seen from the fact that

P(A₀ = 1 | Aᵢ = 1) = λ₀ / (1 + λ₀).
If λ₀/(1 + λ₀) > p(x₀) there is a positive correlation between Aᵢ and A₀, in the sense that if we know that Aᵢ = 1, there is a greater probability that A₀ = 1 compared to the unconditional probability. The correlation increases with the value of λ₀. If, on the other hand, λ₀/(1 + λ₀) = p(x₀), the variables are uncorrelated and there is hence no spatial interaction. This corresponds to the simple models described in the previous section.

• For a given value of λ₀, the parameter λᵢ for i ∈ {1, ..., k} determines the prior probability of Aᵢ = 1. This may be seen by the expression
P(Aᵢ = 1) = λᵢ(1 + λ₀) / (1 + λᵢ(1 + λ₀)).

It may be natural to require that P(Aᵢ = 1) = p(xᵢ), according to the chosen spatial prior p(·), in which case λᵢ is fixed, and determined by the above expression.
The posterior odds for A₀ = 1 may be expressed in closed form; the resulting expression is referred to as equation (5), and the derivation is given in Example 2. Note that the correlation parameter λ₀ determines the weight which is put on the second term in the brackets, which contains the visibilities of the neighbouring lesions. If the independent model is used, where λ₀/(1 + λ₀) = p(x₀), the second term vanishes and the posterior odds reduce to those of the simple model (1) without combination with other lesions. On the other hand, the higher the value of λ₀, the more weight is put on the second term, i.e. the more influence the neighbouring lesions have on the centre lesion.
The role of λᵢ, i ∈ {1, ..., k}, in the model is also illustrated in (5): the smaller λᵢ is, the smaller influence does vᵢ have on the posterior probability. For the sake of illustration, if λᵢ = 0, there is no interaction between Aᵢ and A₀. Besides the relation between λᵢ and P(Aᵢ = 1) given earlier, λᵢ may be thought of as a weight parameter that controls the influence of Aᵢ on A₀. From this point of view, it is natural to let λᵢ decrease with the distance between xᵢ and x₀, for instance by the expression
λᵢ = λ exp(−dᵢ / a),    (6)
where dᵢ is the distance between xᵢ and x₀. Likewise, it is also natural to distinguish between the interaction between lesions of the same type and lesions of different type, for instance

λᵢ = λ w_(τ₀τᵢ) exp(−dᵢ / a),    (7)

where w₀₀, w₀₁, w₁₀ and w₁₁ are weights controlling the interaction between lesions of the different types.
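For illustration, the distance- and type-dependent weights of (6) and (7) may be computed as in the following sketch; the numerical weight values are hypothetical:

    import numpy as np

    def neighbour_weight(x_i, x_0, tau_i, tau_0, lam, a, w):
        # lambda_i = lambda * w[tau_0][tau_i] * exp(-d_i / a), cf. (6) and (7).
        d_i = np.linalg.norm(np.asarray(x_i, float) - np.asarray(x_0, float))
        return lam * w[tau_0][tau_i] * np.exp(-d_i / a)

    # Hypothetical type weights: same-type lesions interact more strongly.
    w = [[1.0, 0.3],
         [0.3, 1.0]]
    lam_1 = neighbour_weight((120, 88), (100, 90), tau_i=1, tau_0=1, lam=2.0, a=50.0, w=w)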
The regional information may also be or include the information about the region of the fundus image comprising the at least one subset of the image. It has been found that more true lesions are found in some areas of the fundus as compared to other areas of the fundus. In particular the region around fovea, see Figures 2-6, has been shown to exhibit more true lesions than other regions.
The anatomical features are normally selected from fovea, optic nerve head and vessels, such as the main arcades. The information of the relation of the candidate lesions to one or more of the anatomical features may be included, such as the distance to the anatomical feature, or simply information as to whether the candidate lesion is within a certain distance from the anatomical feature. For example the structural information may include information that is a function of the distance to fovea for correcting the probability in step c).
Accordingly, the prior probability may be corrected with the spatial position of the lesion, in order to include the knowledge that lesions are more frequent close to the fovea than in peripheral regions.
As a simple example, the prior model is defined by

p₁(x) = (1 + R)p,  if ||x − f|| < μ,
p₁(x) = p,          otherwise,

where f is the point of the fovea. The prior expresses the finding that it is R times more likely to detect a lesion in the circle of radius μ around fovea than outside this circle. The radius may for instance be chosen as one disk diameter (DD) to reflect the distinction typically made in grading protocols between the severity of lesions within and outside 1 DD of fovea, see also Figure 6.
In this situation the posterior odds that the lesion is true are thus

P(A = 1 | v) / P(A = 0 | v) = [f(v | 1) / f(v | 0)] · (1 + R)p / (1 − (1 + R)p),  if ||x − f|| < μ,
P(A = 1 | v) / P(A = 0 | v) = [f(v | 1) / f(v | 0)] · p / (1 − p),                 otherwise.
Hence when p and pR are small, the posterior odds for a lesion being true are approximately multiplied by R near fovea. In one embodiment, potential lesions closer than 1.5 DD to the ONH are considered false lesions. This rule may be formulated in the Bayesian framework by using the prior
p₂(x) = 0,  if ||x − o|| < μ,
p₂(x) = 1,  otherwise,

where o is the centre of the ONH and μ = 1.5 DD. The two priors p₁ and p₂ may be multiplied to form a single spatial prior, which encompasses both the fovea and the ONH, p(x) = p₁(x)p₂(x). The priors may be formulated in terms of a general kernel k(x) as p₁(x) = p[1 + R k(||x − f|| / μ)] and p₂(x) = 1 − k(||x − o|| / η).
Here the previous expressions are obtained by choosing the kernel k(x) = 1(x < 1), but to obtain a continuous prior, a Gaussian kernel k(x) = exp(−x² / 2) may for instance be used instead, see Figure 20.
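The combined spatial prior may then be sketched as follows, assuming that distances are measured in disc diameters and using the illustrative values p = 0.0021, mu = 1 DD and eta = 1.5 DD mentioned herein; the value of R is hypothetical:

    import numpy as np

    def k_gauss(x):
        # Gaussian kernel k(x) = exp(-x^2 / 2); k(x) = 1(x < 1) gives the hard circles.
        return np.exp(-float(x) ** 2 / 2.0)

    def spatial_prior(x, fovea, onh, p=0.0021, R=10.0, mu=1.0, eta=1.5):
        # p(x) = p1(x) p2(x): raised prior around the fovea, suppressed prior near the ONH.
        d_f = np.linalg.norm(np.asarray(x, float) - np.asarray(fovea, float))
        d_o = np.linalg.norm(np.asarray(x, float) - np.asarray(onh, float))
        p1 = p * (1.0 + R * k_gauss(d_f / mu))
        p2 = 1.0 - k_gauss(d_o / eta)
        return p1 * p2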
Preferably, at least two different types of information are included in the method, more preferably at least three different types of information, such as at least four different types of information, five different types of information, six different types of information, seven different types of information, eight different types of information, nine different types of information, or ten different types of information.
Subsets
In one of the first steps, the method includes the estimation of a subset of the image, wherein the subset may represent a candidate lesion area. Thus, at least one subset is established in the image, wherein the subset is a candidate lesion area. The term subset is used in its normal meaning, i.e. one or more pixels.
The subset may be established by any suitable method, for example by filtering, by template matching, by establishing starting points, and from said starting points grow regions and/or by other methods search for candidate areas, and/or combinations thereof. In one embodiment, the candidate lesion area(s) are detected by establishing starting points, and from the starting points estimating the subset. Two or more subsets, each representing the same lesion may be detected, such as overlapping subsets or adjacent subsets.
In a preferred embodiment the subset is a connected subset, i.e. all the pixels of the subset connect to at least one of the other pixels, and it is possible to reach any of the pixels from any of the pixels by following pixels in the subset. In yet a preferred embodiment the estimation of the subset of the image comprises establishing the periphery of the subset. The periphery may be established for example by an active contour model (snake) (see M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active Contour Models"), by templating or by growing.
Establishing starting points
The subset may be established through establishing starting points. Starting points may be established by a variety of suitable methods and by combinations of such methods. The variability of fundus images is particularly relevant regarding image dynamics; the contrast may vary considerably from image to image and even from region to region in the same fundus image. A proper starting point algorithm should recognize this circumstance and seek to adapt its sensitivity to the image at hand. The image may be filtered and/or blurred before establishing, or as a part of establishing, starting points for the method. For example the low frequencies of the image may be removed before establishing starting points. Also, the image may be unsharp filtered, for example by median or mean filtering the image and subtracting the filtered result from the image.
Independent of whether the image is filtered or not the starting points may be established as extrema of the image, such as local extrema. However, the image is preferably a filtered image, wherein the filtering may be linear and/or non-linear. Depending on the type of lesions assessed, the extrema may be minima or maxima or both.
In one embodiment the filtering method is a template matching method, wherein the template may exhibit any suitable geometry for identifying the lesions. Examples of templates are circles, wherein the circles have a radius set as a ratio of the expected diameter of the optic nerve head. It is within the scope of the invention that the image may be filtered with one or more filters before establishing starting points, or as a part of the step of establishing starting points. Thus, in one embodiment of the invention starting points are established by combining two or more filters.
The extrema may thus be identified individually by one or more of several methods, such as the following:
The lesions are normally either dark areas or light areas in the image, or at least locally the darkest areas or the lightest areas. Thus, a method may be establishing at least one intensity extremum in the image, preferably at least one intensity minimum or at least one intensity maximum. Therefore, in a preferred embodiment at least one local intensity maximum is established. The extrema may be established on any image function, such as wherein the image function is the unsharped image, the red channel image, the green channel image, or any combinations thereof. In a preferred embodiment the image function is the green channel.
Instead of using intensity or in addition to using intensity the method may include establishing at least one variance extremum in the image, preferably establishing at least one variance maximum in the image. For the same reasons as described with respect to the intensity at least one local variance maximum is established. The extrema may be established on any image function, such as wherein the image function is the unsharped image, the red channel image, the green channel image, or any combinations thereof. In a preferred embodiment the image function is the green channel.
Another method for establishing starting points may be random establishment of starting points, wherein the ultimate random establishment is establishing a starting point in substantially each pixel of the image. Of course a random establishment may be combined with any of the methods discussed above.
In yet a further embodiment the starting points may be established as grid points, such as evenly distributed or unevenly distributed grid points. Again this method may be combined with any of the methods of establishing extrema in the image and/or random establishment.
In a preferred method starting points are established by more than one of the methods described, in order to increase the probability of assessing the correct localisation of lesions, if present, also with respect to images having less optimal illumination or presenting other forms of less optimal image quality, a problem that increases when fundus images are recorded decentrally and by less experienced staff than is the case at specialised hospital departments.
In a more preferred embodiment the starting points are established by localising the local minima and/or maxima of the green channel image function, and letting them act as starting points.
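As an illustration, such seed points may be found as local extrema of the green channel as in the following sketch; the neighbourhood size is an assumption of the sketch:

    import numpy as np
    from scipy import ndimage

    def starting_points(green, size=5):
        # Local minima and maxima of the green channel act as starting points:
        # dark lesions are grown from minima, bright lesions from maxima.
        g = green.astype(float)
        minima = g == ndimage.minimum_filter(g, size=size)
        maxima = g == ndimage.maximum_filter(g, size=size)
        return np.argwhere(minima), np.argwhere(maxima)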
Growing
In a preferred embodiment the subset is established by growing a subset from a starting point. The growing of an object is used to segment an object from the background. The method may be used to grow both dark and bright objects, the algorithm for the one simply being an inversion of the algorithm of the other. The most essential part of the growing method is to limit the object with respect to the background. This limitation may be done in any way, for example by examining the visibility feature as described below for a wide range of isocurves, or object depths, and then simply selecting the depth which results in the highest possible visibility feature.
Thus, in this embodiment the establishment of subsets may be explained as growing q isocurves based on at least one growing feature of the area around the starting point, q being an integer of at least 1, until the periphery of the candidate lesion area is established. That is, for each starting point a number of isocurves, wherein each isocurve may represent a candidate lesion area, is grown from the starting point. In other words, the growing process may give rise to extraction of more than one subset, the number of subsets for example corresponding to equally distant isocurves. Preferably the area of the smallest subset exceeds that of the starting point itself, and the area of the largest subset remains below a predetermined value. However, normally only one of the isocurves establishes the periphery of the lesion, said isocurve being the isocurve having the highest probability of being a candidate lesion area. The probability may for example be the highest visibility as described below. The subset of the image then implies the region contained by an isocurve resulting from the growing process, and the isocurve itself implies the periphery of the subset.
The growing algorithm is initialized in the starting point for the subset. Increasing the height in equidistant levels results in a number of grown isocurves. The step depth may be arbitrarily set, but is normally for practical reasons chosen to be 1, as the pixel levels originate from byte images, which have discrete integer values. In principle the algorithm may continue for the whole image starting from each starting point. However, again for practical reasons, and because the lesions assessed in the image are known to have certain normal ranges of size, it is appropriate to apply at least one limitation to the growing, namely that candidate lesion areas above a certain predetermined area are not allowed. Furthermore, another limitation may be applied either additionally or alone, namely that the candidate lesion is limited by a minimum and a maximum number of isocurves.
The predetermined value described above is preferably in the range of from 0.1 to 1.0, such as in the range of from 0.2 to 0.8, such as in the range of from 0.3 to 0.6.
The growing procedure is shown by means of a flow chart in Figure 12.
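The depth-selection principle may be sketched as follows for a dark lesion grown from a local minimum; the area limit, the number of levels and the visibility function are assumptions of the sketch:

    import numpy as np
    from scipy import ndimage

    def grow_candidate(image, seed, max_area, visibility_fn, step=1, max_levels=30):
        # Grow equidistant isocurves from the seed and keep the level whose
        # connected region has the highest visibility.
        img = image.astype(float)
        best_region, best_vis = None, -np.inf
        for depth in range(step, max_levels * step + 1, step):
            mask = img <= img[seed] + depth          # region contained by the isocurve
            labels, _ = ndimage.label(mask)
            region = labels == labels[seed]          # connected subset holding the seed
            if region.sum() > max_area:              # areas above the predetermined value not allowed
                break
            vis = visibility_fn(region)
            if vis > best_vis:
                best_region, best_vis = region, vis
        return best_region, best_vis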
In particular, growing by use of watershed methods may be applied in the present invention. The watershed algorithm was introduced for the purpose of segmentation by Lantuejoul and Beucher. The idea of watershed is drawn by considering an image as a topographic surface. The image intensity (the grey level) is considered as an altitude with this point of view. A regional minimum is a connected plateau from which it is impossible to reach a point of lower grey level by an always-descending path. As the image surface is immersed, some of the flood areas (catchment basins) will tend to merge. When two or more different flood areas touch, infinitely tall dams (watershed lines) are constructed between them. When finished, the resulting networks of dams define the watershed of the image. In other words, the watershed lines partition the image into nonintersecting patches, called catchment basins. Since each patch contains only one regional minimum, the number of patches is equal to the number of regional minima in the image. In a preferred embodiment the pixel with the minimum value which is closest to the centre of mass of the region becomes the origin for the growing algorithm.
An example of the process is shown in the flow chart of Figure 7. The processing steps below refer to the 1-D example of Figure 8.
Steps 1, 3, 5 and 8: Find the minimum unprocessed pixel, and include neighbour pixels with the same value.

Borders are not touching other regions.

Steps 2 and 6:

Include neighbour pixels not deviating more than the tolerance from the starting pixel. Borders are not touching other regions, so assign a new region.

Steps 4 and 9:

Include neighbour pixels not deviating more than the tolerance from the starting pixel. Borders are touching other regions, so enlarge those.

Steps 7, 10, 11, 12 and 13:

Find the minimum unprocessed pixel, and include neighbour pixels with the same value.

Borders are touching other regions, so enlarge those.
The sensitivity of the watershed algorithm may be adjusted by modifying the tolerance level, which makes it possible to exclude basins with an insignificant depth.
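For illustration, a tolerance-limited watershed may be obtained with standard image-analysis routines as sketched below; the h-minima transform plays the role of the tolerance described above:

    from scipy import ndimage
    from skimage.morphology import h_minima
    from skimage.segmentation import watershed

    def catchment_basins(image, tolerance=1):
        # Ignore regional minima shallower than the tolerance, so basins of
        # insignificant depth do not yield separate regions, then flood from
        # the remaining minima to obtain the catchment basins.
        minima = h_minima(image, tolerance)
        markers, n_basins = ndimage.label(minima)
        return watershed(image, markers), n_basins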
After having established the periphery of the candidate lesion area, the area may be filled, for example by simply performing a flood fill from the starting point to the periphery.
Validating subsets
The subsets may be validated before being corrected with respect to the background. By validation is meant that each subset is subjected to a validation step to determine whether the candidate area should classify as a candidate lesion area before assigning a prior probability. The validation is preferably carried out by a feature different from the growing feature(s).
In one embodiment the validation step includes calculating the visibility of the candidate lesion area.
Background variation
One of the major problems in the detection of lesions in fundus images is the heavily varying backgrounds in which lesions are to be found. Some fundus areas have an almost constant background while some areas are extremely varying, e.g. areas with a visible nerve fiber layer or choroidal structures. According to one aspect of the invention, it has been found to be an advantage to correct, for example normalize, the visibility feature values obtained for each candidate lesion area according to a robustly estimated background variation. The normalization procedure may be carried out in at least two steps, by
estimating at least one subset of the image, whereby each subset is a candidate lesion area having a visibility, and after having assigned visibility features to the candidate lesion area
estimating the background variation, and correcting the visibility of the candidate lesion area.
However, in many embodiments the estimation of the subsets and estimation of the background variation is conducted in one step for each subset.
Background variation may be estimated by any suitable measure. Accordingly, the background variation may be selected from the spatial and/or distributional properties of the original image, or any transformation of this, such as a gradient image, a curvature image or a Laplace image. The spatial properties may for example be based on a Fourier transformation, cooccurrence matrix, and fractal dimension, and the distributional properties may be moments such as mean, variance, skewness or kurtosis.
The lesions may for example be described by a visibility feature as discussed above, which is based on the orientation weighted lesion border gradient observations, and in this embodiment it has been shown to be an advantage to normalize the lesion's visibility feature with a mean and standard deviation estimate of the background gradient.
Thus, in a preferred embodiment the background variation is estimated by sequential identification of outliers, for example by
c1) estimating the mean and standard deviation of the gradient magnitude pixels of an area defined surrounding the candidate lesion area, and determining a lower threshold and an upper threshold for the gradient magnitude pixels,
c2) iteratively removing an outlying gradient magnitude pixel below the lower threshold or above the upper threshold, re-estimating the mean and standard deviation of the remaining gradient magnitude pixels, and determining a second lower and a second upper threshold for the gradient magnitude pixels, until no outlying gradient magnitude pixels are found,
c3) estimating the background variation from the mean and standard deviation estimated in c2).
In this embodiment the upper and lower thresholds are determined as a constant multiplied with the standard deviation, for example as the standard deviation multiplied with at least 2, such as at least 3, such as at least 4, such as at least 5 or such as at least 6. It is preferred that at most one pixel is removed in each iteration of step c2).
The area defined surrounding the candidate lesion area may include or exclude the candidate lesion area itself. In a preferred embodiment the gradient magnitude pixels in step c1) include pixels from the candidate lesion area. The area surrounding the candidate area is normally selected to be in the range of from 0.25 to 1.0 of the expected optic nerve head area, such as from 0.5 to 1.0 of the expected optic nerve head area, such as from 0.6 to 1.0 of the expected optic nerve head area. Normally such an area corresponds to a number of pixels in the range of from 100 to 100,000 pixels, such as in the range of from 400 to 64,000 pixels, such as in the range of from 1000 to 50,000 pixels, such as in the range of from 5,000 to 25,000 pixels.
A preferred method for estimating the background variation is described in the following, referring to the flow chart of Figure 11:
The first step of the normalization is to estimate the background gradient of the lesion to correct. This estimation is done by an initial collection of the pixels within a given radius from the lesion centre of mass. The number of pixels is set in accordance with the resolution of the image assessed. For most purposes the pixels are collected within a radius of from 50 to 100 pixels, see for example Figure 9.
Crossing vessels and/or other lesions could influence a gradient estimate of the background, which calls for a robust estimation of the background gradient. The background region in the gradient image is shown in Figure 10, from which the influence of crossing vessels and/or other lesions is clear.
An example of robust estimation is to continuously remove outliers. The gradient pixels in the defined background region are collected in an array, which is sorted according to their values. The mean and standard deviation of this array are then calculated. The value at each end of the array is then compared to the calculated mean and standard deviation, and in case the most deviating of the two is an outlier it is removed from the array.
After having removed that observation, the mean and standard deviation are recalculated, and the ends are checked again. This trimming continues until no more outliers are found. In this implementation an outlier is defined as a value deviating more than two standard deviations from the mean. After having estimated the robust mean and standard deviation of the lesion background, the lesion visibility may be normalized using the standard formula:
v' = (v − μ_gradient) / σ_gradient
Other robust approaches include filtering the image before collecting the intensities, or using robust estimators such as the median instead of the mean and the mean absolute deviation instead of the standard deviation.
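As an illustration, the iterative trimming and the subsequent normalization may be sketched as follows in Python (an informal sketch under the assumptions above: the background gradient magnitudes are supplied as a one-dimensional array, an outlier deviates more than two standard deviations from the mean, and at most one pixel is removed per iteration; the function names are illustrative):

import numpy as np

def robust_background_stats(gradients, n_sigma=2.0):
    # Sort the collected background gradient magnitudes; only the two ends
    # of the sorted array can hold the most deviating value.
    values = np.sort(np.asarray(gradients, dtype=float))
    while values.size > 2:
        mean, std = values.mean(), values.std()
        dev_low = mean - values[0]
        dev_high = values[-1] - mean
        if max(dev_low, dev_high) <= n_sigma * std:
            break  # no outliers left
        # Remove at most one pixel per iteration: the most deviating end.
        values = values[1:] if dev_low >= dev_high else values[:-1]
    return values.mean(), values.std()

def normalized_visibility(v, gradients):
    # Normalize the lesion visibility with the robust background estimate.
    mu, sigma = robust_background_stats(gradients)
    return (v - mu) / sigma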
Once the background variation has been estimated, it is possible either to correct the visibility of the candidate lesion area with the background variation and compare the corrected visibility with a predetermined visibility threshold for lesions in that area, or to correct a predetermined visibility threshold with the background variation and compare the visibility of the candidate lesion area with the corrected predetermined visibility threshold. By either of these steps it is possible to assign a local threshold for lesions, thereby increasing the specificity as well as the sensitivity of the methods for assessing the presence and/or absence of a lesion.
The steps of the methods may be conducted sequentially or in parallel for all subsets.
Some of the naturally occurring structures of the image may influence the assessment of lesions in a disadvantageous manner. Such structures are for example the vessels and the optic nerve head of a fundus image, since these structures present dark/bright areas in the image. Therefore, some adjustment with respect to these structures is preferred.
Adjustment with respect to vessels
Various methods are known by which the vascular system may be isolated from the rest of the image content.
One method for tracking vessels makes use of the fact that the vessels are linear in a local neighbourhood, applying filter matrices with different orientations. The localisation and orientation of such line elements may be determined using a template matching approach (sometimes referred to as matched filters).
Other methods for tracking vessels known to the person skilled in the art may be found in
Subhasis Chaudhuri et al, "Detection of Blood Vessels in Retinal Images Using Two- Dimensional Matched Filters", IEEE Transactions on Medical Imaging, Vol. 8, No. 3, September 1989.
Tolias Y A et al: "A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering", IEEE Transactions on Medical Imaging, April 1998, IEEE, USA, vol. 17, No. 2, pages 263-273, ISSN: 0278-0062.
Akita et al: "A computer method of understanding ocular fundus images" Pattern Recognition, 1982, UK, vol. 15, No. 6, pages 431-443, ISSN: 0031-3203 chapter 4.
A preferred method for tracking vessels is by tracking individual vessels from starting points representative for vessels, and iteratively growing the vessel network of the retina. A preferred embodiment hereof is described in PCT patent application No. PCT/DK02/00662.
In some of the embodiments according to the invention, it is preferred that the estimation of starting points and/or estimation of subsets is adjusted with respect to vessels appearing in the image. For many of these embodiments it is even more preferred that the estimation of candidate lesion areas is preceded by detection of vessels in the image.
In one embodiment adjustment of starting points means that starting points located in vessels are removed from the plurality of starting points representative for a lesion. In another embodiment subsets of the image having at least a portion of said subset located in a vessel are rejected as candidate lesion areas. Yet another method for adjusting with respect to the vessels is, having detected the vessels of the image, to mask the vessels appearing in the image before establishing starting points. The vessels may be masked by any suitable method, for example by masking a number of pixels along the vessel, such as a number in the range of from 1 to 10 pixels.
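A sketch of the starting-point adjustment (illustrative only; the dilation width corresponds to the 1 to 10 pixel masking mentioned above, and the names are ours):

import numpy as np
from scipy.ndimage import binary_dilation

def filter_starting_points(points, vessel_mask, margin=3):
    # Dilate the tracked-vessel mask by a few pixels, then discard any
    # starting point that falls on the dilated mask.
    dilated = binary_dilation(vessel_mask, iterations=margin)
    return [(r, c) for r, c in points if not dilated[r, c]]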
Having identified the blood vessels in the image, it may be desirable to distinguish between veins and arteries among the blood vessels. This can be important, for example in the diagnosis of venous beading and focal arteriolar narrowing.
The vascular system observed in ocular fundus images is by nature a 2-dimensional projection of a 3-dimensional structure. It is in principle quite difficult to distinguish veins from arteries solely by looking at isolated vessel segments. However, it has been discovered that effective separation can be achieved by making use of the fact that, individually, the artery structure and the vein structure are each a perfect tree (i.e., there is one unique path along the vessels from the heart to each capillary and back).
On the retina, the artery and vein structures are each surface filling, so that all tissue is either supplied or drained by specific arteries or veins, respectively.
A method for distinguishing veins from arteries is described in WO 00/65982, which is based on the realisation that crossings of vessel segments are, for practical purposes, always between a vein and an artery (i.e. crossings between arteries and arteries or between veins and veins are, for practical purposes, non-existent).
Masking optic nerve head
Another structure capable of interfering with the assessment of lesions is the optic nerve head. As opposed to vessels, the optic nerve head is not necessarily present in all images, depending on the region acquired by the camera or CCD. Thus, in a preferred method the presence or absence of the optic nerve head area is assessed by a robust method before assessing the lesions. Such a method is for example described in PCT patent application No. PCT/DK02/00663.
In some of the embodiments according to the invention, it is preferred that the estimation of starting points and/or estimation of subsets is adjusted with respect to the optic nerve head appearing in the image. For many of these embodiments, it is even more preferred that the estimation of candidate lesion areas is preceded by detection of the optic nerve head in the image.
In one embodiment adjustment of starting points means that starting points located in the optic nerve head are removed from the plurality of starting points representative for a lesion. In another embodiment subsets of the image having at least a portion of said subset located in the optic nerve head are rejected as a candidate lesion area.
Yet another method for adjusting with respect to the optic nerve head is, having detected the optic nerve head of the image, to mask the optic nerve head appearing in the image before establishing starting points. The optic nerve head may be masked by any suitable method, for example by masking a number of pixels around the optic nerve head, such as a number corresponding to a constant multiplied with the diameter of the optic nerve head, optionally an expected diameter of the optic nerve head, said constant being in the range of from 1.1 to 2.0, preferably about 1.5.
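A minimal sketch of such a mask, assuming the ONH centre and (possibly expected) diameter have been detected, with the preferred constant of about 1.5:

import numpy as np

def optic_nerve_head_mask(image_shape, centre, onh_diameter, constant=1.5):
    # True inside the circular region to be masked around the ONH.
    rows, cols = np.ogrid[:image_shape[0], :image_shape[1]]
    radius = constant * onh_diameter / 2.0
    return (rows - centre[0]) ** 2 + (cols - centre[1]) ** 2 <= radius ** 2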
In a further embodiment the method according to the present invention includes weighting the visibility in relation to the local intensity variation around the lesion, in order to reduce false positive lesions due to for example nerve fibre layers, untracked vessels and reflections in the vitreous body. A common feature of these false positives is that the local intensity variation around the lesion is relatively large, contrary to the majority of true lesions, which are located in homogeneous areas.
When two grown lesions are overlapping, one must be embedded in the other, as the iso-intensity curves defining the edges of the lesions cannot cross. In this case, the background may be defined slightly differently, in order to avoid that the large lesion is interpreted as background when evaluating the smaller interior lesion.
When considering overlapping lesions, the "foreground" may be the entire connected component of positive visibility pixels in the lesion image. The background will be defined as above, but relative to this foreground region. Thus, all lesions that are overlapping will have the same background region. The principle is illustrated in Figure 15.
If the background is not defined in this way, regions grown around the fovea and in large lesions may be misclassified, as these usually have overlapping grown regions.
Therefore, it is preferred that the visibility is weighted with a measure of the homogeneity of the local background. The local or immediate background may be defined as the band of pixels that are more than B_in and at most B_in + B_out pixels from the lesion. The distance between a point and the lesion is defined as the smallest distance between the point and a pixel within the lesion. Thus B_out is the width of the background band around the lesion, and B_in is the width of the band separating the lesion and the background. The principle is illustrated by the front-page figure and in Figure 14. The parameters B_in and B_out should be scaled with the image scale.
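A minimal sketch of this band, assuming the lesion is given as a boolean mask and distances are measured in pixels:

import numpy as np
from scipy.ndimage import distance_transform_edt

def background_band(lesion_mask, b_in, b_out):
    # Distance from every pixel to the nearest lesion pixel; the band
    # consists of pixels more than b_in and at most b_in + b_out away.
    distance = distance_transform_edt(~lesion_mask)
    return (distance > b_in) & (distance <= b_in + b_out)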
Pixels may be excluded from the background if they are located either

1. on a tracked vessel,

2. outside the ROI, or

3. closer than B_in to another lesion which has visibility more than T·v, where v is the visibility of the current lesion and T is a tolerance parameter.
The rationale behind excluding pixels due to the first and second criteria should be clear. The third criterion is employed to avoid that true lesions located close together influence each other's backgrounds, and the parameter T may be used to govern the tolerance of this restriction. It has been found that setting T = 0 is significantly better than not using the criterion at all (corresponding to T = ∞); by choosing a value of T around 1.0 it is avoided that lesions with small visibility are removed from the background band, which seems to be most sensible in practice.
In one embodiment the ratio of the mean green channel intensity in the background and in the lesion may be used to discriminate true and false lesions. For example, a fixed threshold seems most appropriate,

μ_green,out / μ_green,in > IR_thresh

where IR_thresh preferably is less than 1.1, such as between 1.01 and 1.09, preferably between 1.04 and 1.08, to discriminate a true lesion from a false lesion, a false lesion having a mean intensity ratio below IR_thresh. Here μ_green,out and μ_green,in are the means of the green channel in the immediate background and in the lesion, respectively.
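A sketch of this test (illustrative only; the threshold value is an assumption picked from the preferred 1.04 to 1.08 range quoted above):

import numpy as np

IR_THRESH = 1.06  # assumed value within the preferred range

def passes_intensity_ratio(green_channel, lesion_mask, background_mask):
    mu_out = green_channel[background_mask].mean()
    mu_in = green_channel[lesion_mask].mean()
    # A false lesion has a mean intensity ratio below IR_THRESH.
    return (mu_out / mu_in) > IR_THRESH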
In another embodiment the variance measure of a part of the image is estimated by
c4) defining a band of pixels of a predetermined width at a predetermined distance from the candidate lesion area,
c5) estimating the mean and standard deviation of the intensity of the band, and
c6) estimating the variance measure of a part of the image from the mean and standard deviation estimated in c5).
This may be exemplified by the ratio of the standard deviation in the background and in the lesion being used to detect typical false positive lesions, namely those located in vessels and near reflections. It has been found that a variance-weighted visibility measure is indeed a useful approach,

[weighting formula reproduced as an image in the source]

Here v is the usual normalised visibility measure, and σ²_pol,in respectively σ²_pol,out is the variance of the intensities in the poly-smoothed image inside the lesion and in the background, respectively.
The variance-weighted visibility measure is then compared with the predetermined visibility threshold as described above. In a preferred embodiment it is the corrected visibility that is weighted as described above, or the weighted visibility is compared with the corrected threshold.
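As an illustration only: the exact weighting formula is reproduced as an image in the source, so the ratio below is an assumption consistent with the surrounding text (a large background variance, typical of false positives near vessels and reflections, should lower the weighted visibility):

import numpy as np

def variance_weighted_visibility(v, poly_smoothed, lesion_mask, background_mask):
    # Assumed weighting: scale the normalised visibility v by the ratio of
    # the intensity variances inside the lesion and in the background.
    var_in = poly_smoothed[lesion_mask].var()
    var_out = poly_smoothed[background_mask].var()
    return v * var_in / var_out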
Applications
In the following, examples of various applications of the method according to the invention are discussed.
Once the presence or absence of lesions has been assessed, the information regarding the lesion may be used for various purposes.
Accordingly, the present invention further relates to a method for diagnosing the presence or absence of a disease in an individual from a fundus image of at least one eye of said individual comprising
- assessing the presence or absence of at least one lesion by the method as defined above,
- grading the fundus image with respect to number and/or size and/or placement of lesions,
diagnosing the presence or absence of the disease.
In particular this method relates to the diagnosis and prognosis of diabetic retinopathy. Diabetic retinopathy is a condition wherein the person has at least one of the following symptoms:
• Hard or soft exudates, microaneurysms, haemorrhages

• Macular edema

• Venous beading, venous loops
• Intraretinal microvascular abnormalities (IRMA), new vessels on disk (NVD), new vessels elsewhere (NVE)
• Pre-retinal haemorrhage, fibrosis/retinal traction, retinal detachment
Furthermore, the invention may be used to quantify the probability of a person having diabetic retinopathy. The presence or absence of true lesions has an impact on the probability of diabetic retinopathy. Accordingly, the present invention also relates to a method for assessing the probability of a diagnosis of diabetic retinopathy in an individual from a fundus image of at least one eye of said individual, comprising
a) estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,
b) establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) correcting the probability of the candidate lesion area with the structural information from said individual and comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the structural information from said individual and comparing the probability of the candidate lesion area with the corrected predetermined probability threshold, correcting with structural information only,
d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) optionally repeating steps a) to d) until all candidate lesion areas have been classified,
obtaining a diagnosis of diabetic retinopathy having an initial probability, and correcting the initial probability of the diagnosis with the clinical information, thereby obtaining the probability of the diagnosis diabetic retinopathy.

In another aspect the invention relates to a method for classifying a fundus image comprising
- assessing the presence or absence of at least one lesion by the method as de- fined above,
- grading the fundus image with respect to number and/or size and/or placement of lesions,
- classifying the fundus image into at least two classes.
Normally several classes are used, wherein the images are graded both with respect to the number of lesions and to the distance of the lesions to the fovea.
System
In another aspect the invention further relates to a system for assessing the presence or absence of lesions in a fundus image. Thus, the system according to the invention may be any system capable of conducting the method as described above as well as any combinations thereof within the scope of the invention. Accordingly, the system may include algorithms to perform any of the methods described above.
A graphical user interface module may operate in conjunction with a display screen of a display monitor. The graphical user interface may be implemented as part of the processing system to receive input data and commands from a conventional keyboard and mouse through an interface and display results on a display monitor. For simplicity of the explanation, many components of a conventional computer system have not been discussed, such as address buffers, memory buffers, and other standard control circuits, because these elements are well known in the art and a detailed description thereof is not necessary for understanding the present invention.
Pre-acquired image data can be fed directly into the processing system through a network interface and stored locally on a mass storage device and/or in a memory. Furthermore, image data may also be supplied over a network, through a portable mass storage medium such as a removable hard disk, optical disks, tape drives, or any other type of data transfer and/or storage devices which are known in the art.
One skilled in the art will recognize that a parallel computer platform having multiple processors is also a suitable hardware platform for use with a system according to the present invention. Such a configuration may include, but not be limited to, parallel machines and workstations with multiple processors. The processing system can be a single computer, or several computers can be connected through a communications network to create a logical processing system.
Any of the algorithms of the systems described above may be adapted to the various variations of the methods described above.
The present system allows the grader, that is the person normally grading the images, to identify the lesions more rapidly and securely. Also, the present system allows an automatic detection of lesions and other pathologies of the retina without interference from the vessels, again as an aiding tool for the traditional grader.
By use of the present system it is also possible to arrange for recordation of the images at one location and examination of them at another location. For example, the images may be recorded by any optician or physician or elsewhere and be transported to the examining specialist, either as photos or the like or on digital media. Accordingly, by use of the present system, decentralised centres for recording the images could be realised while maintaining fewer expert graders.
Furthermore, in addition to the communication of images and medical information between persons involved in the procedure, the network may carry data signals including control or image adjustment signals by which the expert examining the images at the examining unit directly controls the image acquisition occurring at the recordation localisation, i.e. the acquisition unit. In particular, such command signals as zoom magnification, steering adjustments, and wavelength of field illumination may be selectively varied remotely to achieve the desired imaging effect. Thus, questionable tissue structures requiring greater magnification or a different perspective for their elucidation may be quickly resolved without ambiguity by varying such control parameters. Furthermore, by switching illumination wavelengths, views may be selectively taken to represent different layers of tissue, or to accentuate imaging of the vasculature and blood flow characteristics. In addition, where a specialized study such as fluorescence imaging is undertaken, the control signals may include time-varying signals to initiate stimulation with certain wavelengths of light, to initiate imaging at certain times after stimulation or delivery of dye or drugs, or other such precisely controlled imaging protocols. The digital data signals for these operations may be interfaced to the ophthalmic equipment in a relatively straightforward fashion, provided such equipment already has initiating switches or internal digital circuitry for controlling the particular parameters involved, or is capable of readily adapting electric controls to such control parameters as system focus, illumination and the like.
Also, the examining expert could be able to exert some treatment in the same remote manner. It will be understood that the imaging and ophthalmic treatment instrumentation in this case will generally include a steering and stabilization system which maintains both instruments in alignment and stabilized on the structures appearing in the field of view. However, in view of the small but non-negligible time delays still involved between image acquisition and initiation of diagnostic or treatment activity at the examination site, the invention contemplates that the system control further includes image identification and correlation software which allows the ophthalmologist at the examination site to identify particular positions in the retinal field of view, such as pinpointing particular vessels or tissue structures, and that the image acquisition computer includes image recognition software which enables it to identify patterns in the video frames and correlate the identified position with each image frame as it is acquired at the acquisition site. For example, the image recognition software may lock onto a pattern of retinal vessels. Thus, despite the presence of saccades and other abrupt eye movements of the small retinal field which may occur over relatively brief time intervals, the ophthalmic instrumentation is aimed at the identified site in the field of view and remote treatment is achieved.
In addition to the foregoing operation, the invention further contemplates that the images provided by acquisition unit are processed for photogrammetric analysis of tissue features and optionally blood flow characteristics. This may be accomplished as follows. An image acquired at the recordation unit is sent to an examination unit, where it is displayed on the screen. As indicated schematically in the figure, such image may include a network of blood vessels having various diameters and lengths. These vessels include both arterial and venous capillaries constituting the blood supply and return network. At the examination unit, the workstation may be equipped with a photogrammetric measurement program which for example may enable the technician to place a cursor on an imaged vessel, and moving the cursor along the vessel while clicking, have the software automatically determine the width of the vessel and the subvessels to which it is connected, as well as the coordinates thereof.
The software for noting coordinates from the pixel positions and linking displayed features in a record, as well as submodules which determine vessel capacities and the like, are straightforward and readily built up from photogrammetric program techniques. Workstation protocols may also be implemented to automatically map the vasculature as described above, or to compare two images taken at historically different times and identify or annotate the changes which have occurred, highlighting for the operator features such as vessel erosion, tissue which has changed colour, or other differences. In addition, a graphical user interface allows the specialist to type in diagnostic indications linked to the image, or to a particular feature appearing at a location in the image, so that the image or a processed version of it becomes more useful.
Thus, a very precise and well-annotated medical record may be readily compiled and may be compared to a previously taken view for detailed evidence of changes over a period of time, or may be compared, for example, to immediately preceding angiographic views in order to assess the actual degree of blood flow occurring therein. As with the ophthalmologist's note pad entries at examination unit, the measurement entries at examination unit become an annotated image record and are stored in the central library as part of the patient's record.
Unlike a simple medical record system, the present invention changes the dynamics of patient access to care, and the efficiency of delivery of ophthalmic expertise in a manner that solves an enormous current health care dilemma, namely, the obstacle to proper universal screening for diabetic retinopathy. A basic embodiment of the invention being thus disclosed and described, further variations and modifications will occur to those skilled in the art, and all such variations and modifications are encompassed within the scope of the invention as defined in the claims appended hereto.
Data carrier
The invention further includes a data carrier, such as a CD-ROM, said data carrier including algorithms to perform any of the methods described above, where the data carrier is operably connected to a data system.
Examples
Example 1
Estimation of the visibility distribution
400 fundus images were graded by expert graders. These images were used to formulate the two visibility distributions f(v | 0) and f(v | 1). It is possible to estimate these using the data, for which all true lesions have been marked by a panel of ophthalmic experts. Histograms of the visibility distributions for true and false lesions, for both bright and dark lesions, are displayed in Figures 16, 17, 18 and 19. The overlaid Gamma distributions are fitted by the maximum likelihood method, and fit the data reasonably well.
Recall that the Gamma density with shape parameter α > 0 and inverse-scale parameter λ > 0 is given by

f_Γ(x; α, λ) = Γ(α)^(−1) λ^α x^(α−1) e^(−λx), x > 0,

where Γ(α) is the Gamma function. The fitted visibility densities are thus given by this expression, with the parameters summarised in Table 2.

Table 2. Estimated parameters of the Gamma distributions fitted to the visibility values for dark and bright lesions in the data (Lesion 1,0,6,15).
Type     State   α       λ
Dark     False   0.962   15.4
Dark     True    0.815   1.04
Bright   False   1.21    7.41
Bright   True    1.24    1.09
The fitted distributions are based on the 400 images.
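For illustration, the fitted densities of Table 2 can be evaluated with SciPy's Gamma distribution, whose scale parameter is the inverse of λ:

from scipy.stats import gamma

# (alpha, lambda) pairs from Table 2.
TABLE_2 = {
    ("dark", False): (0.962, 15.4),
    ("dark", True): (0.815, 1.04),
    ("bright", False): (1.21, 7.41),
    ("bright", True): (1.24, 1.09),
}

def visibility_density(v, lesion_type, is_true_lesion):
    # Evaluates f(v|0) (is_true_lesion=False) or f(v|1) (is_true_lesion=True).
    alpha, lam = TABLE_2[(lesion_type, is_true_lesion)]
    return gamma.pdf(v, a=alpha, scale=1.0 / lam)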
Example 2
Derivation of the expression for the posterior probability
In the spatial interaction model (5), a first precondition is that the probabilities in the model must sum to one, and therefore q_0 is given by

1 = Σ_{a_i ∈ {0,1}, i = 0, ..., k} P(A = a)

[The resulting expression for q_0 is reproduced as an image in the source.]
In the last equality, the fact that [identity reproduced as an image in the source] is used. To calculate the posterior odds of A_0 = 1, one proceeds as follows, in terms of the visibility densities f(v_i | a_i):
[Derivation reproduced as images in the source.]
Similarly, the following is obtained: [expression reproduced as an image in the source].
The final expression (5) is now obtained by noting that

P(A_0 = 1 | v) / P(A_0 = 0 | v) = P(A_0 = 1, v) / P(A_0 = 0, v),

and inserting the two expressions above.
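As a sketch of how such posterior odds translate into a posterior probability (illustrative only; the full expression (5) also involves the states of the neighbouring lesions, which are omitted here):

def posterior_probability(prior, f_v_true, f_v_false):
    # prior: prior probability that the candidate is a true lesion;
    # f_v_true, f_v_false: the visibility densities f(v|1) and f(v|0).
    odds = (prior / (1.0 - prior)) * (f_v_true / f_v_false)
    return odds / (1.0 + odds)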
Example 3
Data: 199 macula images from the data material, 1240 lesions.
These images have been annotated by a consensus of 6 expert graders, who have graded as bright lesions the exudates, the cotton wool spots and the drusen.
This data material is used in this and the following examples to show models for estimating the lesions.
A circle of a given DD radius centred on the fovea is considered, inside which the probability of a lesion being true is Ratio times greater than outside (see Figure 6). For the bright lesions, we consider a probability of 0.0021 of being a true lesion outside the circle; this corresponds to the number of true lesions divided by the number of seed points in the 400 images. Inside the circle we then consider Ratio times this probability. Different values of this Ratio have been tested.

Results
The following study has been done with a high probability circle of 1-DD radius (η = 1).
[Results table reproduced as an image in the source.]
The best value seems to be around 4 or 5.
Usually the false positives are located far from the fovea, next to the 2 main vessels. Therefore, the more weight we put inside the high probability area, the fewer false positives we get outside.
Then, the size of the diameter of the high probability circle has been tested. The following study has been done with a Ratio of 5 (which appeared to be the best value).
[Results table reproduced as an image in the source.]
This first model, when combined with a visibility measure, has shown an improvement of 3.5% on a lesion level with the best parameters (a high probability circle of 1.4-DD radius and a Ratio of 5). On a qualitative level, this model detects more lesions close to the fovea but misses some located just outside the high probability circle.

Example 4
2nd model using gaussians
The data material described in Example 3 was used.

Principle
Prior Probability = [1 + R exp(−‖Lesion − Fovea‖² / (2(ηDD)²))] · [1 − exp(−‖Lesion − ONH‖² / (2(ρDD)²))] · p(type)

with p(type) = 0.0021 if type = bright, and p(type) = 0.0093 if type = dark.

The first term of this model represents the concentration of true lesions around the fovea. The second one represents the ONH. Next to the fovea, this prior probability is close to (1 + R)·p(type), and close to 0 next to the ONH.
According to the 1st model, we should have (1 + R)·p(type) = 5·p(type) ⇔ R = 4 if we consider a lesion close to the fovea, since 5 corresponds to the best value of the Ratio on the AUC criterion. Furthermore, ρ = 1.5 and η = 1.4.
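A sketch of this prior, with the parameter values quoted above (R = 4, η = 1.4, ρ = 1.5; distances and the disc diameter DD in pixels):

import numpy as np

P_TYPE = {"bright": 0.0021, "dark": 0.0093}

def prior_probability(lesion, fovea, onh, dd, lesion_type,
                      R=4.0, eta=1.4, rho=1.5):
    d_fovea = np.linalg.norm(np.subtract(lesion, fovea))
    d_onh = np.linalg.norm(np.subtract(lesion, onh))
    fovea_term = 1.0 + R * np.exp(-d_fovea**2 / (2.0 * (eta * dd)**2))
    onh_term = 1.0 - np.exp(-d_onh**2 / (2.0 * (rho * dd)**2))
    return fovea_term * onh_term * P_TYPE[lesion_type]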
Results
[Results table reproduced as an image in the source.]
This model is more robust to the ONH detection.

Example 5
3rd model based on the diffusion model.
The data material described in Example 3 was used. In this example, the EDD is used instead of the DD for all the computations. By using it, the results (see 2.2) are as good, and it is more robust to the ONH detection.
Prior Probability = [1 + R · FuncFovea(Lesion, Fovea, EDD, η)] · [1 − (ONH term, reproduced as an image in the source)] · p(type)

with p(type) = 0.0021 if type = bright, and p(type) = 0.0093 if type = dark.
Several models following the above structure were tested:
These functions must be equal to 1 at the fovea and converge towards 0 for a lesion far from the fovea.
1) FuncFovea(Lesion, Fovea, EDD, η) = exp(−‖Lesion − Fovea‖² / (2(ηEDD)²))

[The further variants tested are reproduced as an image in the source.]
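The first variant may be sketched as follows (EDD, the expected disc diameter, in pixels):

import numpy as np

def func_fovea(lesion, fovea, edd, eta=1.4):
    # Equals 1 at the fovea and converges towards 0 far from it.
    d = np.linalg.norm(np.subtract(lesion, fovea))
    return np.exp(-d**2 / (2.0 * (eta * edd)**2))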
Improvement of 5.76% in comparison with the usual visibility measure
Example 6
Testing on other sets of data
Steno data at an image level, from Steno Hospital, Denmark: 309 images where the ONH and the fovea can be marked off.
[Results table reproduced as an image in the source.]
NB: g4-F means that a gaussian to the power 4 has been used around the fovea. g2-ONH means that a usual gaussian has been used around the ONH in the given model.
No relevant differences can be noticed as compared to the first data set.

Claims

1. A method for assessing the presence or absence of lesion(s) in a fundus image from an individual, comprising
a) estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,
b) establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) correcting the probability of the candidate lesion area with the information from said individual, comparing the corrected probability with a predetermined prob- ability threshold for lesions, or correcting a predetermined probability threshold with the information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold,
d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) optionally repeating steps a) to d) until all candidate lesion areas have been classified.
2. The method according to claim 1, wherein the image is presented on a medium selected from slides, paper photos or digital photos.
3. The method according to claim 1 or claim 2, wherein the image is a colour image.
4. The method according to any of the preceding claims, wherein the clinical information is one or more clinical information types selected from information about age of said individual, information about sex of said individual, information about diseases of said individual, and information about at least one clinical test of said individual.
5. The method according to any of the preceding claims, wherein the structural information is selected from regional information, lesion information, and vessel information.
6. The method according to claim 5, wherein the lesion information is selected from information about number of lesions in another fundus of the same individual, information about previous lesion(s) in the same and/or another fundus of said individual, information of other lesions in the same fundus image, information of a lesion in the same subset of another fundus image of the same fundus, and information about number of candidate lesion areas.
7. The method according to claim 5, wherein the regional information is the information about the region of the fundus image comprising the at least one subset of the image.
8. The method according to any of the preceding claims, wherein the green channel is used for assessing the presence or absence of lesion(s).
9. The method according to any of the preceding claims, wherein the subset(s) are estimated by establishing a plurality of starting points, said starting points being representative for lesions, and estimating each subset around a starting point.
10. The method according to any of the preceding claims, wherein the starting points are established in extrema of the image.
11. The method according to any of the preceding claims, wherein the subset of the image is a connected subset.
12. The method according to any of the preceding claims, wherein the subset of the image is estimated by filtering the image.
13. The method according to any of the preceding claims, wherein estimating the subset of the image comprises establishing the periphery of the subset.
14. The method according to any of the preceding claims, wherein the subset of the image is estimated by growing an area around the starting point.
15. The method according to claim 14, wherein q isocurves based on at least one growing feature of the area are grown around the starting point, q being an integer of at least 1, until the periphery of the candidate lesion area is established.
16. The method according to claim 15, wherein the probability of the area within the isocurves is estimated, and the isocurve having the highest probability establishes the periphery of the candidate lesion area.
17. The method according to any of claims 13-16, wherein the validation of the subset is conducted by a feature different from the growing feature(s).
18. The method according to any of the preceding claims, wherein the probability of an area is determined as a vector of features including intensity, visibility of the candidate lesion compared to the visibility of the vessels, visibility of the edge of the candidate lesion, colour information of the candidate lesion, variance measure of a part of the image and/or a variance measure of the image.
19. The method according to any of the preceding claims, wherein the subset of the image is estimated by active contour model.
20. The method according to any of the preceding claims 9-19, wherein the identification of starting point is adjusted with respect to vessels appearing in the image.
21. The method according to any of the preceding claims, wherein the estimation of subset is adjusted with respect to vessels appearing in the image.
22. The method according to any of the preceding claims, wherein the estimation of subset(s) is preceded by detection of vessels in the image.
23. The method according to any of the preceding claims 9-22, wherein starting points located in vessels are removed from the plurality of starting points representative of a lesion.
24. The method according to any of the preceding claims, wherein subsets of the image having at least a portion of said subset located in a vessel is rejected as a candidate lesion area.
25. The method according to any of the preceding claims 9-24, wherein vessels appearing in the image are masked before establishing starting points.
26. The method according to any of the preceding claims 9-25, wherein the identification of starting point is adjusted with respect to optic nerve head appearing in the image.
27. The method according to any of the preceding claims 9-26, wherein the estimation of subset is adjusted with respect to optic nerve head appearing in the image.
28. The method according to any of the preceding claims, wherein the estimation of subset(s) and/or identification of a starting point is preceded by detection of a region comprising the optic nerve head.
29. The method according to any of the preceding claims 1-28, wherein starting points located in the optic nerve head are removed from the plurality of starting points representative for a lesion.
30. The method according to any of the preceding claims 1-29, wherein subsets of the image having at least a portion of said subset located in the optic nerve head are rejected as a candidate lesion area.
31. The method according to any of the preceding claims 1-30, wherein the region of the optic nerve head is masked before estimating the subset and/or establishing the starting point.
32. The method according to any of the preceding claims, wherein the area surrounding the candidate lesion area comprises the candidate lesion area.
33. The method according to any of the preceding claims, wherein the area surrounding the candidate lesion area excludes the candidate lesion area.
34. The method according to any of the preceding claims, wherein the area surrounding the candidate area is selected to be in the range of from 0.25 to 1.0 of the expected optic nerve head area, such as from 0.5 to 1.0 of the expected optic nerve head area, such as from 0.6 to 1.0 of the expected optic nerve head area.
35. The method according to any of the preceding claims 7-34, wherein the information about region of the fundus image comprising the at least one subset of the image comprises information about distance from anatomical features in the fundus image to the subset.
36. The method according to claim 35, wherein the anatomical features are selected from fovea, optical nerve head and vessels, such as main arcades.
37. The method according to claim 36, wherein a function of distance to fovea is used for correcting in step c).
38. The method according to any of the preceding claims, wherein information about previous lesion(s) in the same or another fundus of said individual comprises information about previous lesion in substantially the same image subset of the fundus.
39. The method according to any of the preceding claims, wherein information about other lesions and/or candidate lesion(s) in the same fundus image comprises information about distance to at least one other lesion and/or candidate lesion(s) in the same fundus image.
40. The method according to any of the preceding claims, wherein information about other lesions and/or candidate lesion(s) in the same fundus image comprises information about numbers of other lesions and/or candidate lesion(s) in the same fundus image, such as numbers of other lesions and/or candidate lesion(s) within a predetermined region of the same fundus image.
41. The method according to any of the preceding claims, wherein information about other lesions and/or candidate lesion(s) in the same fundus image comprises information about other lesions and/or candidate lesion(s) within a predetermined distance from said subset.
42. The method according to any of the preceding claims, wherein information about other lesions and/or candidate lesion(s) in the same fundus image comprises information about the size of at least one other lesion and/or candidate lesion(s).
43. The method according to any of the preceding claims, wherein information about other lesions and/or candidate lesion(s) in the same fundus image consists of information about other lesions and/or candidate lesion(s) in a predetermined distance from fovea in the image.
44. The method according to any of the preceding claims, wherein information about other lesions in the same fundus image comprises information about type of lesion(s), such as light and dark lesions.
45. The method according to any of the preceding claims, wherein information about other lesions in the same fundus image comprises information about the probability of at least one other lesion.
46. The method according to any of the preceding claims, wherein information about at least one blood test of said individual comprises information about blood cholesterol level, blood glucose level, and HbA1c level.
47. The method according to any of the preceding claims, wherein information about disease in said individual comprises information about diseases selected from diabetes, atherosclerosis, and hypertension.
48. The method according to any of the preceding claims, wherein the background variation of the image in an area surrounding the candidate lesion area is estimated, and the probability of the candidate lesion area is further corrected with the background variation before the further corrected probability is compared with the predetermined probability threshold for lesions, or the predetermined probability threshold is further corrected with the background variation before comparing the probability of the candidate lesion area with the further corrected predetermined probability threshold.
49. The method according to claim 48, wherein the background variation is selected from the spatial and/or distributional properties of the original image, or any transformation of this, such as a gradient image, a curvature image or a Laplace image.
50. The method according to claim 48 or 49, wherein the background variation is estimated by
c1) estimating the mean and standard deviation of the gradient magnitude pixels of an area defined surrounding the candidate lesion area, and determining a lower threshold and an upper threshold for the gradient magnitude pixels,
c2) iteratively removing an out-lying gradient magnitude pixel below a lower threshold or above an upper threshold, and re-estimating the mean and standard deviation of the remaining gradient magnitude pixels, determining a second lower and a second upper threshold for the gradient magnitude pixels, until no outlying gradient magnitude pixels are found.
c3) estimating the background variation from the mean and standard deviation estimated in c2).
51. The method according to any of the preceding claims 18-50, wherein the variance measure of a part of the image is estimated by
c4) defining a band of pixels of a predetermined width at a predetermined distance from the candidate lesion area,

c5) estimating the mean and standard deviation of the intensity of the band, and
c6) estimating the variance measure of a part of image from the mean and standard deviation estimated in c5).
52. The method according to claim 50, wherein the upper and lower threshold is determined as a constant multiplied with the standard deviation.
53. The method according to claim 50 or 51, wherein the gradient magnitude pixels in step c1) include pixels from the candidate lesion area.
54. The method according to claim 50 or 51 , wherein at most one pixel is removed in step c2).
55. A method for diagnosing the presence or absence of a disease in an individual from a fundus image of at least one eye of said individual comprising
assessing the presence or absence of at least one lesion by the method as defined in any of the claims 1-54,
grading the fundus image with respect to number and/or size of lesions,
diagnosing the presence or absence of the disease.
56. The method according to claim 55, wherein the disease is diabetic retinopathy.
57. A method for assessing the probability of a diagnosis of diabetic retinopathy in an individual from a fundus image of at least one eye of said individual, comprising
a) estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,

b) establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) correcting the probability of the candidate lesion area with the structural information from said individual, comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the structural information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold, only correcting with structural information,
d) classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) optionally repeating steps a) to d) until all candidate lesion areas have been classified,
obtaining a diagnosis of diabetic retinopathy having an initial probability, correcting the initial probability of the diagnosis with the clinical information, thereby obtaining the probability of the diagnosis diabetic retinopathy.
58. The method according to claim 57, wherein the method comprises one or more of the features of claims 2-54.
59. A method for classifying a fundus image comprising
- assessing the presence or absence of at least one lesion by the method as defined in any of the claims 1-54,
- grading the fundus image with respect to number and/or size of lesions,
- classifying the fundus image into at least two classes.
60. A system for assessing the presence or absence of lesion(s) in a fundus image of an individual, comprising

a) an algorithm for estimating at least one subset of the image, whereby each subset is a candidate lesion area having a probability,
b) an algorithm for establishing information from said individual, said information comprising at least one information type selected from the following: clinical information and structural information,
c) an algorithm for correcting the probability of the candidate lesion area with the information from said individual, comparing the corrected probability with a predetermined probability threshold for lesions, or correcting a predetermined probability threshold with the information from said individual, comparing the probability of the candidate lesion area with the corrected predetermined probability threshold,
d) an algorithm for classifying the candidate lesion area detected in a) with respect to the threshold obtained in step c) as a lesion or not,
e) an algorithm for optionally repeating steps a) to d) until all candidate lesion areas have been classified.
61. A data carrier including algorithms to perform one or more of the methods as defined in any of claims 1-59.
PCT/DK2004/000188 2003-03-20 2004-03-19 Assessment of lesions in an image WO2004082453A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DKPA200300433 2003-03-20
DKPA200300433 2003-03-20
US46179303P 2003-04-11 2003-04-11
US60/461,793 2003-04-11

Publications (2)

Publication Number Publication Date
WO2004082453A2 true WO2004082453A2 (en) 2004-09-30
WO2004082453A3 WO2004082453A3 (en) 2005-08-11

Family

ID=33031173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DK2004/000188 WO2004082453A2 (en) 2003-03-20 2004-03-19 Assessment of lesions in an image

Country Status (1)

Country Link
WO (1) WO2004082453A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009045460A1 (en) * 2007-10-03 2009-04-09 Siemens Medical Solutions Usa, Inc. System and method for lesion detection using locally adjustable priors
GB2467840A (en) * 2009-02-12 2010-08-18 Univ Aberdeen Detecting disease in retinal images
EP2319393A1 (en) * 2008-07-31 2011-05-11 Canon Kabushiki Kaisha Eye diagnosis supporting apparatus, method therefor, program, and recording medium
CN103870838A (en) * 2014-03-05 2014-06-18 南京航空航天大学 Eye fundus image characteristics extraction method for diabetic retinopathy
WO2018138564A1 (en) * 2017-01-27 2018-08-02 Sigtuple Technologies Private Limited Method and system for detecting disorders in retinal images
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999004244A1 (en) * 1997-07-17 1999-01-28 Accumed International, Inc. Inspection system with specimen preprocessing
US5982943A (en) * 1992-09-14 1999-11-09 Startek Eng. Inc. Method for determining background or object pixel for digitizing image data
WO2000032106A1 (en) * 1998-07-02 2000-06-08 Wake Forest University Virtual endoscopy with improved image segmentation and lesion detection
US20020052551A1 (en) * 2000-08-23 2002-05-02 Sinclair Stephen H. Systems and methods for tele-ophthalmology
US20030053671A1 (en) * 2001-05-10 2003-03-20 Piet Dewaele Retrospective correction of inhomogeneities in radiographs

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982943A (en) * 1992-09-14 1999-11-09 Startek Eng. Inc. Method for determining background or object pixel for digitizing image data
WO1999004244A1 (en) * 1997-07-17 1999-01-28 Accumed International, Inc. Inspection system with specimen preprocessing
WO2000032106A1 (en) * 1998-07-02 2000-06-08 Wake Forest University Virtual endoscopy with improved image segmentation and lesion detection
US20020052551A1 (en) * 2000-08-23 2002-05-02 Sinclair Stephen H. Systems and methods for tele-ophthalmology
US20030053671A1 (en) * 2001-05-10 2003-03-20 Piet Dewaele Retrospective correction of inhomogeneities in radiographs

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CAN A ET AL: "RAPID AUTOMATED TRACING AND FEATURE EXTRACTION FROM RETINAL FUNDUS IMAGES USING DIRECT EXPLORATORY ALGORITHMS" IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 3, no. 2, June 1999 (1999-06), pages 125-138, XP000833587 ISSN: 1089-7771 *
MICHAEL GOLDBAUM ET AL.: "Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images" IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, PROCEEDINGS, vol. 3, 1996, pages 695-698, XP002274615 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009045460A1 (en) * 2007-10-03 2009-04-09 Siemens Medical Solutions Usa, Inc. System and method for lesion detection using locally adjustable priors
US7876943B2 (en) 2007-10-03 2011-01-25 Siemens Medical Solutions Usa, Inc. System and method for lesion detection using locally adjustable priors
EP2319393A1 (en) * 2008-07-31 2011-05-11 Canon Kabushiki Kaisha Eye diagnosis supporting apparatus, method therefor, program, and recording medium
EP2319393A4 (en) * 2008-07-31 2015-01-07 Canon Kk Eye diagnosis supporting apparatus, method therefor, program, and recording medium
GB2467840A (en) * 2009-02-12 2010-08-18 Univ Aberdeen Detecting disease in retinal images
GB2467840B (en) * 2009-02-12 2011-09-07 Univ Aberdeen Disease determination
CN103870838A (en) * 2014-03-05 2014-06-18 南京航空航天大学 Eye fundus image characteristics extraction method for diabetic retinopathy
WO2018138564A1 (en) * 2017-01-27 2018-08-02 Sigtuple Technologies Private Limited Method and system for detecting disorders in retinal images
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113425248B (en) * 2021-06-24 2024-03-08 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium

Also Published As

Publication number Publication date
WO2004082453A3 (en) 2005-08-11

Similar Documents

Publication Publication Date Title
US7583827B2 (en) Assessment of lesions in an image
WO2018116321A2 (en) Retinal fundus image processing method
EP3923190A1 (en) A system and method for evaluating a performance of explainability methods used with artificial neural networks
Phridviraj et al. A bi-directional Long Short-Term Memory-based Diabetic Retinopathy detection model using retinal fundus images
Naramala et al. Enhancing diabetic retinopathy detection through machine learning with restricted boltzmann machines
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
WO2004082453A2 (en) Assessment of lesions in an image
WO2003030073A1 (en) Quality measure
WO2003030075A1 (en) Detection of optic nerve head in a fundus image
Baskaran et al. Performance Analysis of Deep Learning based Segmentation of Retinal Lesions in Fundus Images
Niemeijer Automatic detection of diabetic retinopathy in digital fundus photographs
DK1444635T3 (en) Assessment of lesions in an image
WO2003030101A2 (en) Detection of vessels in an image
Patil et al. Screening and detection of diabetic retinopathy by using engineering concepts
Waly et al. Detection of Retinal Blood Vessels by using Gabor filter with Entropic threshold
Kilgannon A Machine Learning System for Glaucoma Detection using Inexpensive Machine Learning
Raju DETECTION OF DIABETIC RETINOPATHY USING IMAGE PROCESSING
Hakeem et al. Inception V3 and CNN Approach to Classify Diabetic Retinopathy Disease
Tavakoli et al. A comprehensive survey on computer-aided diagnostic systems in diabetic retinopathy screening
Halder et al. Enhancing and segmenting retinal vessels and measuring tortuosity using PSO optimization
Naeem Oleiwi Al-Mahdi CNN googlenet and alexnet architecture deep learning for diabetic retinopathy image processing and classification
Abualigah et al. Hybrid Classification Approach Utilizing DenseUNet+ for Diabetic Macular Edema Disorder Detection.
Kiran et al. Diabetics Recognition Using Retina Images
Romero Oraa Automatic analysis of retinal images to aid in the diagnosis and grading of diabetic retinopathy
Araújo Diabetic Retinopathy Grading In Color Eye Fundus Images

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase